Lightly

Latest version: v1.5.15

1.1.10

Documentation Updates and Miscellaneous

Documentation Updates
- Added two new tutorials to the docs.

Miscellaneous
- If a newer lightly version is available, the warning now also shows the currently installed version.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.9

Additional Support for Videos, Minor Bug Fixes, and Documentation Updates

Additional Video Formats
The `LightlyDataset` now works with `.mpg`, `.hevc`, `.m4v`, `.webm`, and `.mpeg` videos.
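
For instance, building a dataset from a directory of videos can look like the following minimal sketch (the directory path is a placeholder, and decoding videos requires the optional video dependencies such as PyAV):

```python
from lightly.data import LightlyDataset

# point the dataset at a folder containing video files;
# frames are read directly from the videos
dataset = LightlyDataset(input_dir="path/to/videos")
print(len(dataset))  # total number of frames across all videos
```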

Bug Fixes
- Replaced the `squeeze` operation with `flatten` in the model forward passes (see the example after this list). Thanks guarin for noticing and for the fix!
- Made `lightly` compatible with `pytorch-lightning>=1.3.0`.
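
To illustrate why the `squeeze` fix matters (our own example, not code from the release): with a batch of size one, `squeeze` removes the batch dimension along with the spatial ones, while `flatten` keeps it:

```python
import torch

# e.g., ResNet features after global average pooling, batch size 1
features = torch.randn(1, 512, 1, 1)

bad = features.squeeze()              # shape (512,) - batch dimension lost
good = features.flatten(start_dim=1)  # shape (1, 512) - batch dimension kept
print(bad.shape, good.shape)
```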

Documentation Updates

- The `lightly-magic` command is finally featured in the docs. Thanks pranavsinghps1!
- Big update on the docker docs.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.8

BYOL Model, Refactoring, and a New Tutorial for Active Learning

New Model: BYOL

- This release adds a new model for self-supervised learning: BYOL (see https://arxiv.org/abs/2006.07733). A conceptual sketch of its momentum update follows below.
- Thanks pranavsinghps1 for your contribution!
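
The following is a conceptual sketch of the BYOL momentum (EMA) update in plain PyTorch; it illustrates the idea rather than lightly's exact API, and the tiny stand-in model and momentum value are placeholders:

```python
import copy
import torch

# stand-in for the online encoder; BYOL uses a backbone plus projection/prediction heads
online_net = torch.nn.Linear(32, 32)
target_net = copy.deepcopy(online_net)  # the target network starts as a copy
for p in target_net.parameters():
    p.requires_grad = False             # the target is never updated by the optimizer

momentum = 0.99  # typical values are close to 1

@torch.no_grad()
def update_target():
    # target = momentum * target + (1 - momentum) * online
    for pt, po in zip(target_net.parameters(), online_net.parameters()):
        pt.mul_(momentum).add_(po, alpha=1.0 - momentum)
```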

Improvements

- Refactored NTXent Loss. The new code is shorter and easier to understand.
- Added a scorer for semantic segmentation to enable active learning on image segmentation tasks.
- Added color highlighting to the CLI output.
- The CLI now returns the `dataset_id` when creating a new dataset.

New Active Learning Tutorial using Detectron2

- This tutorial shows the full power of the lightly self-supervised embeddings and active learning scorers.
- Check it out here: https://docs.lightly.ai/tutorials/platform/tutorial_active_learning_detectron2.html

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.7

Active Learning Refactoring and Minor Improvements

Instantiate shuffle tensor directly on device
This change makes our momentum encoders more efficient by directly instantiating temporary tensors on device instead of moving them there after instantiation. Thanks a lot to guarin for pointing out the problem and swiftly fixing it!
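
A minimal sketch of the pattern behind this change (our example, not the library code):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(256, 3, 32, 32, device=device)

# before: the permutation is created on the CPU and then copied to the device
shuffle = torch.randperm(x.shape[0]).to(x.device)

# after: the permutation is instantiated directly on the target device
shuffle = torch.randperm(x.shape[0], device=x.device)
x_shuffled = x[shuffle]
```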

Active Learning Refactoring
Active learning scores are now uploaded to a query tag instead of the preselected tag. This makes the framework more flexible and easier to use, and lets users run several samplings with the same set of scores at the cost of a small computational overhead.
Additionally, the active learning scores were renamed to match the current literature. We now support uncertainty sampling in the least-confidence, margin, and entropy variants, as described in http://burrsettles.com/pub/settles.activelearning.pdf (chapter 3.1, page 12f); see the sketch below.
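
The following sketch shows how the three variants score a batch of predicted class probabilities; the variable names and the higher-means-more-uncertain convention are ours and need not match lightly's exact implementation:

```python
import torch

# predicted class probabilities for two samples over three classes
probs = torch.tensor([[0.70, 0.20, 0.10],
                      [0.40, 0.35, 0.25]])

top2 = probs.topk(2, dim=1).values
least_confidence = 1.0 - top2[:, 0]        # 1 - max probability
margin = 1.0 - (top2[:, 0] - top2[:, 1])   # 1 - (best minus second-best)
entropy = -(probs * probs.log()).sum(1)    # Shannon entropy of the distribution
```

The second sample scores higher on all three variants, so uncertainty sampling would prefer it for labeling.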

Minor Bug Fixes and Improvements
Better handling of edge cases when doing active learning for object detection.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.6

More Powerful CLI Commands, Stability Improvements, and Updated Documentation

Create a new dataset directly when running `lightly-upload` and `lightly-magic`
Just replace the argument `dataset_id="your_dataset_id"` with the argument `new_dataset_name="your_dataset_name"`. To learn more, have a look at the docs.

Get only the newly added samples from a tag
`lightly-download` now has the flag `exclude_parent_tag`. If this flag is set, the samples in the parent tag are excluded from the download. This is handy for active learning when you only want the filenames that were newly added to the tag.

`ActiveLearningAgent` has a new attribute `added_set`
If you prefer getting the newly added samples from the active learning agent, simply access its new attribute `added_set`. A small illustration follows below.
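
As an illustration of what `added_set` represents (plain Python sets, not the lightly API): it corresponds to the samples that are in the new tag but not in its parent tag:

```python
# hypothetical filenames, for illustration only
parent_tag = {"img_0.png", "img_1.png"}
new_tag = {"img_0.png", "img_1.png", "img_2.png", "img_3.png"}

# the newly added samples are the set difference
added = sorted(new_tag - parent_tag)
print(added)  # ['img_2.png', 'img_3.png']
```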

Minor Updates and Fixes
Updated documentation and docstrings to make working with lightly simpler.
Minor bug fixes and improvements.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.5

Hypersphere Loss, Stability Improvements and Updated Documentation

Hypersphere Loss (EelcoHoogendoorn)
Implemented the loss function [described here](https://arxiv.org/abs/2005.10242), which achieves results competitive with more widely cited losses (symmetric negative cosine similarity and contrastive loss) while providing better interpretability.

You can use the loss in combination with all other losses supported by lightly:
```python
from lightly.loss import HypersphereLoss

# initialize the loss function
loss_fn = HypersphereLoss()

# generate two random transforms of the images
t0 = transforms(images)
t1 = transforms(images)

# feed both transforms through the model (e.g., SimSiam)
out0, out1 = model(t0, t1)

# calculate the loss
loss = loss_fn(out0, out1)
```

Thank you, EelcoHoogendoorn, for your contribution!

Minor Updates and Fixes
Updated documentation and docstrings to make working with lightly simpler.
Minor bug fixes and improvements.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
