Lightly

Latest version: v1.5.19


1.1.8

BYOL model, Refactoring and New Tutorial for Active Learning

New Model: BYOL

- This release adds a new model for self-supervised learning: BYOL (see https://arxiv.org/abs/2006.07733)
- Thanks pranavsinghps1 for your contribution!
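At the heart of BYOL is a target network whose weights track the online network via an exponential moving average (EMA). The update can be sketched in plain Python on toy scalar parameters (an illustration only, not lightly's actual module):

```python
def ema_update(target_params, online_params, tau=0.9):
    """BYOL-style exponential moving average: the target network's
    parameters slowly track the online network's parameters."""
    return [tau * t + (1.0 - tau) * o for t, o in zip(target_params, online_params)]

# toy scalar "parameters" to illustrate one update step
online = [1.0, 2.0]
target = [0.0, 0.0]
target = ema_update(target, online, tau=0.9)
# with tau=0.9: target == [0.1, 0.2]
```

In practice the momentum `tau` is chosen close to 1 (e.g. 0.99 or higher), so the target network changes slowly and provides stable regression targets.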

Improvements

- Refactored the NTXent loss. The new code is shorter and easier to understand.
- Added a scorer for semantic segmentation to enable active learning on image segmentation tasks
- Added color highlighting in the CLI
- The CLI now returns the `dataset_id` when creating a new dataset
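For reference, the NTXent loss (normalized temperature-scaled cross entropy) can be sketched in a few lines of numpy. This is a simplified illustration, not lightly's refactored implementation:

```python
import numpy as np

def nt_xent(z0, z1, temperature=0.5):
    """Minimal NT-Xent sketch. z0, z1: (N, D) embeddings of two views,
    where row i of z0 and row i of z1 form a positive pair."""
    z = np.concatenate([z0, z1])                      # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity via dot products
    sim = z @ z.T / temperature                       # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                    # mask self-similarity
    n = len(z0)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    # cross entropy: -log(exp(pos_sim) / sum_j exp(sim_j)), averaged over 2N samples
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))
```

Well-aligned positive pairs yield a lower loss than random pairings, which is the behavior the contrastive objective rewards.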

New Active Learning Tutorial using Detectron2

- This tutorial shows the full power of lightly's self-supervised embeddings and active learning scorers
- Check it out here: https://docs.lightly.ai/tutorials/platform/tutorial_active_learning_detectron2.html

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.7

Active Learning Refactoring and Minor Improvements

Instantiate shuffle tensor directly on device
This change makes our momentum encoders more efficient by instantiating temporary tensors directly on the device instead of moving them there after creation. Thanks a lot to guarin for pointing out the problem and swiftly fixing it!
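The pattern in question, shown here as a small PyTorch sketch (the tensor and variable names are illustrative, not lightly's actual code):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# slower: create on the CPU, then copy to the target device
idx_copy = torch.randperm(16).to(device)

# faster: instantiate directly on the target device,
# avoiding the host allocation and the host-to-device transfer
idx_direct = torch.randperm(16, device=device)
```

Both tensors end up on the same device with the same contents; only the intermediate host allocation differs.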

Active Learning Refactoring
Active learning scores are now uploaded to a query tag instead of the preselected tag. This makes the framework more flexible and easier to use, and lets users run several samplings with the same set of scores at the cost of a little computational overhead.
Additionally, the active learning scores were renamed to match the current literature. We now support uncertainty sampling with the least confidence, margin, and entropy variants as described in http://burrsettles.com/pub/settles.activelearning.pdf, page 12f., chapter 3.1.
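As a rough sketch of the three variants (one common formulation; lightly's exact names and normalizations may differ), all can be computed from a single softmax prediction:

```python
import numpy as np

def uncertainty_scores(probs):
    """Uncertainty sampling scores for one softmax prediction `probs`
    (a 1-D array summing to 1). Higher score = more uncertain; the three
    variants follow Settles' active learning survey, chapter 3.1."""
    p = np.sort(probs)[::-1]
    least_confidence = 1.0 - p[0]            # 1 - max probability
    margin = 1.0 - (p[0] - p[1])             # small top-2 margin -> high score
    entropy = -np.sum(probs * np.log(probs + 1e-12))
    return least_confidence, margin, entropy

lc, m, h = uncertainty_scores(np.array([0.7, 0.2, 0.1]))
# lc == 0.3, m == 0.5, h ≈ 0.80
```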

Minor Bug Fixes and Improvements
Better handling of edge cases when doing active learning for object detection.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.6

More Powerful CLI Commands, Stability Improvements and Updated Documentation
Create a new dataset directly when running `lightly-upload` and `lightly-magic`.
Just replace the argument `dataset_id="your_dataset_id"` with the argument `new_dataset_name="your_dataset_name"`. To learn more, have a look at the docs.
Get only the newly added samples from a tag
`lightly-download` has the flag `exclude_parent_tag`
If this flag is set, the samples in the parent tag are excluded from the download. This is very practical when doing active learning and you only want the filenames that were newly added to the tag.
`ActiveLearningAgent` has new attribute `added_set`
If you prefer getting the newly added samples from the active learning agent, just access its new attribute `added_set`.
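Conceptually, the newly added samples are the set difference between the query tag's filenames and its parent tag's filenames. A hypothetical illustration with plain Python sets (the filenames are made up):

```python
# filenames already present in the parent tag
parent_tag = {"img_000.jpg", "img_001.jpg"}

# filenames in the new (query) tag after a sampling run
query_tag = {"img_000.jpg", "img_001.jpg", "img_002.jpg", "img_003.jpg"}

# the newly added samples are those in the query tag but not in the parent tag
added_set = query_tag - parent_tag
# added_set == {"img_002.jpg", "img_003.jpg"}
```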

Minor Updates and Fixes
Updated documentation and docstrings to make working with lightly simpler.
Minor bug fixes and improvements.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.5

Hypersphere Loss, Stability Improvements and Updated Documentation

Hypersphere Loss (EelcoHoogendoorn)
Implemented the loss function [described here](https://arxiv.org/abs/2005.10242), which achieves results competitive with more widely cited losses (symmetric negative cosine similarity and contrastive loss) while providing better interpretability.

You can use the loss in combination with all other losses supported by lightly:
```python
# initialize the loss function
loss_fn = HypersphereLoss()

# generate two random transforms of images
t0 = transforms(images)
t1 = transforms(images)

# feed through the (e.g. SimSiam) model
out0, out1 = model(t0, t1)

# calculate the loss
loss = loss_fn(out0, out1)
```

Thank you, EelcoHoogendoorn, for your contribution!

Minor Updates and Fixes
Updated documentation and docstrings to make working with lightly simpler.
Minor bug fixes and improvements.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)

1.1.4

Consistency Regularization, CLI update, and API client update

Consistency Regularization
This release contains an implementation of the [CO2 (consistency contrast) regularization](https://arxiv.org/abs/2010.02217) which can be used together with our contrastive loss function. We observed consistent (although marginal) improvements when applying the regularizer to our models!
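The core idea can be sketched as a symmetric KL divergence between the two crops' similarity distributions over the same set of negatives. The snippet below is an illustrative paraphrase of the paper, not lightly's actual implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def co2_consistency(sim0, sim1):
    """Symmetric KL between the two crops' similarity distributions over
    the same negatives -- a sketch of the consistency term in CO2
    (https://arxiv.org/abs/2010.02217). sim0, sim1: 1-D similarity logits."""
    p, q = softmax(sim0), softmax(sim1)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * (kl(p, q) + kl(q, p))
```

The term is zero when both crops relate to the negatives identically and grows as their similarity distributions diverge.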

lightly-version
A new CLI command was added to enable users to easily check the installed version from the command line. This is especially useful when working with different environments and it's not clear which version of lightly is being used. The command is:

```
> lightly-version
```

1.1.3

New Augmentation (Solarization) and Updates to README and Docs

Solarization
Solarization is an augmentation which inverts all pixels above a given threshold. It is applied in many papers on self-supervised learning, for example in [BYOL](https://arxiv.org/abs/2006.07733) and [Barlow Twins](https://arxiv.org/abs/2103.03230).
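As a minimal numpy sketch of the operation (an illustration of the idea, not lightly's actual transform):

```python
import numpy as np

def solarize(img, threshold=128):
    """Invert all pixel values at or above `threshold` in an 8-bit image array."""
    img = np.asarray(img, dtype=np.uint8)
    return np.where(img >= threshold, 255 - img, img).astype(np.uint8)

out = solarize(np.array([[0, 100, 128, 200, 255]]))
# → [[0, 100, 127, 55, 0]]
```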

Updates to README and Docs (multi GPU training)
The README received a code example to show how to use lightly. The documentation was polished and received a section about how to use lightly with multiple GPUs.

Experimental: Active Learning Scorers for Object Detection
Scorers for active learning with object detection were added. These scorers will not work with the API yet and are therefore also not yet documented.

Models

- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
