Lightly

Latest version: v1.5.19

Page 18 of 22

1.1.20

Refactor Models, SwAV Model, S3 Bucket Integration
Refactor Models
This release makes it much easier to implement new models or adapt existing ones by composing basic building blocks. For example, you can define your own model from blocks such as a backbone, a projection head, a momentum encoder, a nearest-neighbour memory bank, and more.
This makes it easy to see how the models in current papers are built, and that different papers often differ in only one or two of these blocks.
Compatible examples of all models are shown in the benchmarking scripts for [imagenette](https://github.com/lightly-ai/lightly/blob/master/docs/source/getting_started/benchmarks/imagenette_benchmark.py) and [cifar10](https://github.com/lightly-ai/lightly/blob/master/docs/source/getting_started/benchmarks/cifar10_benchmark.py).
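As a plain-Python illustration of the building-block idea (the class and function names below are hypothetical, not lightly's actual API), a model is just a composition of interchangeable parts:

```python
# Illustrative sketch of composing a model from building blocks.
# All names here are made up for the example, not lightly's API.
from dataclasses import dataclass
from typing import Callable, List

Vector = List[float]

@dataclass
class BuildingBlockModel:
    backbone: Callable[[Vector], Vector]         # e.g. a feature extractor
    projection_head: Callable[[Vector], Vector]  # paper-specific head

    def forward(self, x: Vector) -> Vector:
        return self.projection_head(self.backbone(x))

# Two "papers" that share a backbone and differ only in the head:
def backbone(x: Vector) -> Vector:
    return [v * 2.0 for v in x]  # stand-in for real feature extraction

def identity_head(z: Vector) -> Vector:
    return z

def shifted_head(z: Vector) -> Vector:
    return [v + 1.0 for v in z]

model_a = BuildingBlockModel(backbone, identity_head)
model_b = BuildingBlockModel(backbone, shifted_head)

print(model_a.forward([1.0, 2.0]))  # [2.0, 4.0]
print(model_b.forward([1.0, 2.0]))  # [3.0, 5.0]
```

Swapping one block for another leaves the rest of the model untouched, which is exactly what makes adapting a published architecture cheap.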

As part of this refactoring to improve the flexibility of the framework, we have added a deprecation warning to all old models under `lightly/models`, e.g.:

    The high-level building block NNCLR will be deprecated in version 1.2.0.
    Use low-level building blocks instead.
    See https://docs.lightly.ai/lightly.models.html for more information


These models will be removed in the upcoming version 1.2. The refactoring was necessary because the old models lacked the flexibility needed to keep up with the latest publications.
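A hedged sketch (not lightly's actual code) of how such a warning can be attached to an old high-level model class until its removal:

```python
# Illustrative only: emit a DeprecationWarning when an old high-level
# model is instantiated, pointing users at the low-level blocks.
import warnings

class NNCLR:
    """Old high-level model, kept only for backwards compatibility."""

    def __init__(self):
        warnings.warn(
            "The high-level building block NNCLR will be deprecated in "
            "version 1.2.0. Use low-level building blocks instead.",
            DeprecationWarning,
            stacklevel=2,  # point the warning at the caller's code
        )
```

Python hides `DeprecationWarning` by default outside of tests, so library users typically see it when running under `pytest` or with `-W default`.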

SwAV Model
Lightly now supports the [SwAV (Swapping Assignments between Views) paper](https://arxiv.org/abs/2006.09882). Thanks to the new building-block system, implementing it was straightforward.

S3 Bucket Integration
- We added documentation on how to use an S3 bucket as the input directory for lightly. This allows you to train your model and create embeddings without downloading all your data.

Other
- When uploading the embeddings to the Lightly Platform, no file `embeddings_sorted.csv` is created anymore, as it was only used internally. We also made the upload of large embeddings files faster.

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)

1.1.19

Refactored Prediction Heads and Jigsaw

Refactored Prediction Heads
We are excited to bring the newly refactored prediction and projection heads to you! The new abstractions are easy to understand and
can be extended to arbitrary projection head implementations, making the framework more flexible. Additionally, the implementation of each projection head is now based on a direct citation from the respective paper. Check it out [here](https://github.com/lightly-ai/lightly/blob/master/lightly/models/modules/heads.py).

**Breaking Changes:**
- The argument `num_mlp_layers` was removed from SimSiam and NNCLR and defaults to 3 (as in the respective papers).
- The projection heads and prediction heads of the models are now separate modules which might break old checkpoints. However, the following function helps loading old checkpoints: [`load_from_state_dict`](https://github.com/lightly-ai/lightly/blob/0af0563e02d55ee8363769a18a1a36d2a1408f64/lightly/cli/_helpers.py#L180)
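The key-renaming idea behind such a checkpoint helper can be sketched as follows; the key prefixes below are hypothetical, and the real helper is `load_from_state_dict` linked above:

```python
# Illustrative sketch (not lightly's implementation): when the projection
# head becomes a separate module, old checkpoint keys must be renamed to
# the new layout before loading.
def remap_old_state_dict(state_dict):
    """Rename hypothetical old key prefixes to the new separated modules."""
    renamed = {}
    for key, value in state_dict.items():
        # e.g. "projection_mlp.0.weight" -> "projection_head.0.weight"
        new_key = key.replace("projection_mlp.", "projection_head.")
        renamed[new_key] = value
    return renamed

old = {"backbone.conv1.weight": 1, "projection_mlp.0.weight": 2}
print(remap_old_state_dict(old))
```

Backbone keys pass through untouched; only the moved submodule's keys change, which is why old checkpoints remain loadable.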

Jigsaw (shikharmn)
Lightly now features the jigsaw augmentation! Thanks a lot, shikharmn, for your contribution.

Documentation Updates
Parts of the [documentation](https://docs.lightly.ai/) have been refactored to give a clearer overview of the features lightly provides. Additionally, external tutorials have been linked so that everything is in one place.

Bug Fixes
- The `lightly-crop` feature now has a smaller memory footprint
- Filenames containing commas are now ignored
- Checks for the latest pip version occur less often

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.18

Custom Metadata
Lightly now supports uploading custom metadata, which can be used in the [Lightly Web-app](https://app.lightly.ai).

Tutorial on custom metadata
We added a new [tutorial](https://docs.lightly.ai/tutorials/platform/tutorial_aquarium_custom_metadata.html) on how to create and use custom metadata to understand your dataset even better.
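As a purely illustrative sketch (lightly's actual metadata schema is described in the tutorial linked above), custom metadata amounts to attaching arbitrary key-value pairs to each image, which can then drive filtering or analysis:

```python
# Hypothetical example data; the real upload format is defined in the
# lightly docs, not here.
custom_metadata = {
    "image_0.png": {"weather": "sunny", "camera_id": 3},
    "image_1.png": {"weather": "rainy", "camera_id": 1},
}

# Metadata like this lets you slice a dataset by properties the pixels
# alone don't reveal, e.g. selecting only rainy-weather images:
rainy = [f for f, meta in custom_metadata.items() if meta["weather"] == "rainy"]
print(rainy)  # ['image_1.png']
```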

Tutorial on using lightly to find false negatives in object detection
Does your object detector miss some objects? Lightly can help you find these false negatives. We created a [tutorial](https://docs.lightly.ai/tutorials/platform/tutorial_cropped_objects_metadata.html) describing how to do it.

Tutorial to embed the Lightly docker into a Dagster pipeline
Do you want to use the Lightly Docker as part of a bigger data pipeline, e.g. with [Dagster](https://dagster.io)? We added a [tutorial](https://docs.lightly.ai/docker/integration/dagster_aws.html) on how to do it.

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.17

Active Learning Score Upload

Active Learning Score Upload
The lightly `ActiveLearningAgent` now supports an easy way to upload active learning scores to the [Lightly Web-app](https://app.lightly.ai).
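The release does not prescribe a particular scoring function; as one common example of what such a score looks like, least-confidence scoring assigns higher values to samples the model is unsure about:

```python
# Least-confidence active-learning score: 1 minus the top predicted
# probability. Higher score = more informative sample to label next.
def least_confidence(probabilities):
    return 1.0 - max(probabilities)

# Hypothetical per-sample class probabilities from a model:
preds = [[0.9, 0.05, 0.05], [0.4, 0.35, 0.25]]
scores = [round(least_confidence(p), 2) for p in preds]
print(scores)  # [0.1, 0.6]
```

The second sample, where the model is nearly undecided, gets the higher score and would be prioritized for labeling.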

Register Datasets before Upload
The refactored dataset upload now registers a dataset in the web-app before uploading the samples. This makes the upload more efficient and stable. Additionally, the progress of the upload can now be observed in the [Lightly Web-app](https://app.lightly.ai).

Documentation Updates
The [lightly on-premise documentation](https://docs.lightly.ai/docker/overview.html) was updated.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.16

Improved Tutorial and Bug Fix in Masked Select

Improved Tutorial
The "Sunflowers" Tutorial has been overhauled and provides a great starting point for anyone trying to clean up their data.

Bug Fix in Masked Select
Major bug fix resolving confusion between the little-endian and big-endian representations of the bit masks used for active learning.
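To see why byte order matters for such masks, note that the same bytes decode to different selected samples depending on endianness:

```python
# The same two mask bytes, interpreted with different byte orders.
mask_bytes = b"\x01\x00"

little = int.from_bytes(mask_bytes, byteorder="little")  # 1
big = int.from_bytes(mask_bytes, byteorder="big")        # 256

def selected_indices(mask: int, n: int) -> list:
    """Return the sample indices whose bit is set in the mask."""
    return [i for i in range(n) if (mask >> i) & 1]

print(selected_indices(little, 16))  # [0]
print(selected_indices(big, 16))     # [8]
```

Mixing up the two interpretations selects entirely different samples, which is the kind of confusion this fix resolves.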

Updated Requirements
`lightly` now requires the latest version in the `0.0.*` series of the `lightly-utils` package instead of a pinned version. This lets the maintainers ship bug fixes and updates more quickly.
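In pip's requirement-specifier syntax, such a constraint can be expressed with a wildcard:

```
lightly-utils==0.0.*
```

This matches any patch release in the `0.0` series, so installing or upgrading `lightly` automatically picks up the newest compatible `lightly-utils`.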

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.15

Resume Upload and Minor Updates

Resume Upload
The upload of a dataset can now be resumed if interrupted, as the `lightly-upload` and `lightly-magic` commands will skip files which are already on the platform.
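The skipping logic amounts to filtering local files against the set of filenames already on the platform; a simplified sketch with illustrative names:

```python
# Illustrative resume logic: only upload files the platform doesn't
# already have. The filename lists here are made-up example data.
def files_to_upload(local_files, existing_on_platform):
    existing = set(existing_on_platform)  # set for O(1) membership checks
    return [f for f in local_files if f not in existing]

remaining = files_to_upload(
    ["a.jpg", "b.jpg", "c.jpg"],  # files on disk
    ["b.jpg"],                    # already uploaded before the interruption
)
print(remaining)  # ['a.jpg', 'c.jpg']
```

Because already-present files are skipped, rerunning the same command after an interruption uploads only what is missing.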

Minor Updates
Filenames of images which are uploaded to the platform can now be up to 255 characters long.
Lightly can now be [cited](https://github.com/lightly-ai/lightly#bibtex) :)

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
