Lightly

Latest version: v1.5.15


Page 17 of 22

1.1.22

Dataset Upsizing, Bugfixes
Dataset Upsizing
You can now add new samples and embeddings to an existing dataset. Just run the usual `lightly-upload` or `lightly-magic` command with the `dataset_id` of an existing dataset, and all new images will be uploaded to it. The embeddings are updated as well.
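
For example, the upload command with an existing dataset id might look as follows (`MY_TOKEN` and `MY_DATASET_ID` are placeholders for your own values):

```shell
# Upload new images and embeddings into an existing dataset by passing its dataset_id.
lightly-upload token='MY_TOKEN' dataset_id='MY_DATASET_ID' input_dir='/path/to/new/data'
```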

Bugfix: ResnetGenerator
`ResnetGenerator` now uses the argument `num_classes` correctly. Before the fix, the number of classes was hardcoded to 10. Thanks to smartdanny for finding and fixing this bug!

Bugfix: NNCLR
NNCLR had a bug where the projection and prediction heads were not connected correctly. Thanks to HBU-Lin-Li for finding this bug!
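
The corrected wiring chains the prediction head after the projection head, rather than applying both to the backbone features. A toy sketch with plain Python callables standing in for the real modules (the names here are illustrative, not lightly's API):

```python
def nnclr_forward(x, backbone, projection_head, prediction_head):
    # Correct wiring: backbone -> projection head -> prediction head.
    features = backbone(x)
    z = projection_head(features)   # projected embedding (used for the NN lookup)
    p = prediction_head(z)          # prediction is computed FROM the projection
    return z, p

# Toy stand-ins: each "module" is a simple function.
z, p = nnclr_forward(2, lambda x: x + 1, lambda f: f * 10, lambda z: z - 5)
# z == 30, p == 25
```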

Bugfix: Version check timeout
When lightly starts, it checks whether a newer version is available. Due to circular imports, this check could run multiple times, and it could take a long time if you don't have an internet connection. We fixed this so that only one version check runs, and limited its duration to one second. Thanks to luzuku for finding this bug!

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)

1.1.21

Video Datasets with Subfolders, Specify Relevant Files

Video Datasets with Subfolders
Just like image datasets, video datasets with the videos in subfolders are now supported. E.g. you can have the following input directory:

```
/path/to/data/
├── subfolder_1/
│   ├── my-video-1-1.mp4
│   └── my-video-1-2.mp4
└── subfolder_2/
    └── my-video-2-1.mp4
```


Specify relevant files
When creating a `LightlyDataset`, you can now also specify the argument `filenames`: a list of filenames relative to the input directory. The dataset then only uses the specified files and ignores all others. E.g. using
```python
LightlyDataset(
    input_dir='/path/to/data',
    filenames=[
        'subfolder_1/my-video-1-1.mp4',
        'subfolder_2/my-video-2-1.mp4',
    ],
)
```

will only create a dataset out of the two specified files and ignore the third file.

Other
We added the SwAV model to the README; it was already included in the documentation.

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)

1.1.20

Refactor Models, SwAV Model, S3 Bucket Integration
Refactor Models
This release makes it much easier to implement new models or adapt existing ones using basic building blocks. E.g. you can define your own model from blocks like a backbone, projection head, momentum encoder, nearest neighbour memory bank, and more.
We want to make it easy to see how the models in current papers are built, and that different papers often differ in only one or two of these blocks.
Compatible examples of all models are shown in the benchmarking scripts for [imagenette](https://github.com/lightly-ai/lightly/blob/master/docs/source/getting_started/benchmarks/imagenette_benchmark.py) and [cifar10](https://github.com/lightly-ai/lightly/blob/master/docs/source/getting_started/benchmarks/cifar10_benchmark.py).
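The building-block idea can be illustrated with a toy sketch (plain callables stand in for the real backbone and head modules; this is not lightly's API): many methods share a backbone and differ mainly in which heads are attached.

```python
class SelfSupervisedModel:
    """Toy illustration: a model is a backbone plus interchangeable heads,
    so swapping one block yields a different self-supervised method."""

    def __init__(self, backbone, projection_head, prediction_head=None):
        self.backbone = backbone
        self.projection_head = projection_head
        self.prediction_head = prediction_head

    def __call__(self, x):
        z = self.projection_head(self.backbone(x))
        if self.prediction_head is None:
            return z                        # SimCLR-style: projection only
        return z, self.prediction_head(z)   # SimSiam/NNCLR-style: extra prediction head

# Two "methods" built from the same blocks, differing in one head.
simclr_like = SelfSupervisedModel(lambda x: x + 1, lambda f: f * 2)
simsiam_like = SelfSupervisedModel(lambda x: x + 1, lambda f: f * 2, lambda z: z - 1)
```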

As part of this refactoring to improve flexibility of the framework we have added a deprecation warning to all old models under `lightly/models`, e.g.:


```
The high-level building block NNCLR will be deprecated in version 1.2.0.
Use low-level building blocks instead.
See https://docs.lightly.ai/lightly.models.html for more information
```


These models will be removed in the upcoming version 1.2. The refactoring was necessary because the old models lacked the flexibility to keep up with the latest publications.

SwAV Model
Lightly now supports the [SwAV (Swapping Assignments between Views) paper](https://arxiv.org/abs/2006.09882). Thanks to the new building-block system, we could implement it easily.

S3 Bucket Integration
- We added documentation on how to use an S3 bucket as the input directory for lightly. This allows you to train your model and create embeddings without downloading all your data first.

Other
- When uploading embeddings to the Lightly Platform, the file `embeddings_sorted.csv` is no longer created, as it was only used internally. We also made the upload of large embedding files faster.

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)

1.1.19

Refactored Prediction Heads and Jigsaw

Refactored Prediction Heads
Excited to bring the newly refactored prediction and projection heads to you! The new abstractions are easy to understand and can be extended to arbitrary projection head implementations, making the framework more flexible. Additionally, the implementation of each projection head is now based on a direct citation from the respective paper. Check it out [here](https://github.com/lightly-ai/lightly/blob/master/lightly/models/modules/heads.py).

**Breaking Changes:**
- The argument `num_mlp_layers` was removed from SimSiam and NNCLR; the number of layers now defaults to 3 (as in the respective papers).
- The projection heads and prediction heads of the models are now separate modules, which might break old checkpoints. However, the following function helps load old checkpoints: [`load_from_state_dict`](https://github.com/lightly-ai/lightly/blob/0af0563e02d55ee8363769a18a1a36d2a1408f64/lightly/cli/_helpers.py#L180)
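
A loader like this essentially has to remap the key prefixes under which head weights were stored. The sketch below is hypothetical: the key names and rename table are illustrative, and the real names in lightly checkpoints may differ.

```python
def upgrade_state_dict(old_state_dict, renames=None):
    """Remap old head-weight key prefixes to the new separate-module names.
    The default rename table here is an assumption for illustration only."""
    renames = renames or {
        "projection_mlp": "projection_head",
        "prediction_mlp": "prediction_head",
    }
    upgraded = {}
    for key, value in old_state_dict.items():
        for old, new in renames.items():
            if key.startswith(old):
                key = new + key[len(old):]  # swap the prefix, keep the suffix
                break
        upgraded[key] = value
    return upgraded

new_sd = upgrade_state_dict({"projection_mlp.0.weight": [1.0]})
# new_sd == {"projection_head.0.weight": [1.0]}
```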

Jigsaw (shikharmn)
Lightly now features the jigsaw augmentation! Thanks a lot to shikharmn for the contribution.
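
Conceptually, the jigsaw augmentation cuts an image into a grid of patches and permutes them. A toy sketch of that idea, with labeled tiles standing in for actual image patches (not lightly's implementation):

```python
import random

def jigsaw_shuffle(patches, seed=None):
    """Return a permuted copy of the patch list; with a fixed seed the
    permutation is reproducible."""
    rng = random.Random(seed)
    shuffled = list(patches)
    rng.shuffle(shuffled)
    return shuffled

# Tiles stand in for the patches of a 2x2 grid.
tiles = ["top-left", "top-right", "bottom-left", "bottom-right"]
permuted = jigsaw_shuffle(tiles, seed=42)
```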

Documentation Updates
Parts of the [documentation](https://docs.lightly.ai/) have been refactored to give a clearer overview of the features lightly provides. Additionally, external tutorials have been linked so that everything is in one place.

Bug Fixes
- The `lightly-crop` feature now has a smaller memory footprint
- Filenames containing commas are now ignored
- Checks for the latest pip version occur less often

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.18

Custom Metadata
Lightly now supports uploading custom metadata, which can be used in the [Lightly Web-app](https://app.lightly.ai).

Tutorial on custom metadata
We added a new [tutorial](https://docs.lightly.ai/tutorials/platform/tutorial_aquarium_custom_metadata.html) on how to create and use custom metadata to understand your dataset even better.
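
As a rough illustration, custom metadata pairs each filename with arbitrary key-value data before upload. The structure below is a hypothetical example; the exact schema the platform expects is described in the tutorial and may differ.

```python
import json

# Hypothetical per-image metadata: any JSON-serializable key-value pairs.
custom_metadata = [
    {"file_name": "image_0.png", "metadata": {"weather": "sunny", "num_objects": 3}},
    {"file_name": "image_1.png", "metadata": {"weather": "rainy", "num_objects": 1}},
]
blob = json.dumps(custom_metadata)  # serialized for upload
```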

Tutorial on finding false negatives in object detection
Is your object detector failing to find all objects? Lightly can help you find these false negatives. We created a [tutorial](https://docs.lightly.ai/tutorials/platform/tutorial_cropped_objects_metadata.html) describing how to do it.

Tutorial to embed the Lightly Docker into a Dagster pipeline
Do you want to use the Lightly Docker as part of a bigger data pipeline, e.g. with [Dagster](https://dagster.io)? We added a [tutorial](https://docs.lightly.ai/docker/integration/dagster_aws.html) on how to do it.

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.17

Active Learning Score Upload

Active Learning Score Upload
The lightly `ActiveLearningAgent` now supports an easy way to upload active learning scores to the [Lightly Web-app](https://app.lightly.ai).
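
To make the idea concrete, one common active-learning score is "least confidence": one minus the highest predicted class probability, so uncertain samples rank high. This is a generic sketch, not necessarily the scorer lightly uses:

```python
def least_confidence_scores(class_probabilities):
    """Given per-sample class probability lists, return 1 - max probability
    for each sample (higher score = more uncertain = more informative)."""
    return [1.0 - max(probs) for probs in class_probabilities]

# Three samples: confident, maximally uncertain, fairly confident.
scores = least_confidence_scores([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
```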

Register Datasets before Upload
The refactored dataset upload now registers a dataset in the web-app before uploading the samples. This makes the upload more efficient and stable. Additionally, the progress of the upload can now be observed in the [Lightly Web-app](https://app.lightly.ai).

Documentation Updates
The [lightly on-premise documentation](https://docs.lightly.ai/docker/overview.html) was updated.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.