Lightly

Latest version: v1.5.15


1.2.4

Multi-GPU support, format check for embedding file

Multi-GPU
We now support using lightly with multiple GPUs. For reference, look at the [docs](https://docs.lightly.ai/getting_started/distributed_training.html).
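A minimal configuration sketch of what a multi-GPU run might look like via PyTorch Lightning's distributed data parallel strategy. The exact flag names depend on your `pytorch_lightning` version; consult the linked docs for the settings lightly actually supports.

```python
# Hypothetical sketch (configuration only): flags shown here are
# illustrative and version-dependent, not lightly's documented setup.
import pytorch_lightning as pl

trainer = pl.Trainer(
    devices=2,            # number of GPUs to train on
    accelerator="gpu",
    strategy="ddp",       # distributed data parallel
    sync_batchnorm=True,  # often recommended for contrastive methods
    max_epochs=100,
)
# trainer.fit(model, dataloader)
```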

Format check for embedding file
When uploading an embedding file to the Lightly Platform, its format is now validated: the column names must be in the correct order and contain no whitespace, and the file must not contain empty rows.
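A minimal sketch of such a format check. The column layout assumed here (`filenames` first, `labels` last) is an assumption for illustration; consult the lightly docs for the authoritative embedding-file format.

```python
import csv
import io

def check_embedding_format(csv_text):
    """Toy validator in the spirit of the check described above.
    Assumes a layout of `filenames,embedding_0..embedding_N,labels`
    (an assumption, not lightly's confirmed implementation)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header = rows[0]
    # column names must contain no surrounding whitespace
    if any(h != h.strip() for h in header):
        return False, "whitespace in column names"
    # columns must appear in the expected order
    if header[0] != "filenames" or header[-1] != "labels":
        return False, "wrong column order"
    # no empty rows are allowed
    for i, row in enumerate(rows[1:], start=2):
        if not row or all(cell == "" for cell in row):
            return False, f"empty row at line {i}"
    return True, "ok"

good = "filenames,embedding_0,embedding_1,labels\nimg.png,0.1,0.2,0\n"
bad = "embedding_0,filenames,labels\n0.1,img.png,0\n"
print(check_embedding_format(good))  # (True, 'ok')
print(check_embedding_format(bad))   # (False, 'wrong column order')
```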

Models
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)

1.2.3

Faster metadata upload, bugfixes and improvements

Faster metadata upload
Custom metadata is now uploaded asynchronously with multiple workers in parallel, speeding up the upload process by up to 30 times.
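The parallel-upload pattern can be sketched with a standard-library worker pool. `upload_fn` is a hypothetical stand-in for whatever uploads a single metadata item; this is an illustration of the technique, not lightly's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_metadata_parallel(items, upload_fn, max_workers=8):
    """Submit each metadata item to a worker pool instead of
    uploading sequentially; I/O-bound uploads overlap in flight."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves the input order of the results
        results = list(pool.map(upload_fn, items))
    return results

# usage with a stand-in upload function
uploaded = []
def fake_upload(item):
    uploaded.append(item)
    return item

results = upload_metadata_parallel(range(10), fake_upload, max_workers=4)
print(results)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```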

Bugfixes
- When there is a failure uploading a file to a signed url, now the status code is printed correctly.
- Creating a `LightlyDataset` with an `input_dir` containing videos now raises any errors encountered while scanning the input directory instead of ignoring them. For example, if a subfolder without read permissions is encountered, a `PermissionError` is raised instead of the subfolder being silently skipped.
- When embedding, the embeddings in the output now follow the order of the samples in the dataset, even if the dataloader uses multiple workers. The embeddings in the embedding file are therefore in sorted order as well. This is not strictly a bugfix, but it may prevent problems later on.
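The reordering step in the last point can be sketched as follows: batches from a multi-worker dataloader may arrive out of order, so each embedding is tracked with its sample index and sorted back afterwards. The `batches` structure here is a simplified stand-in for dataloader output, not lightly's internal representation.

```python
def restore_dataset_order(batches):
    """Flatten (indices, embeddings) batches and sort the embeddings
    back into the original dataset order."""
    pairs = []
    for indices, embeddings in batches:
        pairs.extend(zip(indices, embeddings))
    # sort by the sample index each embedding was computed for
    pairs.sort(key=lambda p: p[0])
    return [emb for _, emb in pairs]

# batches arriving out of order from different workers
batches = [([2, 3], ["e2", "e3"]), ([0, 1], ["e0", "e1"])]
print(restore_dataset_order(batches))  # ['e0', 'e1', 'e2', 'e3']
```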

Improvements
- The use of ResNet backbones in the example models is now consistent. Thanks for bringing this up, JeanKaddour!
- The SimCLR example no longer uses Gaussian blur, in line with the paper. Thanks for pointing this out, littleolex!
- The BarlowTwins example now also uses an input size of 32, making it consistent with the other examples. Thanks for bringing this up, heytitle!
- The documentation for setting up Azure as cloud storage for the Lightly Platform has been improved.

1.2.2

Bug fixes and documentation updates

Documentation Updates
We have added support for other cloud storage providers. You can now work directly with data stored in AWS S3, Azure Blob Storage, and Google Cloud Storage. Furthermore, you can stream data directly from your local filesystem in the [Lightly Platform](https://app.lightly.ai) without uploading any images or videos to any cloud. [Check out the instructions here!](https://docs.lightly.ai/getting_started/platform.html#create-a-dataset-from-a-local-folder-or-cloud-bucket)

Performance
- We improved the dataset indexing used whenever you create a lightly dataset. Indexing of large datasets (more than 1 million samples) is now much faster.
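One common way to speed up directory indexing, shown here as a sketch: `os.scandir` avoids a separate `stat()` call per entry and scales well to large trees. This illustrates the general technique; it is not necessarily the change lightly made.

```python
import os

def index_files(root):
    """Recursively collect file paths under `root` using os.scandir,
    which reuses directory-entry metadata instead of stat-ing each path."""
    files = []
    stack = [root]
    while stack:
        path = stack.pop()
        with os.scandir(path) as it:
            for entry in it:
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                else:
                    files.append(entry.path)
    return sorted(files)

# usage on a small temporary tree
import tempfile, pathlib
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "sub").mkdir()
pathlib.Path(tmp, "a.png").touch()
pathlib.Path(tmp, "sub", "b.png").touch()
print(len(index_files(tmp)))  # 2
```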

1.2.1

Bug fixes and documentation updates

Bug fixes
We fixed a bug where `lightly-download` with the `exclude_parent_tag` option did not work, as well as a bug in `api_workflow_upload_metadata` introduced by the change that made the API's members private.

Documentation Updates
The docs have received a fresh new look! Additionally, we have added tutorials on how to use [Lightly with data hosted on S3](https://docs.lightly.ai/getting_started/platform.html#how-to-use-s3-with-lightly) and how to [export data directly to Label Studio](https://docs.lightly.ai/tutorials/platform/tutorial_label_studio_export.html) (no download needed!).

CLI
The CLI now stores important results in environment variables: the `dataset_id` in `LIGHTLY_LAST_DATASET_ID`, the path to the embeddings in `LIGHTLY_LAST_EMBEDDING_PATH`, and the path to the checkpoint in `LIGHTLY_LAST_CHECKPOINT_PATH`.
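The stored values can then be picked up by follow-up commands or scripts. The variable names below are taken from the release note; the echo formatting is just illustrative.

```shell
# Inspect the values stored by the last CLI run; the fallback text
# is printed when a variable has not been set yet.
echo "dataset:    ${LIGHTLY_LAST_DATASET_ID:-<not set>}"
echo "embeddings: ${LIGHTLY_LAST_EMBEDDING_PATH:-<not set>}"
echo "checkpoint: ${LIGHTLY_LAST_CHECKPOINT_PATH:-<not set>}"
```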

1.2.0

Low-Level Building Blocks
To improve the flexibility of the lightly framework we refactored our models into smaller building blocks. These blocks can now easily be assembled into novel model architectures and allow lightly to better integrate with other deep learning libraries such as PyTorch Lightning.
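The building-block idea can be sketched as follows: a small projection head and a thin wrapper that composes it with any backbone. The class names and the tiny stand-in backbone here are illustrative, not lightly's actual API; see the Examples section of the docs for the real building blocks.

```python
import torch
from torch import nn

class ProjectionHead(nn.Module):
    """A small MLP head, one of the reusable blocks."""
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=128):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x):
        return self.layers(x)

class SimCLRLike(nn.Module):
    """Assemble a SimCLR-style model from a backbone and a head."""
    def __init__(self, backbone, head):
        super().__init__()
        self.backbone = backbone
        self.head = head

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        return self.head(features)

# tiny stand-in backbone; in practice this would be e.g. a torchvision ResNet
backbone = nn.Sequential(nn.Conv2d(3, 512, 3), nn.AdaptiveAvgPool2d(1))
model = SimCLRLike(backbone, ProjectionHead())
z = model(torch.randn(4, 3, 32, 32))
print(z.shape)  # torch.Size([4, 128])
```

Because the head is a separate module, swapping it out (or adding a prediction head for SimSiam-style models) does not require touching the backbone.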

Examples
We provide example implementations using low-level building blocks for all the models we support in the new [Examples](https://docs.lightly.ai/examples/models.html) section in our documentation.

We also updated the other parts of the documentation to use the new building blocks. We hope that this makes it easier to integrate lightly into your own project!

Deprecation
As part of this refactoring, we have added a deprecation warning to all old models under `lightly/models`. We intend to remove these models in version 1.3.0.

Detectron2 Pretraining Tutorial
We created a new [tutorial](https://docs.lightly.ai/tutorials/package/tutorial_pretrain_detectron2.html) which shows how to use lightly to pre-train an object detection model with the [Detectron2](https://github.com/facebookresearch/detectron2) framework.

1.1.23

Bugfix: Upload Embeddings
Uploading embeddings to a new dataset through `lightly-magic` or `lightly-upload` raised an error. This is now fixed. Thanks to natejenkins for the help!

Bugfix: `lightly-download` with integer tag names
`lightly-download` now supports downloading datasets with integer tag names.

Readme Overview Image
The overview image in the readme file should now point again to the right address. Thanks vnshanmukh for the contribution!

VideoDatasets are more efficient
VideoDatasets now precompute the dataset length instead of recalculating it every time a frame is accessed.
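The change amounts to computing the total once up front instead of on every access. A toy sketch, where `frame_counts` stands in for per-video frame counts read from disk (names are illustrative, not lightly's actual classes):

```python
class VideoDatasetSketch:
    """Precompute the dataset length once in __init__ rather than
    re-summing the per-video frame counts on every __len__ call."""
    def __init__(self, frame_counts):
        self.frame_counts = list(frame_counts)
        # computed once; previously this sum ran on every access
        self._length = sum(self.frame_counts)

    def __len__(self):
        return self._length

ds = VideoDatasetSketch([120, 90, 300])
print(len(ds))  # 510
```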

Other
- Added `scikit-learn` and `pandas` to the dev dependencies.
- Added more API tests.
