Lightly

1.1.16

Improved Tutorial and Bug Fix in Masked Select

Improved Tutorial
The "Sunflowers" Tutorial has been overhauled and provides a great starting point for anyone trying to clean up their data.

Bug Fix in Masked Select
A major bug fix resolves confusion between the little-endian and big-endian representations of the bit masks used for active learning.
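
As an illustration of the ambiguity this fixes, the same sequence of boolean flags packs into different byte values depending on the assumed bit order. The snippet below is a generic NumPy sketch, not lightly's internal mask format.

```python
import numpy as np

# Eight boolean flags with only the first one set.
flags = np.array([1, 0, 0, 0, 0, 0, 0, 0], dtype=np.uint8)

# "big" treats the first flag as the most significant bit of the byte,
# "little" treats it as the least significant bit.
print(np.packbits(flags, bitorder="big"))     # -> [128]
print(np.packbits(flags, bitorder="little"))  # -> [1]
```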

Updated Requirements
`lightly` now requires any patch release of the latest minor version (`0.0.*`) of the `lightly-utils` package instead of a pinned version. This allows bug fixes and updates from the maintainers to reach users more quickly.
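
For context, such a wildcard requirement looks roughly like the following in a `setup.py`; the exact specifier used by `lightly` is an assumption here.

```python
# Hypothetical excerpt of a setup.py dependency list: any 0.0.x release of
# lightly-utils is accepted instead of a single pinned version.
install_requires = [
    "lightly-utils==0.0.*",
]
```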

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.15

Resume Upload and Minor Updates

Resume Upload
The upload of a dataset can now be resumed if interrupted, as the `lightly-upload` and `lightly-magic` commands will skip files which are already on the platform.
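
Conceptually, resuming boils down to skipping filenames the platform already knows about. The sketch below is only an illustration with hypothetical stand-ins for the platform API; it is not lightly's implementation.

```python
from pathlib import Path
from typing import Callable, Set


def resume_upload(input_dir: str,
                  existing_filenames: Set[str],
                  upload_file: Callable[[Path], None]) -> None:
    """Upload only the files that are not yet on the platform.

    `existing_filenames` and `upload_file` are hypothetical stand-ins for
    the platform API; they are not part of the lightly package.
    """
    for path in sorted(Path(input_dir).glob("*")):
        if path.name in existing_filenames:
            continue  # already uploaded during a previous, interrupted run
        upload_file(path)
```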

Minor Updates
Filenames of images which are uploaded to the platform can now be up to 255 characters long.
Lightly can now be [cited](https://github.com/lightly-ai/lightly#bibtex) :)

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.14

Lightly-Crop Command, Much Faster Upload, Faster NTXent Loss

The `lightly-crop` CLI command crops objects out of the input images based on labels and copies them into an output folder. This is very useful for doing self-supervised learning on an object level instead of an image level. For more information, see the documentation at https://docs.lightly.ai/getting_started/command_line_tool.html#crop-images-using-labels-or-predictions
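
The cropping itself is conceptually simple: each labeled bounding box is cut out of the source image and written to the output folder. The following PIL-based sketch shows the idea, assuming boxes are given as `(x_min, y_min, x_max, y_max)` pixel coordinates; it is not the `lightly-crop` implementation.

```python
from pathlib import Path
from typing import Iterable, Tuple

from PIL import Image


def crop_objects(image_path: str,
                 boxes: Iterable[Tuple[int, int, int, int]],
                 output_dir: str) -> None:
    """Crop every (x_min, y_min, x_max, y_max) box out of the image and save it."""
    image = Image.open(image_path)
    out = Path(output_dir)
    out.mkdir(parents=True, exist_ok=True)
    for i, box in enumerate(boxes):
        crop = image.crop(box)  # PIL expects (left, upper, right, lower)
        crop.save(out / f"{Path(image_path).stem}_object_{i}.png")
```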

We made the upload to the Lightly Platform via `lightly-upload` or `lightly-magic` much faster. It should be at least twice as fast for smaller images and even faster for large, compressed images such as big JPEGs.

The NTXent loss is now faster thanks to optimized transfers between CPU and GPU.
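
The general idea behind such a speed-up is to keep the tensors involved in the loss computation on the GPU and avoid unnecessary host/device copies. Below is a minimal usage sketch of `NTXentLoss`; the device handling shown illustrates the principle rather than the internal change itself.

```python
import torch

from lightly.loss import NTXentLoss

device = "cuda" if torch.cuda.is_available() else "cpu"
criterion = NTXentLoss(temperature=0.5)

# Two batches of embeddings from two augmented views, already on the GPU,
# so the loss is computed without extra CPU<->GPU transfers.
z0 = torch.randn(128, 32, device=device)
z1 = torch.randn(128, 32, device=device)
loss = criterion(z0, z1)
```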

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.13

More CLI parameters, Bugfixes, Documentation

This release adds the new CLI parameter `trainer.weights_summary`, which sets the corresponding parameter of the PyTorch Lightning trainer and controls how much information about your embedding model is printed.
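
Under the hood this maps to the `weights_summary` argument of the PyTorch Lightning `Trainer`, which in Lightning versions contemporary with this release accepted values such as `"top"`, `"full"`, or `None` (the argument has since been replaced in newer Lightning releases). A minimal sketch of the equivalent Python configuration, under that assumption:

```python
import pytorch_lightning as pl

# Equivalent of passing trainer.weights_summary=full on the lightly CLI:
# print a layer-by-layer summary of the full model before training.
trainer = pl.Trainer(max_epochs=10, weights_summary="full")
```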

It also includes some bugfixes and documentation improvements.

Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.12

New ImageNette Benchmarks and Faster Dataset Indexing

This release contains smaller fixes on the data processing side:

- Dataset indexing is now up to twice as fast when working with larger datasets
- By default, we no longer use `0` workers. The new default of `-1` automatically detects the number of available CPU cores and uses all of them (see the sketch below), which can speed up both loading data and uploading data to the Lightly Platform.
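
A minimal sketch of how a `-1` default can be resolved to the number of available cores; this mirrors the described behaviour but is not lightly's actual code.

```python
import os


def resolve_num_workers(num_workers: int = -1) -> int:
    """Resolve the -1 sentinel to the number of available CPU cores."""
    if num_workers == -1:
        return os.cpu_count() or 1  # os.cpu_count() may return None
    return num_workers
```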

New ImageNette Benchmarks
We added new benchmarks for the ImageNette dataset.

| Model | Epochs | Batch Size | Test Accuracy |
|-------------|--------|------------|---------------|
| MoCo | 800 | 256 | 0.827 |
| SimCLR | 800 | 256 | 0.847 |
| SimSiam | 800 | 256 | 0.827 |
| BarlowTwins | 800 | 256 | 0.801 |
| BYOL | 800 | 256 | 0.851 |


Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)

1.1.11

Nearest Neighbour Contrastive Learning of Representations (NNCLR)

New NNCLR model
NNCLR [0] is essentially SimCLR, but it replaces samples with their nearest neighbours from a memory bank as an additional "augmentation" step.
As part of this, a nearest-neighbour memory bank module was implemented, which can also be used with other models.

[0] [With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations](https://arxiv.org/abs/2104.14548v1)

```python
import torch
import torchvision
from torch import nn

# Imports added for completeness; exact module paths may differ between
# lightly versions.
from lightly.loss import NTXentLoss, SymNegCosineSimilarityLoss
from lightly.models import BYOL, NNCLR, SimSiam
from lightly.models.modules import NNMemoryBankModule

resnet = torchvision.models.resnet18()
backbone = nn.Sequential(
    *list(resnet.children())[:-1],
    nn.AdaptiveAvgPool2d(1),
)

# NNCLR
model = NNCLR(backbone)
criterion = NTXentLoss()

# Prefer SimSiam with nearest neighbour?
# model = SimSiam(backbone)
# criterion = SymNegCosineSimilarityLoss()

# Prefer BYOL with nearest neighbour?
# model = BYOL(backbone)
# criterion = SymNegCosineSimilarityLoss()

nn_replacer = NNMemoryBankModule(size=2 ** 16)

# forward pass: dummy batches standing in for two augmented views of the
# same images
x0 = torch.randn(8, 3, 224, 224)
x1 = torch.randn(8, 3, 224, 224)
(z0, p0), (z1, p1) = model(x0, x1)
z0 = nn_replacer(z0.detach(), update=False)
z1 = nn_replacer(z1.detach(), update=True)
loss = 0.5 * (criterion(z0, p1) + criterion(z1, p0))
```


Models

- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
