Refactor Models, SwAV Model, S3-Bucket Integration
Refactor Models
This release makes it much easier to implement new models or adapt existing ones by composing basic building blocks. For example, you can define your own model out of blocks such as a backbone, a projection head, a momentum encoder, or a nearest-neighbour memory bank.
This makes it easy to see how the models in current papers are built, and that different papers often differ in only one or two of these blocks.
Compatible examples of all models are shown in the benchmarking scripts for [imagenette](https://github.com/lightly-ai/lightly/blob/master/docs/source/getting_started/benchmarks/imagenette_benchmark.py) and [cifar10](https://github.com/lightly-ai/lightly/blob/master/docs/source/getting_started/benchmarks/cifar10_benchmark.py).
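To give a flavour of the new API, here is a minimal sketch of a SimCLR model composed from a backbone and a projection head. It assumes the low-level blocks live under `lightly.models.modules.heads` as in the linked documentation; exact module paths and signatures may vary between versions.

```python
import torch
import torchvision
from lightly.models.modules.heads import SimCLRProjectionHead

class SimCLR(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Backbone: a ResNet with its classification layer removed.
        resnet = torchvision.models.resnet18()
        self.backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
        # Projection head: maps backbone features into the embedding space
        # in which the contrastive loss is computed.
        self.projection_head = SimCLRProjectionHead(512, 512, 128)

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        return self.projection_head(features)
```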
As part of this refactoring to improve the flexibility of the framework, we have added a deprecation warning to all old models under `lightly/models`, e.g.:
> The high-level building block NNCLR will be deprecated in version 1.2.0.
> Use low-level building blocks instead.
> See https://docs.lightly.ai/lightly.models.html for more information
These models will be removed in the upcoming version 1.2. The refactoring was necessary because the old models lacked flexibility, which made it difficult to keep up with the latest publications.
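For migration, a deprecated high-level model can be rebuilt from the low-level blocks. The sketch below does this for NNCLR; compared to the SimCLR sketch above, it only adds a prediction head and a nearest-neighbour memory bank, which illustrates how little the models differ. The module paths under `lightly.models.modules` are assumptions based on the linked docs and may differ between versions.

```python
import torch
import torchvision
from lightly.models.modules import NNMemoryBankModule
from lightly.models.modules.heads import NNCLRPredictionHead, NNCLRProjectionHead

class NNCLR(torch.nn.Module):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18()
        self.backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
        self.projection_head = NNCLRProjectionHead(512, 512, 128)
        self.prediction_head = NNCLRPredictionHead(128, 512, 128)
        # Memory bank from which the nearest neighbour of each projection
        # is drawn as the positive sample.
        self.memory_bank = NNMemoryBankModule(size=4096)

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        p = self.prediction_head(z)
        # Replace each projection by its nearest neighbour from the bank
        # and update the bank with the current batch.
        nn_z = self.memory_bank(z.detach(), update=True)
        return nn_z, p
```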
SwAV Model
Lightly now supports the [SwAV (Swapping Assignments between Views) paper](https://arxiv.org/abs/2006.09882). Thanks to the new building-block system, we were able to implement it quickly.
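A SwAV model can be composed in the same way: a backbone, a projection head, and a new prototypes block that produces the cluster assignment scores. As above, the module names are assumptions based on the current docs; check the documentation for the exact API.

```python
import torch
import torchvision
from lightly.models.modules.heads import SwaVProjectionHead, SwaVPrototypes

class SwaV(torch.nn.Module):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18()
        self.backbone = torch.nn.Sequential(*list(resnet.children())[:-1])
        self.projection_head = SwaVProjectionHead(512, 512, 128)
        # Learnable prototypes; their similarities to the normalized
        # projections yield the scores used for swapped cluster assignments.
        self.prototypes = SwaVPrototypes(128, n_prototypes=512)

    def forward(self, x):
        features = self.backbone(x).flatten(start_dim=1)
        z = self.projection_head(features)
        z = torch.nn.functional.normalize(z, dim=1)
        return self.prototypes(z)
```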
S3-Bucket Integration
- We added documentation on how to use an S3 bucket as the input directory for Lightly. This allows you to train your model and create embeddings without having to download all your data first.
Other
- When uploading embeddings to the Lightly Platform, the file `embeddings_sorted.csv` is no longer created, as it was only used internally. We also made the upload of large embedding files faster.
Models
- [BYOL: Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/abs/2104.14548)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, 2020](https://arxiv.org/abs/2006.09882)