Lightly

Latest version: v1.5.15


1.5.15

Changes

New transforms
- Add PhaseShiftTransform ([#1714](https://github.com/lightly-ai/lightly/issues/1714)) by pearguacamole
- Add FDATransform ([#1734](https://github.com/lightly-ai/lightly/issues/1734)) by vectorvp

Switch to version-independent torchvision transforms
- If torchvision transforms v2 are available, they are used; otherwise, torchvision transforms v1 are used. For details, see [this comment](https://github.com/lightly-ai/lightly/issues/1547#issuecomment-2124050272).
- Add a transform for DetCon and MultiViewTransformV2 for torchvision.transforms.v2 ([#1737](https://github.com/lightly-ai/lightly/issues/1737))
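The v2-with-v1-fallback dispatch can be sketched as below. This is an illustrative pattern only, not lightly's actual implementation, and the helper name is hypothetical; it uses only the standard library, so it degrades gracefully when torchvision is not installed.

```python
import importlib.util


def preferred_transforms_module() -> str:
    """Pick the torchvision transforms namespace: v2 when available, else v1.

    Hypothetical helper illustrating the fallback pattern described above.
    """
    try:
        # find_spec checks availability without importing the transforms themselves.
        spec = importlib.util.find_spec("torchvision.transforms.v2")
    except ModuleNotFoundError:
        # torchvision itself is not installed.
        spec = None
    if spec is not None:
        return "torchvision.transforms.v2"
    return "torchvision.transforms"
```

Callers would then import from the returned namespace, so user code works unchanged across torchvision versions.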

Typing, naming & docstring improvements
- Type `data/_utils` ([#1740](https://github.com/lightly-ai/lightly/issues/1740)), `data/_helpers` ([#1742](https://github.com/lightly-ai/lightly/issues/1742)) and `tests/models` ([#1744](https://github.com/lightly-ai/lightly/issues/1744)) by vectorvp
- Cleanup: docstrings in the lightly/data subpackage ([#1741](https://github.com/lightly-ai/lightly/issues/1741)) by ChiragAgg5k
- Refactor: update naming and remove unused package from AmplitudeRescaleTransform ([#1732](https://github.com/lightly-ai/lightly/issues/1732)) by vectorvp

Other
- Fix DINOProjectionHead BatchNorm handling ([#1729](https://github.com/lightly-ai/lightly/issues/1729))
- Add masked average pooling for pooling with segmentation masks (DetCon) ([#1739](https://github.com/lightly-ai/lightly/issues/1739))
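The masked average pooling mentioned above can be sketched in plain PyTorch: given a feature map and one binary mask per region, average the features inside each mask. The function name and tensor shapes here are assumptions for illustration, not lightly's actual API.

```python
import torch


def masked_average_pool(features: torch.Tensor, masks: torch.Tensor) -> torch.Tensor:
    """Average-pool features within each binary mask region.

    features: (B, C, H, W) feature map
    masks:    (B, K, H, W) binary masks, one per region
    returns:  (B, K, C) one pooled feature vector per mask
    """
    masks = masks.to(features.dtype)
    # Sum features over each mask region: (B, K, C).
    pooled = torch.einsum("bkhw,bchw->bkc", masks, features)
    # Normalize by the mask area, guarding against empty masks.
    area = masks.sum(dim=(2, 3)).clamp(min=1e-8).unsqueeze(-1)  # (B, K, 1)
    return pooled / area
```

This yields one embedding per segmentation region instead of one per image, which is what DetCon-style objectives contrast.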

Many thanks to all of our contributors!

Models
- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https://arxiv.org/pdf/2401.08541.pdf)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training, 2021](https://arxiv.org/abs/2011.09157)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https://link.springer.com/chapter/10.1007/978-3-031-16788-1_4)
- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https://arxiv.org/abs/2301.08243)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [PMSN: Prior Matching for Siamese Networks, 2022](https://arxiv.org/abs/2210.07277)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)

1.5.14

Changes

New transforms
- [Add RFFT2D and IRFFT2D transforms](https://github.com/lightly-ai/lightly/commit/cb4f1f68ad6967c04500d540b16a22427ba211f8) by snehilchatterjee
- [Add RandomFrequencyMaskTransform](https://github.com/lightly-ai/lightly/commit/9da0a244776bf24b6486f9ce77b813b6953870e7) by payo101
- [Add GaussianMixtureMaskTransform](https://github.com/lightly-ai/lightly/commit/fe7664a3959ba4bca2099d53863d4592a38fb396) by snehilchatterjee
- [Add AmplitudeRescaleTransform](https://github.com/lightly-ai/lightly/commit/9578268ee32465bac357196b2af043f1c130bb2e) by payo101
- Better support for both torchvision.transforms v1 and v2 without warnings/errors.

Added and updated docstrings
- Many improvements by Prathamesh010, ayush22iitbhu, ChiragAgg5k, and HarshitVashisht11

Docs improvements
- Improvements to the README.md by bhargavshirin, kushal34712, eltociear, Mefisto04, and ayush22iitbhu
- Improvements to other parts of the docs and tutorials by jizhang02
- Fix examples on Windows by snehilchatterjee
- Improve CONTRIBUTING.md by Prathamesh010
- Add a back-to-top button for easier navigation by hackit-coder

More and better typing
- Test typing for all supported Python versions
- Type serve.py by ishaanagw
- Cleanup: _image.py and _utils.py in the data subpackage by ChiragAgg5k

Better formatting
- Move classes and public functions to the top of the file, by fadkeabhi and SauravMaheshkar

Other
- [Print aggregate metrics at end of benchmarks as a Markdown table](https://github.com/lightly-ai/lightly/commit/b6955fd40b9b8e2f11cbd6d291820281ed47ba3a) by EricLiclair

Many thanks to all of our contributors!

Models
- [AIM: Scalable Pre-training of Large Autoregressive Image Models](https://arxiv.org/pdf/2401.08541.pdf)
- [Barlow Twins: Self-Supervised Learning via Redundancy Reduction, 2021](https://arxiv.org/abs/2103.03230)
- [Bootstrap your own latent: A new approach to self-supervised Learning, 2020](https://arxiv.org/abs/2006.07733)
- [DCL: Decoupled Contrastive Learning, 2021](https://arxiv.org/abs/2110.06848)
- [DenseCL: Dense Contrastive Learning for Self-Supervised Visual Pre-Training, 2021](https://arxiv.org/abs/2011.09157)
- [DINO: Emerging Properties in Self-Supervised Vision Transformers, 2021](https://arxiv.org/abs/2104.14294)
- [FastSiam: Resource-Efficient Self-supervised Learning on a Single GPU, 2022](https://link.springer.com/chapter/10.1007/978-3-031-16788-1_4)
- [I-JEPA: Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture, 2023](https://arxiv.org/abs/2301.08243)
- [MAE: Masked Autoencoders Are Scalable Vision Learners, 2021](https://arxiv.org/abs/2111.06377)
- [MSN: Masked Siamese Networks for Label-Efficient Learning, 2022](https://arxiv.org/abs/2204.07141)
- [MoCo: Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- [NNCLR: Nearest-Neighbor Contrastive Learning of Visual Representations, 2021](https://arxiv.org/pdf/2104.14548.pdf)
- [PMSN: Prior Matching for Siamese Networks, 2022](https://arxiv.org/abs/2210.07277)
- [SimCLR: A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)
- [SimMIM: A Simple Framework for Masked Image Modeling, 2021](https://arxiv.org/abs/2111.09886)
- [SimSiam: Exploring Simple Siamese Representation Learning, 2020](https://arxiv.org/abs/2011.10566)
- [SMoG: Unsupervised Visual Representation Learning by Synchronous Momentum Grouping, 2022](https://arxiv.org/pdf/2207.06167.pdf)
- [SwAV: Unsupervised Learning of Visual Features by Contrasting Cluster Assignments, M. Caron, 2020](https://arxiv.org/abs/2006.09882)
- [TiCo: Transformation Invariance and Covariance Contrast for Self-Supervised Visual Representation Learning, 2022](https://arxiv.org/pdf/2206.10698.pdf)
- [VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning, Bardes, A. et. al, 2022](https://arxiv.org/abs/2105.04906)
- [VICRegL: VICRegL: Self-Supervised Learning of Local Visual Features, 2022](https://arxiv.org/abs/2210.01571)

1.5.13

- Support Python 3.12, thanks MalteEbner
- Update cosine warmup scheduler, thanks guarin

1.5.12

- Use TiCoTransform everywhere
- Refactor DINOLoss to not use center module
- Add CenterCrop to val transform

Dependencies
- Make library compatible with torch 1.10, torchvision 0.11, and pytorch lightning 1.6 (by using [uv](https://github.com/astral-sh/uv)), thanks guarin

Docs
- Add notebooks, thanks SauravMaheshkar
- Add Timm Backbone Tutorial, thanks SauravMaheshkar
- Further docs and tutorial improvements

1.5.11

- Added IBOTPatchLoss, KoLeoLoss and block masking, thanks guarin
- Allow learnable positional embeddings and boolean masking in masked vision transformer
- Refactor IJEPA to use timm, thanks radiradev

Dependencies
- Allow NumPy 2, thanks adamjstewart
- Removed lightning-bolts dependency

Docs
- Add finetuning tutorial, thanks SauravMaheshkar
- Fix MoCo link in DenseCL docs and further docs and tutorial improvements

1.5.10

- Add the DenseCL (Dense Contrastive Learning for Self-Supervised Visual Pre-Training) method. See the [docs](https://docs.lightly.ai/self-supervised-learning/examples/densecl.html).
- Add TiCoTransform, thanks radiradev!
- Improvements to the pre-commit hooks, thanks SauravMaheshkar!
- Fix memory bank issue when using `gather_distributed=True` and training on a single GPU
- Fix student head update in DINO benchmark
- Various improvements to MaskedVisionTransformer
- Rename Lightly SSL to Lightly**SSL**
