MoCo and New Documentation
New Model: MoCo
`lightly.models.moco.ResNetMoCo` implements the momentum encoder architecture for self-supervised visual representation learning.
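To illustrate the momentum encoder idea from the MoCo paper, here is a minimal sketch in plain PyTorch. It is not lightly's internal implementation; the names `backbone` and `momentum_backbone` and the momentum value `m` are illustrative:

```python
import copy
import torch
import torchvision

# Query encoder: updated by backpropagation as usual.
backbone = torchvision.models.resnet18(num_classes=128)

# Key (momentum) encoder: a copy whose weights are updated only
# through an exponential moving average of the query encoder.
momentum_backbone = copy.deepcopy(backbone)
for param in momentum_backbone.parameters():
    param.requires_grad = False

@torch.no_grad()
def momentum_update(m: float = 0.999):
    """EMA update from the MoCo paper: theta_k = m * theta_k + (1 - m) * theta_q."""
    for q, k in zip(backbone.parameters(), momentum_backbone.parameters()):
        k.data = m * k.data + (1.0 - m) * q.data
```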
`lightly.loss.memory_bank.MemoryBankWrapper` enables training self-supervised models with a queue of negative samples.
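The idea behind a memory bank is a fixed-size FIFO queue of past embeddings that serve as negatives for the contrastive loss. A rough sketch of that mechanism in plain PyTorch (the `MemoryBank` class below is hypothetical, not the actual `MemoryBankWrapper` API):

```python
import torch

class MemoryBank:
    """Fixed-size FIFO queue of past embeddings used as negatives (illustrative)."""

    def __init__(self, size: int = 4096, dim: int = 128):
        # Initialize with random unit vectors until real embeddings arrive.
        self.bank = torch.nn.functional.normalize(torch.randn(size, dim), dim=1)
        self.ptr = 0

    @torch.no_grad()
    def dequeue_and_enqueue(self, batch: torch.Tensor):
        """Replace the oldest entries with the newest batch of embeddings."""
        n = batch.shape[0]
        assert self.bank.shape[0] % n == 0  # keep the pointer arithmetic simple
        self.bank[self.ptr:self.ptr + n] = batch.detach()
        self.ptr = (self.ptr + n) % self.bank.shape[0]
```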
New Documentation
The URL for the documentation has changed to https://docs.lightly.ai.
A new section explains [how lightly works](https://docs.lightly.ai/getting_started/lightly_at_a_glance.html).
New tutorials have been added. Check out the [Pizza Tutorial](https://docs.lightly.ai/tutorials/platform/plot_pizza_filter.html) to learn how to train a pizza classifier.
Further Changes:
- Refactoring of `lightly.api`.
- Default collate functions implementing the SimCLR and MoCo (v1) transformations (see the sketch after this list).
- Collate functions now accept a tuple as `input_size`.
- New tests and tox environments.
- Removed the `sklearn` dependency for PCA (see the PCA sketch below).
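As referenced in the list above, the default collate functions produce two randomly augmented views of every image in a batch, which is what contrastive methods like SimCLR and MoCo train on. A minimal sketch of that pattern in plain PyTorch/torchvision (the transform parameters and the `two_view_collate` helper are illustrative, not the library's actual defaults):

```python
import torch
import torchvision.transforms as T

# Illustrative SimCLR-style augmentation pipeline; the exact parameters
# used by the default collate functions may differ.
transform = T.Compose([
    T.RandomResizedCrop(size=(64, 64)),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.8, 0.8, 0.8, 0.2),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

def two_view_collate(batch):
    """Turn a list of (PIL image, label) pairs into two augmented views per image."""
    images, labels = zip(*batch)
    view_a = torch.stack([transform(img) for img in images])
    view_b = torch.stack([transform(img) for img in images])
    return (view_a, view_b), torch.tensor(labels)
```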
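Dropping `sklearn` is possible because PCA reduces to a singular value decomposition of the centered data. A minimal dependency-free sketch in NumPy (illustrative; not necessarily the code the package ships):

```python
import numpy as np

def pca(X: np.ndarray, n_components: int) -> np.ndarray:
    """Project the rows of X onto the top principal components via SVD."""
    X_centered = X - X.mean(axis=0)
    # The right singular vectors of the centered data are the principal axes.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T
```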
Models:
- MoCo: [Momentum Contrast for Unsupervised Visual Representation Learning, 2019](https://arxiv.org/abs/1911.05722)
- SimCLR: [A Simple Framework for Contrastive Learning of Visual Representations, 2020](https://arxiv.org/abs/2002.05709)