### New Features
- Support various popular backbones (ConvNets and ViTs).
- Support mixed precision training (NVIDIA Apex or MMCV); a config sketch follows this list.
- Support supervised, self- & semi-supervised learning methods and benchmarks.
- Support fast config generation from a base config file via `auto_train.py`.
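
For the mixed precision item above, a rough config sketch is shown below. The `fp16` field with a `loss_scale` key follows the common MMCV convention and is an assumption here, not this repo's confirmed interface.

```python
# Hedged sketch: enabling mixed precision training in an MMCV-style config.
# The `fp16` field and `loss_scale` key follow the usual MMCV convention
# and are assumptions, not necessarily this repo's exact interface.
fp16 = dict(loss_scale='dynamic')   # dynamic loss scaling
# fp16 = dict(loss_scale=512.0)     # or a fixed loss scale
```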
### Bug Fixes
- Fix bugs introduced during code refactoring (backbones, fp16 training, etc.).
### OpenSelfSup (v0.3.0, 14/10/2020) Supported Features
This repo was originally built on OpenSelfSup (the predecessor of [MMSelfSup](https://github.com/open-mmlab/mmselfsup)) and borrows some implementations from [MMClassification](https://github.com/open-mmlab/mmclassification).
- Mixed Precision Training (based on NVIDIA Apex for **PyTorch 1.6**).
- An improved GaussianBlur implementation doubles the training speed of MoCo v2, SimCLR, and BYOL (a sketch of the augmentation follows this list).
- More benchmarking results, including evaluations on Places, VOC, and COCO, as well as linear and semi-supervised benchmarks.
- Fix bugs in MoCo v2 and BYOL so that the reported results are reproducible.
- Provide benchmarking results and model download links.
- Support updating the network only every few iterations (gradient accumulation); see the sketch after this list.
- Support the LARS and LAMB optimizers with Nesterov momentum (LAMB is adopted from [MMClassification](https://github.com/open-mmlab/mmclassification)).
- Support parameter-wise settings that exclude specific parameters (e.g., biases and normalization weights) from optimizer behaviors such as weight decay (a config sketch follows this list).
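
For context on the GaussianBlur item above: the augmentation in question is the random Gaussian blur used in MoCo v2 / SimCLR / BYOL pipelines. The sketch below shows the general shape of such a transform, assuming a PIL-based pipeline; the class name and defaults are illustrative, not the repo's exact API, and the sketch does not reflect the specific speed fix.

```python
import random
from PIL import Image, ImageFilter

class RandomGaussianBlur:
    """Illustrative random Gaussian blur, as used in MoCo v2-style
    augmentation pipelines (names and defaults are assumptions)."""

    def __init__(self, sigma_min=0.1, sigma_max=2.0, p=0.5):
        self.sigma_min = sigma_min  # lower bound of the blur radius
        self.sigma_max = sigma_max  # upper bound of the blur radius
        self.p = p                  # probability of applying the blur

    def __call__(self, img: Image.Image) -> Image.Image:
        if random.random() < self.p:
            sigma = random.uniform(self.sigma_min, self.sigma_max)
            img = img.filter(ImageFilter.GaussianBlur(radius=sigma))
        return img
```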
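The accumulation item above refers to the standard gradient accumulation trick: gradients from several small batches are summed before a single optimizer step, emulating a larger effective batch. A minimal PyTorch sketch (function and argument names are hypothetical):

```python
import torch

def train_with_accumulation(model, loader, optimizer, criterion, accum_steps=4):
    """Update the network once every `accum_steps` iterations by
    accumulating gradients (a minimal sketch; names are illustrative)."""
    model.train()
    optimizer.zero_grad()
    for i, (images, targets) in enumerate(loader):
        loss = criterion(model(images), targets)
        # Scale the loss so the accumulated gradients match one large batch.
        (loss / accum_steps).backward()
        if (i + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
    # Leftover gradients from a final partial cycle are ignored for brevity.
```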
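For the two optimizer items above, an OpenSelfSup-style config might look roughly like the following. The `paramwise_options` and `lars_exclude` keys reflect my reading of the repo's config convention and should be treated as assumptions; the values are illustrative.

```python
# Hedged sketch of an optimizer config: LARS with Nesterov momentum,
# excluding normalization parameters and biases from weight decay and
# from the LARS adaptation. Key names (`paramwise_options`,
# `lars_exclude`) are assumptions about the repo's convention.
optimizer = dict(
    type='LARS', lr=4.8, momentum=0.9, weight_decay=1e-6, nesterov=True,
    paramwise_options={
        r'(bn|gn)(\d+)?.(weight|bias)': dict(weight_decay=0., lars_exclude=True),
        'bias': dict(weight_decay=0., lars_exclude=True),
    })
```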