- DatasetInterface class removal: data loading is now handled by torch DataLoader objects configured by the recipes under src/training/recipes and built with super_gradients.dataloaders.get() ([see the updated tutorials and snippets](https://github.com/Deci-AI/super-gradients#readme)); a minimal sketch follows this list.
- Trainer.build_model() removal: model initialization is now done with super_gradients.models.get() ([see the updated tutorials and notebooks](https://github.com/Deci-AI/super-gradients#readme)); a minimal sketch follows this list.
- DDP can now be launched from code (no need for `python -m torch.distributed.launch ...`); see the new snippets [here](https://github.com/Deci-AI/super-gradients#using-ddp) and the sketch after this list.
- Updated notebooks, tutorials and code snippets in [readme.md](https://github.com/Deci-AI/super-gradients#readme).
- Recipe training hyper-parameter configs can now be extracted with super_gradients.training_hyperparams.get() ([see the updated tutorials and notebooks](https://github.com/Deci-AI/super-gradients#readme)); a minimal sketch follows this list.
- Simplified resume: resuming is now requested through train_params in SgTrainer.train() (see the updated snippets in [readme.md](https://github.com/Deci-AI/super-gradients#readme) and the sketch after this list).
- Removal of "loss_loggging_items_names" from train_params in Trainer.train().
- Trainer.__init__ old, unnecessary args removed.
- Added support for fetching models from Deci's platform using super_gradients.models.get(); more information about Deci's platform is available in [readme.md](https://github.com/Deci-AI/super-gradients#readme).
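
A minimal sketch of the dataloader flow that replaces DatasetInterface. The dataloader names and the batch_size override below are assumptions based on the recipe configs shipped with the library; check the linked tutorials for the names available in your version:

```python
from super_gradients.training import dataloaders

# Build train/validation DataLoaders from a named recipe configuration.
# "cifar10_train" / "cifar10_val" and the batch_size overrides are illustrative.
train_loader = dataloaders.get("cifar10_train", dataloader_params={"batch_size": 64})
valid_loader = dataloaders.get("cifar10_val", dataloader_params={"batch_size": 128})
```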
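
A minimal sketch of model creation without Trainer.build_model(). The architecture name, num_classes, and pretrained_weights values are illustrative; any model name registered in the library can be used:

```python
from super_gradients.training import models

# Instantiate an architecture by name instead of calling Trainer.build_model().
model = models.get("resnet18", num_classes=10)

# Or start from pretrained weights (weight set name is illustrative).
# model = models.get("resnet18", pretrained_weights="imagenet")
```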
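
A sketch of launching DDP from code rather than through `python -m torch.distributed.launch`. The helper name and its arguments are assumptions; the linked "Using DDP" section has the exact call for your version:

```python
from super_gradients import init_trainer
from super_gradients.training.utils.distributed_training_utils import setup_device

init_trainer()
# Request 4 GPUs in DDP mode; the helper name and argument names are
# assumptions here -- see the linked "Using DDP" snippets for the exact call.
setup_device(multi_gpu="DDP", num_gpus=4)
```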
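
A minimal sketch of extracting a recipe's training hyper-parameters. The config name is an assumption; use whichever training_hyperparams config your recipe defines:

```python
from super_gradients.training import training_hyperparams

# Load the training hyper-parameters defined in a recipe config;
# the config name below is illustrative.
train_params = training_hyperparams.get("training_hyperparams/cifar10_resnet_train_params")
train_params["max_epochs"] = 20  # the returned config is dict-like and can be tweaked before training
```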
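
A sketch of the simplified resume flow, tying the pieces above together: resuming is just another key in train_params passed to train() (shown here with the Trainer class referenced elsewhere in this list). The experiment, model, dataloader, and config names are assumptions:

```python
from super_gradients.training import Trainer, dataloaders, models, training_hyperparams

trainer = Trainer(experiment_name="cifar10_resnet")  # experiment to resume
model = models.get("resnet18", num_classes=10)
train_loader = dataloaders.get("cifar10_train")
valid_loader = dataloaders.get("cifar10_val")

train_params = training_hyperparams.get("training_hyperparams/cifar10_resnet_train_params")
train_params["resume"] = True  # resume now lives inside train_params, no separate argument

trainer.train(model=model, training_params=train_params,
              train_loader=train_loader, valid_loader=valid_loader)
```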