New Features:
* PyTorch 1.13 support (1143)
* Enabled patch versions for torchvision 0.14.x (1557)
* YOLOv8 sparsification pipelines ([view](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/yolov8))
* Per-layer distillation support for the PyTorch Distillation modifier (1272)
* Torchvision training pipelines:
  * Wandb, TensorBoard, and console logging (1299)
  * DataParallel module support (1332)
  * Distillation (1310)
* Product usage analytics tracking; to disable, run the command `export NM_DISABLE_ANALYTICS=True` (1487)
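The opt-out above lasts only for the current shell session unless the variable is added to a shell profile (e.g. `~/.bashrc`); a minimal sketch, using the variable name from the release note:

```shell
# Opt out of SparseML product usage analytics for this shell session.
export NM_DISABLE_ANALYTICS=True

# Any sparseml command launched from this shell afterwards inherits the
# setting, e.g.:
#   sparseml.transformers.train ...
```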
Changes:
* Transformers and YOLOv5 integrations migrated from automatic installation to PyPI extras. Going forward, install them with `pip install sparseml[transformers]` and `pip install sparseml[yolov5]`.
* Error message shown when wandb loggers are requested but wandb is not installed now instructs the user to `pip install wandb`. (1374)
* Keras and TensorFlow tests have been removed; these are no longer actively supported pathways.
* `sklearn` dependency replaced with `scikit-learn` to stay current with the package's name change on PyPI. (1294)
Resolved Issues:
* Recipes using the legacy PyTorch QuantizationModifier with DDP no longer crash when restoring weights for sparse transfer. (1490)
* Labels are now set correctly when using a distillation teacher different from the student in token classification pipelines; previously, training runs would crash. (1414)
* Q/DQ folding fixed on ONNX export for quantization nodes occurring before Softmax in transformer graphs; previously, this caused performance regressions for some transformer models in DeepSparse. (1343)
* Metrics calculations for torchvision training pipelines have been corrected; top1 and top5 accuracies were previously off by ~1%. (1341)
Known Issues:
* None