New Features:
* One-shot and recipe argument support added for transformers, yolov5, and torchvision.
* Dockerfiles and new build processes added for Docker images.
* CLI formats standardized and entry points included on install of SparseML for transformers, yolov5, and torchvision.
* N:M pruning mask creator deployed for use in PyTorch pruning modifiers (see the sketch after this list).
* `masked_language_modeling` training CLI added for transformers.
* Documentation additions made across all standard integrations and pathways.
* GitHub Actions tests running for end-to-end testing of integrations.
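
For context on the N:M mask creator, N:M pruning keeps at most N nonzero weights in every contiguous block of M. The following is a minimal sketch of that masking rule in plain PyTorch; the `n_m_mask` helper is hypothetical for illustration and is not the mask creator API shipped in SparseML.

```python
import torch

def n_m_mask(weight: torch.Tensor, n: int = 2, m: int = 4) -> torch.Tensor:
    """Keep the n largest-magnitude weights in every contiguous block of m
    along the flattened weight (hypothetical helper for illustration)."""
    assert weight.numel() % m == 0, "weight size must be divisible by m"
    blocks = weight.abs().reshape(-1, m)      # group weights into blocks of m
    kept = blocks.topk(n, dim=-1).indices     # indices of the n largest per block
    mask = torch.zeros_like(blocks)
    mask.scatter_(-1, kept, 1.0)              # 1 = keep, 0 = prune
    return mask.reshape(weight.shape)

# A 2:4 mask leaves exactly half of each 4-weight block nonzero.
w = torch.randn(8, 8)
pruned = w * n_m_mask(w, n=2, m=4)
```
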
Changes:
* Click added as a root dependency and is now the preferred route for CLI invocation and argument management (see the Click sketch after this list).
* Provider parameter added for ONNXRuntime InferenceSessions (see the example after this list).
* Moved `onnxruntime` to an optional install extra; `onnxruntime` is no longer a root dependency and is only imported when specific pathways are used.
* QAT export pipelines improved with better support for QATMatMul and custom operators.
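
For reference, the Click-based entry points follow the standard Click pattern shown below; the command name and options here are hypothetical illustrations, not SparseML's actual CLI surface.

```python
import click

@click.command()
@click.option("--recipe", type=str, required=True, help="Path to a sparsification recipe.")
@click.option("--batch-size", type=int, default=32, show_default=True)
def train(recipe: str, batch_size: int):
    """Hypothetical training entry point showing Click-style argument management."""
    click.echo(f"Training with recipe={recipe}, batch_size={batch_size}")

if __name__ == "__main__":
    train()
```
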
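The new provider parameter maps onto ONNX Runtime's standard `providers` argument for `InferenceSession`; a minimal sketch, assuming a local `model.onnx` and an installed CUDA execution provider:

```python
import onnxruntime as ort

# Select execution providers explicitly; ONNX Runtime tries them in order.
session = ort.InferenceSession(
    "model.onnx",  # hypothetical model path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())  # shows which providers were actually loaded
```
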
Resolved Issues:
* Incorrect commands and models in older docs updated for transformers, yolov5, and torchvision.
* Addressed YOLOv5 data files, configs, and datasets not being easily accessible with the new install pathway; they are now included in the `sparseml` src folder for yolov5.
* An extra batch no longer runs for the PyTorch ModuleRunner.
* A `None` sparsity parameter was being improperly propagated in the PyTorch ConstantPruningModifier.
* PyPI dependency conflicts no longer occur with the latest ONNX and Protobuf upgrades.
* When GPUs were not available, yolov5 pathways were not working.
* Transformers export was not working properly when neither the `--do_train` nor `--do_eval` argument was passed in.
* Non-string keys now allowed within recipes.
* Numerous fixes applied for pruning modifiers, including improper mask casting, improper initialization, and improper arguments passed through for MFAC.
* YOLOv5 export formatting error addressed.
* Missing or incorrect data corrected for logging and recording statements.
* PyTorch DistillationModifier for transformers was ignoring both "self" and "disable" distillation values; instead, normal distillation was used.
* FP16 training was not deactivating on QAT start for torchvision.
Known Issues:
* PyTorch > 1.9 quantized ONNX export is broken; waiting on PyTorch resolution and testing.