SparseML

0.6.0

New Features:

* YOLOv5 sparsification [tutorials and recipes added](https://docs.neuralmagic.com/main/source/model-pages/cv-detection-yolov5.html).
* YOLOv3 [sparse transfer learning tutorial added](https://docs.neuralmagic.com/main/source/model-pages/cv-detection-yolov3.html#sparse-training).
* PyTorch image classification sparse training tutorials using recipes added for [ResNet-50](https://docs.neuralmagic.com/main/source/model-pages/cv-classification-resnet50.html#sparse-training) and [MobileNet](https://docs.neuralmagic.com/main/source/model-pages/cv-classification-mobilenet.html#sparse-training).
* BERT [additional recipes added for FP32](https://docs.neuralmagic.com/main/source/model-pages/nlp-bert.html#sparse-training) and for 3- and 6-layer sparse models.
* Support for [phased pruning](https://arxiv.org/pdf/2106.12379.pdf) added.
* [Research folder](https://github.com/neuralmagic/sparseml/tree/main/research) created for sparsifying passage retrieval.

Changes:

* README updated for Hugging Face transformers integration based on the new implementation.
* ONNX export in PyTorch now supports dictionary inputs.
* Quantized graph export optimizations for YOLOv5.
* PyTorch image classification integration updated to use the new manager.modify(...) APIs and to save recipes to the runs folder (see the sketch after this list).
* DeepSparse YOLO links updated to point at new example location.
* kwargs support added for ONNX export in PyTorch to enable dynamic_axes and named inputs.
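
For illustration, here is a minimal sketch (not taken from these release notes) of how the new manager.modify(...) flow and the ONNX export kwargs fit together. The recipe path, model choice, steps_per_epoch value, and the "input" axis name are assumptions:

```python
# Minimal sketch: apply a recipe via manager.modify(...) then export to ONNX.
# "recipe.yaml", resnet50, steps_per_epoch, and the dynamic_axes naming are
# illustrative assumptions, not values from these release notes.
import torch
from torchvision.models import resnet50

from sparseml.pytorch.optim import ScheduledModifierManager
from sparseml.pytorch.utils import ModuleExporter

model = resnet50()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Wrap the optimizer so the recipe's modifiers run on every step
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=1000)

# ... training loop runs here ...

manager.finalize(model)

# Export to ONNX; extra kwargs such as dynamic_axes are forwarded through
exporter = ModuleExporter(model, output_dir="exported")
exporter.export_onnx(
    sample_batch=torch.randn(1, 3, 224, 224),
    dynamic_axes={"input": {0: "batch"}},
)
```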

Resolved Issues:

* torch 1.8 quantization export no longer folds layers incorrectly.
* ONNX toposort issue addressed for nodes with more than two outputs.
* Unused initializers removed in quantized ONNX graphs.

Known Issues:

* None

0.5.1

This is a patch release for 0.5.0 that contains the following changes:

- Version updated for consistency to reflect a DeepSparse repo hotfix.

0.5.0

New Features:

* `research` folder added to the root directory, intended for research contributions.
* First research contributions added for information retrieval.
* [Tutorial for sparsifying BERT on the SQuAD dataset](https://github.com/neuralmagic/sparseml/blob/main/integrations/huggingface-transformers/tutorials/sparsifying_bert_using_recipes.md) created.
* `LayerPruningModifier` and `LearningRateFunctionModifier` implementations added for PyTorch (see the recipe sketch after this list).
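
For illustration, a hedged sketch of a recipe exercising the two new modifiers; the parameter names and values follow common SparseML recipe conventions but are assumptions, not documented usage:

```python
# Hedged sketch: a recipe string using the two new PyTorch modifiers.
# Parameter names/values are illustrative assumptions, not confirmed syntax.
from sparseml.pytorch.optim import ScheduledModifierManager

recipe = """
modifiers:
    - !LearningRateFunctionModifier
        start_epoch: 0.0
        end_epoch: 10.0
        lr_func: linear
        init_lr: 0.01
        final_lr: 0.0001

    - !LayerPruningModifier
        start_epoch: 5.0
        layers: ['classifier.dropout']
"""

# from_yaml accepts a file path or, here, a raw YAML string
manager = ScheduledModifierManager.from_yaml(recipe)
```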

Changes:

* Hugging Face transformers integration reworked to match new integration standards.
* CIFAR data augmentations updated for PyTorch datasets.
* Pruning algorithms refactored to use a pruning scorer object, making them easier to extend with new pruning methods.

Resolved Issues:

* Tests for the VOC dataset no longer fail when the source URL is down.
* Previously failing tests updated to correctly check and return the additional kernel sparsity performance analysis information now included in the DeepSparse API.
* Models with more than one input can now complete the PyTorch ONNX export process.
* Edge cases handled and defaults improved in the WoodFisher/M-FAC algorithm for better recovery.
* Deprecated torch.nonzero API calls in the pruning modifiers updated to .nonzero(as_tuple=False), as illustrated below.
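
The last fix tracks a torch API change: calling torch.nonzero without as_tuple emits a deprecation warning on newer torch versions, so the modifiers now pass it explicitly. A minimal illustration:

```python
import torch

mask = torch.tensor([0.0, 1.0, 0.0, 2.0])

# Previously: torch.nonzero(mask) - warns about the implicit as_tuple default
# The pruning modifiers now use the explicit form:
indices = mask.nonzero(as_tuple=False)  # tensor([[1], [3]])
```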

Known Issues:

* None

0.4.0

New Features:

* M-FAC/WoodFisher pruning algorithm alpha implemented (see the recipe sketch after this list).
* Movement pruning algorithm alpha implemented.
* Distillation code added for GLUE dataset in Hugging Face/transformers integration.
* BERT quantization pipeline enabled for training and export to ONNX.
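
For illustration, a hedged sketch of what invoking the M-FAC pruning alpha from a recipe might look like; the !MFACPruningModifier tag and its fields are modeled on SparseML's standard pruning-modifier parameters and are assumptions, not confirmed syntax:

```python
# Hedged sketch: recipe string invoking the M-FAC pruning alpha.
# The modifier tag and fields are assumptions modeled on SparseML's
# standard pruning-modifier parameters.
from sparseml.pytorch.optim import ScheduledModifierManager

recipe = """
modifiers:
    - !MFACPruningModifier
        params: __ALL_PRUNABLE__
        init_sparsity: 0.05
        final_sparsity: 0.85
        start_epoch: 1.0
        end_epoch: 30.0
        update_frequency: 1.0
"""

manager = ScheduledModifierManager.from_yaml(recipe)
```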

Changes:

* Readme redesigned for better clarity on the repository's purpose.
* All examples, notebooks, and scripts moved under the integrations directory.
* Integrations for ultralytics/yolov3, ultralytics/yolov5, pytorch, keras, and tensorflow reworked to match the new integration standards for better ease of use.

Resolved Issues:

* rwightman/timm integration bugs dealing with checkpoint paths and loading models addressed.
* tensorflow-gpu for tensorflow v1 now recognized correctly.
* Neural Magic dependencies now upgrade to intended bugfix versions instead of minor versions.

Known Issues:

* Movement pruning currently works only with FP16 training on GPUs; FP32 training diverges to NaN.

0.3.1

This is a patch release for 0.3.0 that contains the following changes:

- DeepSparse APIs now properly reference the VNNI check
- Block sparse masks now applied for pruning modifiers
- Some tests marked as flaky to make test runs more consistent
- Docs updated with new Discourse and Slack links
- Modifier code refactored to better support Automatic Mixed Precision (AMP) training in PyTorch
- emulated_step added to the manager to handle inconsistent steps_per_epoch in PyTorch
- Serialization of block sparse-enabled pruning modifiers no longer fails on reload

0.3.0

New Features:

* YOLO integration with Ultralytics deployed, including DeepSparse examples for benchmarking and running detection over videos.
* Framework and Sparsification Info APIs now available for all supported ML frameworks.
* Properties added to the ScheduledManager class to allow for lookup of contained modifiers such as pruning and quantization.
* ALL_PRUNABLE token added for pruning modifiers (see the recipe sketch after this list).
* PyTorch global magnitude pruning support implemented.
* QAT support added for BERT.
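
For illustration, a hedged sketch of the ALL_PRUNABLE token combined with global magnitude pruning in a recipe; the __ALL_PRUNABLE__ spelling and the global_sparsity flag are assumptions based on SparseML recipe conventions:

```python
# Hedged sketch: prune all prunable layers with one modifier via the new
# ALL_PRUNABLE token; global_sparsity targets global magnitude pruning.
# Both spellings are assumptions, not confirmed syntax from these notes.
from sparseml.pytorch.optim import ScheduledModifierManager

recipe = """
modifiers:
    - !GMPruningModifier
        params: __ALL_PRUNABLE__
        init_sparsity: 0.05
        final_sparsity: 0.8
        start_epoch: 0.0
        end_epoch: 20.0
        update_frequency: 1.0
        global_sparsity: True
"""

manager = ScheduledModifierManager.from_yaml(recipe)
```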

Changes:

* Version now loaded from the version.py file; the default build on branches is now nightly.
* Additional unit tests added for the Keras integration.
* PyTorch max supported version updated to 1.7.
* Improved performance for parsing and fixing QAT ONNX graphs from PyTorch.

Resolved Issues:

* Docs typos and broken links addressed.
* Pickling models with PyTorch pruning hooks now works as expected.
* Incorrect loss scaling for DDP in PyTorch vision.py script addressed.

Known Issues:

* None
