SparseML

Latest version: v1.8.0

1.4.0

New Features:
* OpenPifPaf training prototype support (1171)
* Layerwise distillation support for the PyTorch DistillationModifier (1272)
* Recipe template API added in PyTorch for simple creation of default recipes (1147)
* Ability to create sample inputs and outputs on export for transformers, YOLOv5, and image classification pathways (1180)
* Loggers and one-shot support for torchvision training script (1299, 1300)
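
The recipe template API generates standard sparsification recipes. A sketch of the kind of recipe involved, in SparseML's YAML recipe format (the modifier hyperparameters here are illustrative, not the template's exact output):

```yaml
# Illustrative sparsification recipe; values are examples, not template output.
modifiers:
  - !EpochRangeModifier
    start_epoch: 0.0
    end_epoch: 20.0

  - !GMPruningModifier
    start_epoch: 2.0
    end_epoch: 18.0
    init_sparsity: 0.05
    final_sparsity: 0.85
    update_frequency: 0.5
    params: __ALL_PRUNABLE__
```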

Changes:
* Refactored the ONNX export pipeline to standardize implementations, add functionality for more complicated models, and improve debugging support. (1192)
* Refactored the PyTorch QuantizationModifier to expand supported models and operators and simplify the interface. (1183)
* YOLOv5 integration upgraded to the latest upstream. (1322)

Resolved Issues:
* Improper code documentation for the `recipe_template` CLI, which impaired operability, has been corrected. (1170)
* ONNX export now enforces that all quantized graphs have uint8 values, fixing crashes in DeepSparse for some quantized models. (1181)
* PyTorch pruning modifiers changed over to `vector_norm`, resolving crashes in older PyTorch versions. (1167)
* Model loading in the torchvision script fixed; models no longer fail to load when no recipe is supplied. (1281)
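
The uint8 enforcement above refers to affine quantization, where each float maps to an 8-bit unsigned integer via a scale and zero point. A minimal pure-Python sketch of that mapping (illustrative only, not SparseML's export code; the `scale` and `zero_point` values are arbitrary):

```python
def quantize_uint8(values, scale, zero_point):
    """Affine-quantize floats to uint8: q = clamp(round(v / scale) + zp, 0, 255)."""
    return [min(255, max(0, round(v / scale) + zero_point)) for v in values]

def dequantize_uint8(quants, scale, zero_point):
    """Recover approximate float values: v ~= (q - zp) * scale."""
    return [(q - zero_point) * scale for q in quants]

# Map floats around zero onto the uint8 range [0, 255].
scale, zero_point = 0.01, 128
print(quantize_uint8([-1.0, 0.0, 0.5, 1.0], scale, zero_point))  # [28, 128, 178, 228]
print(quantize_uint8([-5.0, 5.0], scale, zero_point))            # [0, 255] (clamped)
```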

Known Issues:
* None

1.3.1

This is a patch release for 1.3.0 that contains the following changes:

- NumPy version pinned to <=1.21.6 to avoid deprecation warning/index errors in pipelines.
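
The same pin can be reproduced in a downstream environment with a standard requirements constraint (the sparseml pin shown is this patch release):

```
sparseml==1.3.1
numpy<=1.21.6
```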

1.3.0

New Features:
* NLP multi-label training and eval support added.
* SQuAD v2.0 support provided.
* Recipe template APIs introduced, enabling easier creation of recipes for custom models with standard sparsification pathways.
* EfficientNetV2 model architectures implemented.
* Sample inputs and outputs exportable for YOLOv5, transformers, and image classification integrations.

Changes:
* PyTorch 1.12 and Python 3.10 now supported.
* YOLOv5 pipelines upgraded to the latest version from Ultralytics.
* Transformers pipelines upgraded to latest version from Hugging Face.
* PyTorch image classification pathway upgraded using torchvision standards.
* Recipe arguments now support list types.
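
With list types supported in recipe arguments, a field such as a pruning modifier's `params` can carry several layer names at once. An illustrative fragment in SparseML's recipe YAML format (layer names and hyperparameters hypothetical):

```yaml
modifiers:
  - !GMPruningModifier
    start_epoch: 1.0
    end_epoch: 10.0
    init_sparsity: 0.05
    final_sparsity: 0.8
    params:
      - sections.0.0.conv1.weight
      - sections.0.0.conv2.weight
```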

Resolved Issues:
* Improper URLs in the ONNX export documentation fixed to point to the correct documentation links.
* The latest transformers version hosted by Neural Magic now installs automatically; previously it would pin to an older version and not receive updates.

Known Issues:
* None

1.2.0

New Features:
* Document classification training and export pipelines added for transformers integration.

Changes:
* Transformers training and export integration code refactored to enable more code reuse across use cases.
* List of supported quantized nodes expanded to enable more complex quantization patterns for ResNet-50 and MobileBERT, improving performance for similar models.
* Transformers integration expanded to enable saving and reloading of optimizer state from trained checkpoints.
* Deployment folder added to the image classification integration's export pathway.
* Gradient accumulation support added for the image classification integration.
* Minimum Python version changed to 3.7, as 3.6 has reached EOL.
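
Gradient accumulation sums gradients over several micro-batches before taking a single optimizer step, simulating a larger effective batch size. A framework-free sketch of the idea (not SparseML's implementation; it matches a full-batch step exactly only when micro-batches are equal-sized):

```python
def grad(w, batch):
    """Gradient of the mean loss 0.5 * (w*x - y)**2 for a 1-D linear model."""
    return sum((w * x - y) * x for x, y in batch) / len(batch)

def accumulation_step(w, micro_batches, lr=0.1):
    """Accumulate gradients over micro-batches, then take one SGD step."""
    accum = 0.0
    for batch in micro_batches:
        accum += grad(w, batch)      # accumulate instead of stepping per batch
    accum /= len(micro_batches)      # average as if one large batch
    return w - lr * accum            # single optimizer update

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
full = accumulation_step(1.0, [data])                  # one big batch
micro = accumulation_step(1.0, [data[:2], data[2:]])   # two micro-batches
print(full, micro)  # the two updates match
```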

Resolved Issues:
* Quantized checkpoints for image classification models now instantiate correctly, no longer leading to random initialization of weights rather than restoring them.
* TrainableParamsModifier for PyTorch now enables and disables params properly so weights are frozen during training.
* Quantized embeddings no longer cause crashes while training with distributed data parallel.
* Improper EfficientNet definitions fixed that led to accuracy issues from duplicated convolutional strides.
* Protobuf version pinned for ONNX 1.12 compatibility to prevent install failures on some systems.

Known Issues:
* None

1.1.1

This is a patch release for 1.1.0 that contains the following changes:

- Some structurally modified image classification models in PyTorch would crash on reload; they now reload properly.

1.1.0

New Features:
* YOLACT segmentation native training integration added to SparseML.
* OBSPruning modifier added (https://arxiv.org/abs/2203.07259).
* QAT now supported for MobileBERT.
* Custom module support provided for QAT to enable quantization of layers such as GELU.
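
Quantization-aware training (QAT) of a non-standard activation such as GELU typically wraps the op in fake quantization: quantize then immediately dequantize, so training sees the rounding error it will incur at inference. A framework-free sketch of the pattern (illustrative only; the `scale` and `zero_point` here are arbitrary, and SparseML's actual custom-module support works through PyTorch QAT):

```python
import math

def gelu(x):
    """Exact GELU via the Gaussian CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def fake_quant(x, scale, zero_point):
    """Quantize to uint8 and immediately dequantize, exposing rounding error."""
    q = min(255, max(0, round(x / scale) + zero_point))
    return (q - zero_point) * scale

def qat_gelu(x, scale=4.0 / 255, zero_point=64):
    """GELU whose output is fake-quantized, as QAT would observe it."""
    return fake_quant(gelu(x), scale, zero_point)

# Output snaps to the quantization grid but stays close to the real GELU.
print(gelu(1.0), qat_gelu(1.0))
```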

Changes:
* Updates made across the repository for new SparseZoo Python APIs.
* Non-string keys are now supported in recipes for layer and module names.
* Native support added for DDP training with pruning in PyTorch pathways.
* YOLOv5 P6 models default to their native activations instead of being overridden to Hardswish.
* Transformers eval pathways changed to turn off AMP (FP16) to give more stable results.
* TensorBoard logger added to transformers integration.
* Python setuptools requirement set to 59.5 to avoid installation issues with other packages.
* DDP now works for quantized training of embedding layers; previously tensors were placed on incorrect devices, causing training crashes.

Resolved Issues:
* ConstantPruningModifier propagated None in place of the start_epoch value when start_epoch > 0. It now propagates the proper value.
* Quantization of BERT models was improperly dropping accuracy by quantizing the identity branches.
* SparseZoo stubs were not loading model weights for image classification pathways when using DDP training.

Known Issues:
* None
