SparseML

0.11.0

New Features:
* CLI and training/export support implemented for Hugging Face NLP masked language modeling.
* PyTorch image classification CLIs deployed.
* WoodFisher/M-FAC pruning, AC/DC pruning, and structured pruning algorithm support added with modifiers for PyTorch (see the sketch after this list).
* Reduced-precision (below INT8) quantization support provided in PyTorch.
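
To illustrate the new PyTorch pruning modifiers, here is a minimal sketch that applies an M-FAC pruning recipe with SparseML's `ScheduledModifierManager`. The recipe hyperparameters, the toy model, and the import path are assumptions for illustration; verify the exact options against the SparseML docs for your installed version.

```python
# Minimal sketch: applying an M-FAC pruning recipe in PyTorch.
# Recipe hyperparameters and the toy model are illustrative assumptions.
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

recipe = """
modifiers:
    - !EpochRangeModifier
        start_epoch: 0.0
        end_epoch: 10.0

    - !MFACPruningModifier
        params: __ALL_PRUNABLE__
        init_sparsity: 0.05
        final_sparsity: 0.85
        start_epoch: 1.0
        end_epoch: 8.0
        update_frequency: 0.5
"""

model = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Wrap the optimizer so pruning steps run on the recipe's schedule.
manager = ScheduledModifierManager.from_yaml(recipe)
optimizer = manager.modify(model, optimizer, steps_per_epoch=100)

# ... standard training loop using `optimizer` ...

manager.finalize(model)
```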

Changes:
* Refactored pruning and quantization algorithms from the `sparseml.torch.optim` package to the `sparseml.torch.sparsification` package.

Resolved Issues:
* None

Known Issues:
* None

0.10.1

This is a patch release for 0.10.0 that contains the following changes:

- Conversion of Hugging Face BERT models from PyTorch to ONNX no longer drops accuracy; the drop previously ranged from 1-25% depending on the task and dataset.

0.10.0

New Features:
* Native Hugging Face Transformers integration and CLIs implemented for training transformer models.
* Cyclic LR support added to `LearningRateFunctionModifier` in PyTorch (see the sketch after this list).
* [ViT (vision transformer) examples](https://github.com/neuralmagic/sparseml/tree/main/integrations/rwightman-timm#recipes) added with the `rwightman/timm` integration.
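
A minimal sketch of a recipe using the new cyclic LR support follows. The `cyclic_linear` option name and `cycle_epochs` parameter are assumptions for illustration; check the `LearningRateFunctionModifier` docs for the option names your version accepts.

```python
# Hedged sketch of a cyclic LR schedule in a SparseML recipe.
# `lr_func: cyclic_linear` and `cycle_epochs` are illustrative assumptions;
# verify option names against the LearningRateFunctionModifier docs.
cyclic_lr_recipe = """
modifiers:
    - !LearningRateFunctionModifier
        lr_func: cyclic_linear
        cycle_epochs: 1.5
        init_lr: 0.001
        final_lr: 0.0001
        start_epoch: 0.0
        end_epoch: 10.0
"""
```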

Changes:
* Quantization implementation for BERT models improved (shorter schedules and better recovery).
* PyTorch image classification script now saves checkpoints based on top-1 accuracy instead of loss.
* `rwightman/timm` integration updated for ease of use, with `setup_integration.sh` to set up the environment properly.

Resolved Issues:
* GitHub Actions now trigger for external forks.

Known Issues:
* Conversion of quantized Hugging Face BERT models from PyTorch to ONNX is currently dropping accuracy, ranging from 1-25% depending on the task and dataset. A hotfix is being pursued; users can fall back to version 0.9.0 to prevent the issue.
* Export for masked language modeling with Hugging Face BERT models from PyTorch is currently exporting incorrectly due to a configuration issue. A hotfix is being pursued; users can fall back to version 0.9.0 to prevent the issue.

0.9.0

New Features:

* `dbolya/yolact` [integration added](https://github.com/neuralmagic/sparseml/tree/main/integrations/dbolya-yolact) with recipes, tutorial, and performant models for the YOLACT segmentation model.
* Automatic recipe creation API for pruning recipes added via `create_pruning_recipe`, along with base class implementations for future expansion of `RecipeEditor` and `RecipeBuilder`.
* Structured pruning now supported for channels and filters with `StructuredPruningModifier` and `LayerThinningModifier`.
* PyTorch QAT pipelines: added support for automatic fusing of Conv-ReLU blocks, FPN layers, and Convs with shared weights.
* Analyzer implementations added for the ONNX framework to evaluate a model's performance and loss sensitivity to pruning and other algorithms (see the sketch after this list).
* Up-to-date version check implemented for SparseML.
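
As an illustration of the new ONNX analyzers, here is a hedged sketch of a one-shot magnitude-based loss sensitivity analysis. The helper name `pruning_loss_sens_magnitude` and the analysis object's `save_json` method follow common SparseML usage, but treat them as assumptions and verify against your installed version.

```python
# Hedged sketch: one-shot pruning loss sensitivity analysis on an ONNX model.
# `pruning_loss_sens_magnitude` and `save_json` follow SparseML's documented
# ONNX helpers; treat the exact names/signatures as assumptions per version.
from sparseml.onnx.optim import pruning_loss_sens_magnitude

analysis = pruning_loss_sens_magnitude("model.onnx")  # path to an ONNX file
analysis.save_json("pruning_loss_sensitivity.json")   # per-layer sensitivity scores
```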

Changes:

* PyTorch distributed modules now automatically unwrapped so recipes do not need to be changed for distributed pipelines.
* BERT recipes updated to use the distillation modifier.
* References to `num_sockets` for the DeepSparse engine removed, following its deprecation in DeepSparse 0.9.
* Block pruning flow changed to use `FourBlockMaskCreator` for block sparsity, which does not require the pruned channel dimensions to be divisible by the block size.
* API docs recompiled.

Resolved Issues:

* Improper variable names corrected that were causing crashes for specific flows in the WoodFisher pruning algorithm.

Known Issues:

* None

0.8.0

New Features:

* ONNX benchmarking APIs added.
* QAT and export support added for `torch.nn.Embedding` layers.
* PyTorch distillation modifier implemented.
* Arithmetic and equation support for recipes added (see the sketch after this list).
* Sparsification oracle available now with initial support for automatic recipe creation.
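
To illustrate the new recipe arithmetic together with the distillation modifier, here is a hedged sketch. The variable names and the `DistillationModifier` hyperparameter values are illustrative assumptions, not recommendations.

```python
# Hedged sketch: recipe variables with eval() arithmetic plus distillation.
# Variable names and hyperparameter values are illustrative assumptions.
from sparseml.pytorch.optim import ScheduledModifierManager

recipe = """
num_epochs: 10.0
distill_start: eval(num_epochs * 0.1)  # arithmetic evaluated at recipe load

modifiers:
    - !EpochRangeModifier
        start_epoch: 0.0
        end_epoch: eval(num_epochs)

    - !DistillationModifier
        start_epoch: eval(distill_start)
        hardness: 0.5
        temperature: 2.0
        distill_output_keys: [logits]
"""

manager = ScheduledModifierManager.from_yaml(recipe)  # eval() resolved here
```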

Changes:

* Torchvision integration and training pipeline reworked to simplify and streamline the codebase.
* PyTorch modifiers migrated to base classes so they can be shared across all frameworks.

Resolved Issues:

* None

Known Issues:

* None

0.7.0

New Features:

* Support added for:
  * PyTorch 1.9.0
  * Python 3.9
  * ONNX versions 1.8 - 1.10
* PyTorch `QATWrapper` class added to support quantization of custom modules through recipes (see the sketch after this list).
* PyTorch image classification [sparse transfer learning recipe and tutorial](https://github.com/neuralmagic/sparseml/blob/main/integrations/pytorch/tutorials/classification_sparse_transfer_learning_tutorial.md) created.
* Generic benchmarking API provided that can be overwritten for specific framework implementations.
* M-FAC (WoodFisher) pruning implemented along with related documentation and tutorials for one-shot and training-aware pruning: https://arxiv.org/abs/2004.14340
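
A minimal sketch of recipe-driven quantization-aware training follows, the setting in which `QATWrapper` applies: it can wrap custom modules that the automatic QAT conversion does not handle on its own. The submodule name, schedule, and toy model below are illustrative assumptions.

```python
# Hedged sketch: quantization-aware training driven by a recipe.
# The submodule name ('features') and schedule are illustrative assumptions;
# QATWrapper (new in this release) can wrap custom modules that the
# automatic QAT conversion does not cover.
import torch
from sparseml.pytorch.optim import ScheduledModifierManager

qat_recipe = """
modifiers:
    - !QuantizationModifier
        start_epoch: 2.0
        submodules: ['features']
"""

model = torch.nn.Sequential()  # stand-in for a model with a 'features' submodule
model.add_module(
    "features", torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

manager = ScheduledModifierManager.from_yaml(qat_recipe)
optimizer = manager.modify(model, optimizer, steps_per_epoch=100)
# ... training loop; QAT observers/fake-quant engage at start_epoch ...
manager.finalize(model)
```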

Changes:

* Performance sensitivity analysis tests updated to respect new information coming from a change in the DeepSparse analysis API.

Resolved Issues:

* Repeated apply calls no longer occur for PyTorch pruning masks.
* Neural Magic dependencies no longer require matching major.minor versions only; any patch version is allowed.
* Nightly package versions, when installed, are now detected for framework info and Neural Magic package versions.

Known Issues:

* The Hugging Face Transformers integration's `num_epochs` override from recipes is not currently working. The workaround is to set the `num_epochs` argument to the maximum number of epochs in the recipe.
