New Features:
* Version support added:
- Python 3.11 (1764)
- PyTorch 2.0 (1618, 1635)
- ONNX 1.14 and Opset 14 ([Documentation](https://github.com/neuralmagic/sparseml/blob/b8030b1a5795e9c81aab8af99753b2068e8ca764/src/sparseml/pytorch/utils/exporter.py#L438)) (1627, 1641, 1660, 1767, 1768)
- NumPy 1.21.6 (1623)
* Ultralytics YOLOv8 training and sparsification pipelines added. ([Documentation](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/yolov8)) (#1517, 1522, 1520, 1528, 1521, 1561, 1579, 1597, 1599, 1629, 1637, 1638, 1673, 1686, 1656, 1787)
* [NOTICE](https://github.com/neuralmagic/sparseml/blob/main/NOTICE) updated to reflect now public-facing [Ultralytics Enterprise Software License Agreement](https://github.com/neuralmagic/sparseml/blob/main/LICENSE-ULTRALYTICS) for YOLOv3/v5/v8.
* Initial sparsification framework v2 added for better generative AI support and improved functionality and extensibility. (Documentation available in v1.7) (1713, 1751, 1742, 1763, 1759, 1769)
* BLOOM, CodeGen, OPT, Falcon, GPTNeo, LLAMA, MPT, and Whisper large language and generative models are supported through transformers training, sparsification, and export pipelines. ([Documentation](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/experimental/sparsegpt/examples)) (#1562, 1571, 1585, 1584, 1616, 1633, 1590, 1644, 1615, 1664, 1646, 1631, 1648, 1683, 1687, 1677, 1692, 1694, 1699, 1703, 1709, 1691, 171, 1720, 1746)
* QuantizationModifier implemented for PyTorch sparsification pathways, enabling cleaner, more robust, and simpler arguments for quantizing models than the legacy quantization modifier. ([Documentation](https://github.com/neuralmagic/sparseml/blob/main/src/sparseml/modifiers/quantization/base.py#L25)) (1568, 1594, 1639, 1693, 1745, 1738)
* CLIP pruning, quantization, and export supported. ([Documentation](https://github.com/neuralmagic/sparseml/blob/b8030b1a5795e9c81aab8af99753b2068e8ca764/integrations/clip/README.md?plain=1#L17)) (1581, 1626, 1711)
* INT4 quantization support added for model sparsification and export. (Documentation available in v1.8 with LLM support expansion) (1670)
* DDP support added to Torchvision image classification training and sparsification pipelines. (Documentation available in v1.8 with new research paper) (1698, 1784)
* SparseGPT, OBC, and OBQ one-shot/post-training pruning and quantization modifiers added for PyTorch pathways. ([Documentation](https://github.com/neuralmagic/sparseml/tree/main/src/sparseml/experimental/sparsegpt/examples)) (#1705, 1736, 1737, 1761, 1770, 1781, 1776, 1777, 1758)
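Several of the features above (the new QuantizationModifier, INT4 support, and the one-shot quantization modifiers) revolve around quantization-aware workflows. As background, here is a minimal plain-Python sketch of the affine fake-quantization round trip such modifiers emulate; all names and the exact scheme are illustrative assumptions, not SparseML APIs:

```python
# Illustrative INT8/INT4 affine fake quantization: the quantize -> dequantize
# round trip that quantization modifiers emulate during training so the model
# learns weights that survive integer quantization. Plain Python for clarity.

def fake_quantize(x, scale, zero_point, num_bits=8):
    """Quantize a float onto an unsigned integer grid, then dequantize it."""
    qmin, qmax = 0, 2 ** num_bits - 1
    q = round(x / scale) + zero_point
    q = max(qmin, min(qmax, q))          # clamp to the representable range
    return (q - zero_point) * scale      # dequantize back to float
```

Passing `num_bits=4` narrows the grid to 16 levels, which is the essence of the INT4 support noted above; real pipelines also calibrate `scale` and `zero_point` per tensor or per channel.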
Changes:
* SparseML upgraded for SparseZoo V2 model file structure changes, which expand the number of supported files and reduce the number of bytes that need to be downloaded for model checkpoints, folders, and files. (1719)
* Docker builds updated to consistently rebuild for new releases and nightlies. (1506, 1531, 1543, 1537, 1665, 1684)
* README and documentation updated to include: Slack Community name change, Contact Us form introduction, and Python version changes; corrections for broken YOLOv5, torchvision, transformers, and SparseZoo links; and installation command updates. (1536, 1577, 1578, 1610, 1617, 1612, 1602, 1659, 1721, 1725, 1726, 1785)
* Improved support for large ONNX files to speed up loading and reduce memory issues, especially for LLMs. (1515, 1540, 1514, 1586)
* Transformers datasets can now be created without a model needing to be passed in. (1544, 1545)
* Torchvision training and sparsification pipelines updated to allow patch versions of torchvision as installable dependencies; the version was previously restricted to 0.14.0 and now supports 0.14.x. (1556)
* Image classification training and sparsification pipelines for torchvision now accept arguments for RGB means and standard deviations, enabling overriding of the previously hardcoded default ImageNet values. (1546)
* YOLOv5 training and sparsification pipelines migrated to install from `nm-yolov5` on PyPI, removing the autoinstall from the `nm-yolov5` GitHub repository that previously happened on invocation of the relevant pathways and enabling more predictable environments. (1518, 1564, 1566)
* Transformers training and sparsification pipelines migrated to install from `nm-transformers` on PyPI, removing the autoinstall from the `nm-transformers` GitHub repository that previously happened on invocation of the relevant pathways and enabling more predictable environments. (1518, 1553, 1564, 1566, 1730)
* Deprecated and no longer supported:
- Keras pathways (1585, 1607)
- TensorFlow pathways (1606, 1607)
- Python 3.7 (1611)
- `sparseml.benchmark` commands and utilities; may be refactored in a future release (1625)
- SSD ResNet models sparsification and model loading; will be removed in a future release (1739)
* Pydantic version pinned to <2.0, preventing potential issues with untested versions. (1645)
* Automatic link checking added to GitHub actions. (1525)
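The RGB mean/standard deviation override noted above replaces hardcoded ImageNet statistics. As a reference for what those defaults do, here is a plain-Python sketch of the per-channel normalization involved; the constants are the well-known ImageNet statistics, and the function name is illustrative, not a SparseML API:

```python
# Per-channel image normalization: out = (pixel - mean) / std for each of
# the R, G, B channels. These are the standard ImageNet statistics that the
# torchvision pipelines used as hardcoded defaults before the change.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb, mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """Normalize one RGB pixel (channel values in [0, 1]) channel by channel."""
    return tuple((c - m) / s for c, m, s in zip(rgb, mean, std))
```

Passing dataset-specific `mean` and `std` values in place of the defaults is exactly the override the new pipeline arguments enable.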
Resolved Issues:
* ONNX export for MobileBERT fixed; previously the exported ONNX model had poor performance in DeepSparse. (1539)
* OpenCV is now installed for image classification pathways when running `pip install sparseml[torchvision]`. Previously, these pathways would crash with a missing OpenCV dependency error unless it was installed separately. (1575)
* SciPy version dependency issues with `scikit-image` resolved; these previously resulted in incompatibility errors when installing `scikit-image` for computer vision pathways. (1570)
* Transformers export pathways for quantized models fixed; previously the export would crash for some transformers models. (1654)
* Transformers data support for JSONL files through the question answering pathways was resulting in a JSONDecodeError; these files now load correctly. (1667, 1669)
* Unit and integration tests updated to remove temporary test files, which were not being properly deleted, and to limit test file creation. (1609, 1668, 1672, 1696)
* Image classification pipelines no longer crash with an extra argument error when using CIFAR10 or CIFAR100 datasets. (1671)
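The JSONL fix above concerns files with one JSON object per line. A minimal stdlib sketch shows why whole-file `json.load` raises `JSONDecodeError` on such files while line-by-line parsing succeeds; this is illustrative only, not the SparseML loader:

```python
import io
import json

# Two question-answering records in JSONL form: one JSON object per line.
jsonl_text = '{"question": "Q1", "answer": "A1"}\n{"question": "Q2", "answer": "A2"}\n'

# json.load expects a single JSON document, so a multi-record JSONL file
# fails with "Extra data" once the parser hits the second line:
whole_file_failed = False
try:
    json.load(io.StringIO(jsonl_text))
except json.JSONDecodeError:
    whole_file_failed = True

# JSONL must instead be parsed one line (one JSON object) at a time:
records = [json.loads(line) for line in jsonl_text.splitlines() if line.strip()]
```

Loading each line independently is the behavior the question answering pathways now follow for `.jsonl` datasets.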
Known Issues:
* The compile time for dense LLMs can be very slow. This will be addressed in a forthcoming release.
* Docker images are not currently pushing. A resolution is forthcoming for functional Docker builds. [RESOLVED]