Neural-compressor

Latest version: v3.3.1

1.14.1

**Bug Fixes**
- Fix name matching issue of scale and zero-point in PyTorch (commit [fd7a53](https://github.com/intel/neural-compressor/commit/fd7a53f2a3ac904c3cf8dbb388e9de50b3ea6bc2))
- Fix incorrect output quantization mode of MatMul + Relu fusion in TensorFlow (commit [9b5293](https://github.com/intel/neural-compressor/commit/9b529388bf3a6589e2a25cd4c6391c11d63b2b93))

**Productivity**
- Support ONNX model with Python 3.10 (commit [2faf0b](https://github.com/intel/neural-compressor/commit/2faf0bc2be6f03f31bca1cc978f4feccea4abc5a))
- Use the TensorFlow create_file_writer API to support TensorBoard histograms (commit [f34852](https://github.com/intel/neural-compressor/commit/f348529429c32cd82b42970212f1283980876ac2))

**Examples**
- Add NAS notebooks (commit [5f0adf](https://github.com/intel/neural-compressor/commit/5f0adfee344abf60e6779d05bc77cebc27ab6aed))
- Add Bert mini 2:4, 1x4 and mixed examples with new Pruning API (commit [a52074](https://github.com/intel/neural-compressor/commit/a520746a5eceb1159b481c10a0ebd670226a8c47))
- Add Keras-in/saved_model-out examples for resnet101, inception_v3, mobilenetv2, xception, and resnetv2 (commit [fdd40e](https://github.com/intel/neural-compressor/commit/fdd40e13626f9d4bc826dc281cc920ec1ae3ce2f))

**Validated Configurations**
- Python 3.7, 3.8, 3.9, 3.10
- CentOS 8.3 & Ubuntu 18.04 & Win10
- TensorFlow 2.9, 2.10
- Intel TensorFlow 2.7, 2.8, 2.9
- PyTorch 1.10.0+cpu, 1.11.0+cpu, 1.12.0+cpu
- IPEX 1.10.0, 1.11.0, 1.12.0
- MXNet 1.7, 1.9
- ONNX Runtime 1.10, 1.11, 1.12

1.14

**Highlights**
We are excited to announce the release of Intel® Neural Compressor v1.14! This release introduces a new Pruning API for PyTorch that lets users select better combinations of criterion, pattern, and scheduler to achieve higher pruning accuracy. It also supports Keras input for TensorFlow quantization and self-distilled quantization for better quantization accuracy.
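
To make the criterion/pattern/scheduler combination concrete, here is a minimal sketch of a pruning run, assuming the YAML-driven experimental entry point from the 1.x releases; the exact module path and configuration schema vary between versions, so treat the names below as assumptions and check the documentation for your release.

```python
# Hedged sketch of a 1.x YAML-driven pruning flow (entry point assumed).
import torch
from neural_compressor.experimental import Pruning

model = torch.nn.Sequential(
    torch.nn.Linear(784, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

# prune.yaml is where the criterion (e.g. snip_momentum), the sparse
# pattern (e.g. 2:4 or 1x4), and the scheduler (e.g. iterative) are chosen.
prune = Pruning("prune.yaml")
prune.model = model
pruned_model = prune.fit()  # runs the pruning schedule and returns the sparse model
```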

**New Features**

- Pruning/Sparsity
  - Support new structured sparse patterns N in M and NxM (commit [6cec70](https://github.com/intel/neural-compressor/commit/6cec70bb2c5fd3079e4d572e22a89b152a229941)); both patterns are illustrated after this list
  - Add pruning criteria snip and snip momentum (commit [6cec70](https://github.com/intel/neural-compressor/commit/6cec70bb2c5fd3079e4d572e22a89b152a229941))
  - Add iterative pruning and decay types (commit [6cec70](https://github.com/intel/neural-compressor/commit/6cec70bb2c5fd3079e4d572e22a89b152a229941))
- Quantization
  - Support different Keras formats (h5, keras, Keras saved model) as input, with TensorFlow saved model as output (commit [5a6f09](https://github.com/intel/neural-compressor/commit/5a6f092088e0deaa64601ab5aa88a572180cca8a))
  - Enable Distillation for Quantization (commit [03f1f3](https://github.com/intel/neural-compressor/commit/03f1f3e049494192200c304e051a34d2ce654c18) & [e20c76](https://github.com/intel/neural-compressor/commit/e20c76a148b4aaf97492e297413795aacfdad987))
- GUI
  - Add mixed precision (commit [26e902](https://github.com/intel/neural-compressor/commit/26e902d24e2993a43d8fb52373ab4841377d0efb))
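
The two pattern families above are easy to state precisely: an N-in-M pattern keeps only the N largest-magnitude weights in every group of M consecutive weights (2:4 keeps 2 of every 4), while an NxM pattern zeroes whole NxM blocks. The NumPy sketch below illustrates the masking math only; it is not the library's implementation.

```python
import numpy as np

def mask_n_in_m(w: np.ndarray, n: int = 2, m: int = 4) -> np.ndarray:
    """Keep the n largest-magnitude weights in every group of m (e.g. 2:4)."""
    groups = w.reshape(-1, m)
    keep = np.argsort(-np.abs(groups), axis=1)[:, :n]   # indices of the n strongest
    mask = np.zeros_like(groups)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return (groups * mask).reshape(w.shape)

def mask_blocks_1x4(w: np.ndarray, sparsity: float = 0.5) -> np.ndarray:
    """Zero whole 1x4 blocks, dropping those with the lowest mean magnitude."""
    blocks = w.reshape(-1, 4).copy()
    scores = np.abs(blocks).mean(axis=1)                # per-block saliency
    drop = np.argsort(scores)[: int(len(scores) * sparsity)]
    blocks[drop] = 0.0
    return blocks.reshape(w.shape)

w = np.random.randn(4, 8)
print(mask_n_in_m(w))       # exactly 2 nonzeros in every run of 4 weights
print(mask_blocks_1x4(w))   # half of the 1x4 blocks zeroed
```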

**Improvements**
- Enhance tuning for Quantization with IPEX 1.12 to remove additional Quant/DeQuant (commit [192100](https://github.com/intel/neural-compressor/commit/1921007997d281121bf36d5356629b471800b101))
- Add upstream and download API for the HuggingFace model hub, which can handle configuration files, tokenizer files, and INT8 model weights in the Transformers format (commit [46d945](https://github.com/intel/neural-compressor/commit/46d945348c3144e20ab3f54854a9f4e6566220c4))
- Align with Intel PyTorch extension new API (commit [cc368a](https://github.com/intel/neural-compressor/commit/cc368a8f7433d98fedf699dfcde98b9b6ffe6cc7))
- Add loading from yaml-plus-pt checkpoints for compatibility with the older PyTorch model saving format (commit [a28705](https://github.com/intel/neural-compressor/commit/a28705c09f7be415fdd348a56cc1a300f9159a44)), as sketched below
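
A minimal sketch of that load path, assuming the `neural_compressor.utils.pytorch.load` helper documented in the 1.x releases; the checkpoint directory and the toy model are placeholders.

```python
import torch
from neural_compressor.utils.pytorch import load  # 1.x helper (assumed path)

# The original FP32 architecture must be re-instantiated before loading.
fp32_model = torch.nn.Sequential(torch.nn.Linear(16, 4))

# "./saved_results" is a placeholder for a tuning checkpoint directory;
# load() reapplies the saved quantization config and weights, whether the
# checkpoint is the older yaml-plus-pt layout or a newer single-file one.
int8_model = load("./saved_results", fp32_model)

with torch.no_grad():
    print(int8_model(torch.randn(1, 16)))
```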

**Bug Fixes**
- Quantization
- Fix data type of ONNX Runtime quantization from fp64 to fp32 (commit [cb7b48](https://github.com/intel/neural-compressor/commit/cb7b4859bf3c9c6b6ca6d4140c4d896d97364e74))
- Fix MXNet config issue with the default config (commit [b75ff2](https://github.com/intel/neural-compressor/commit/b75ff270979f2612d82b509dbbb186dcc16e508c))
- Export
- Fix export_to_onnx API (commit [158c7f](https://github.com/intel/neural-compressor/commit/158c7f41f40c7b18ef0eb9f295e9f82b57491ebd))

**Productivity**
- Support TensorFlow 2.10.0 (commit [d6b6c9](https://github.com/intel/neural-compressor/commit/d6b6c9d2b59403fd40476361c0b1aa9f345bcdf8) & [8130e7](https://github.com/intel/neural-compressor/commit/8130e7fcdad97e6a098d59538316449b7a125d8e))
- Support ONNX Runtime 1.12 (commit [498ac4](https://github.com/intel/neural-compressor/commit/498ac48c67db61105e5c83322b2b737c7e7b3760))
- Export PyTorch QAT to ONNX (commit [029a63](https://github.com/intel/neural-compressor/commit/029a6325748210e102a566603ad7220a0fc70eea))
- Add TensorFlow and PyTorch container TPP file (commit [d245b5](https://github.com/intel/neural-compressor/commit/d245b51e369f51a0706d78803bc64089d03655a4))

**Examples**
- Add example of download from HuggingFace model hub and example of upstream models to the hub (commit [46d945](https://github.com/intel/neural-compressor/commit/46d945348c3144e20ab3f54854a9f4e6566220c4))
- Add notebooks for Neural Coder (commit [105db7](https://github.com/intel/neural-compressor/commit/105db7b1c141ef78ac98e83f9c42d37b9b3d6cce))
- Add 2 IPEX examples: bert_large (squad), distilbert_base (squad) (commit [192100](https://github.com/intel/neural-compressor/commit/1921007997d281121bf36d5356629b471800b101))
- Add 2 DDP prune-once-for-all examples: RoBERTa-Base and BERT-Base (commit [26a476](https://github.com/intel/neural-compressor/commit/26a47627895072d7d7bc1ecfa2537cdcf3917e10))

**Validated Configurations**
- Python 3.7, 3.8, 3.9, 3.10
- CentOS 8.3 & Ubuntu 18.04 & Win10
- TensorFlow 2.9, 2.10
- Intel TensorFlow 2.7, 2.8, 2.9
- PyTorch 1.10.0+cpu, 1.11.0+cpu, 1.12.0+cpu
- IPEX 1.10.0, 1.11.0, 1.12.0
- MXNet 1.7, 1.9
- ONNX Runtime 1.10, 1.11, 1.12

1.13.1

Features
* Support experimental auto-coding quantization for PyTorch
  * Post-training static and dynamic quantization for PyTorch (sketched below)
  * Post-training static quantization for IPEX
  * Mixed-precision (BF16, INT8, and FP32) for PyTorch
* Refactor quantization utilities for ONNX Runtime
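
As a concrete reference for the post-training quantization modes above, a minimal sketch using the YAML-driven experimental API from the 1.x releases; the config file, toy model, and calibration data are placeholders, and the exact schema varies between versions.

```python
import torch
from neural_compressor.experimental import Quantization, common  # 1.x API (assumed)

fp32_model = torch.nn.Sequential(torch.nn.Linear(16, 4))     # toy FP32 model
calib_dataset = [(torch.randn(16), 0) for _ in range(32)]    # toy calibration data

quantizer = Quantization("conf.yaml")   # conf.yaml picks static vs. dynamic PTQ
quantizer.model = fp32_model
quantizer.calib_dataloader = common.DataLoader(calib_dataset)  # needed for static PTQ
q_model = quantizer.fit()               # accuracy-driven tuning loop
q_model.save("./saved_results")
```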

Bug Fixes
* Fixed model compression orchestration issue caused by PyTorch v1.11
* Fixed GUI issues

Validated Configurations
* Python 3.8
* CentOS 8.4
* TensorFlow 2.9
* Intel TensorFlow 2.9
* PyTorch 1.12.0+cpu
* IPEX 1.12.0
* MXNet 1.7.0
* ONNX Runtime 1.11.0

1.13

Features

* Quantization
  * Support new quantization APIs for Intel TensorFlow
  * Support FakeQuant (QDQ) quantization format for ITEX
  * Improve INT8 quantization recipes for ONNX Runtime
* Mixed Precision
  * Enhance mixed precision interface to support BF16 (FP16) mixed with FP32 (see the sketch after this list)
* Neural Architecture Search
  * Support SuperNet-based neural architecture search (DyNAS)
* Sparsity
  * Support training for block-wise structured sparsity
* Strategy
  * Support operator-type based tuning strategy
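
A minimal sketch of the mixed precision conversion flow, assuming the experimental `MixedPrecision` component from the 1.x documentation; the names are assumptions, and the backend must support BF16 for the conversion to take effect.

```python
import torch
from neural_compressor.experimental import MixedPrecision  # 1.x component (assumed)

converter = MixedPrecision()
converter.precisions = "bf16"   # mix BF16 into an otherwise FP32 graph
converter.model = torch.nn.Sequential(torch.nn.Linear(16, 4))  # toy model
bf16_model = converter()        # returns the converted model
```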

Productivity

* Support light (default) and full binary packages (default package size 0.5MB, full package size 2MB)
* Add experimental accuracy diagnostic feature for INT8 quantization including tensor statistics visualization and fine-grained precision setting
* Add experimental one-click BF16/INT8 low precision enabling & inference optimization, the industry's first code-free solution

Ecosystem

* Upstream 4 more quantized models (emotion_ferplus, ultraface, arcface, bidaf) to ONNX Model Zoo
* Upstream 10 quantized Transformers-based models to HuggingFace Model Hub

Examples

* Add notebooks for Quantization on Intel DevCloud, Distillation/Sparsity/Quantization for BERT-Mini SST-2, and Neural Architecture Search (DyNAS)
* Add more quantization examples from TensorFlow Model Zoo

Validated Configurations
* Python 3.8, 3.9, 3.10
* CentOS 8.3 & Ubuntu 18.04 & Win10
* TensorFlow 2.7, 2.8, 2.9
* Intel TensorFlow 2.7, 2.8, 2.9
* PyTorch 1.10.0+cpu, 1.11.0+cpu, 1.12.0+cpu
* IPEX 1.10.0, 1.11.0, 1.12.0
* MXNet 1.6.0, 1.7.0, 1.8.0
* ONNX Runtime 1.9.0, 1.10.0, 1.11.0

1.12

Features
* Quantization
  * Support accuracy-aware AMP (INT8/BF16/FP32) on PyTorch
  * Improve post-training quantization (static & dynamic) on PyTorch
  * Improve post-training quantization on TensorFlow
  * Improve QLinear and QDQ quantization modes on ONNX Runtime
  * Improve accuracy-aware AMP (INT8/FP32) on ONNX Runtime
* Pruning
  * Improve pruning-once-for-all for NLP models
* Sparsity
  * Support experimental sparse kernel for reference examples

Productivity
* Support model deployment by loading INT8 models directly from the HuggingFace model hub (see the sketch after this list)
* Improve GUI with optimized model downloading, performance profiling, etc.
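
A minimal sketch of that deployment path, following the project's Transformers examples of this era; the module path, class name, and model id below are assumptions.

```python
from transformers import AutoTokenizer
from neural_compressor.utils.load_huggingface import OptimizedModel  # assumed module path

model_id = "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-static"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OptimizedModel.from_pretrained(model_id)  # fetches config, tokenizer files, INT8 weights

inputs = tokenizer("a charming and often affecting journey", return_tensors="pt")
print(model(**inputs).logits)
```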

Ecosystem
* Highlight simple quantization usage with a few clicks on ONNX Model Zoo
* Upstream INC quantized models (ResNet101, Tiny YoloV3) to ONNX Model Zoo

Examples
* Add Bert-mini distillation + quantization notebook example
* Add DLRM & SSD-ResNet34 quantization examples on IPEX
* Improve BERT structured sparsity training example

Validated Configurations
* Python 3.8, 3.9, 3.10
* CentOS 8.3 & Ubuntu 18.04 & Win10
* TensorFlow 2.6.2, 2.7, 2.8
* Intel TensorFlow 1.15.0 UP3, 2.7, 2.8
* PyTorch 1.8.0+cpu, 1.9.0+cpu, 1.10.0+cpu
* IPEX 1.8.0, 1.9.0, 1.10.0
* MXNet 1.6.0, 1.7.0, 1.8.0
* ONNX Runtime 1.8.0, 1.9.0, 1.10.0

1.11

Features
* Quantization
  * Supported QDQ as experimental quantization format for ONNX Runtime
  * Improved FX symbolic tracing for PyTorch
  * Supported multi-metrics for quantization tuning
* Knowledge Distillation
  * Improved distillation algorithm for intermediate layer knowledge transfer
* Productivity
  * Improved quantization productivity for ONNX Runtime through GUI
  * Improved PyTorch INT8 model save/load methods
* Ecosystem
  * Upstreamed INC quantized Yolov3, DenseNet, Mask-Rcnn, Yolov4 models to ONNX Model Zoo
  * Became a PyTorch ecosystem tool shortly after the PyTorch INC tutorial was published
* Examples
  * Added INC quantized ResNet50 v1.5 and BERT-Large models for IPEX
  * Supported dynamic quantization & weight sharing on the bare-metal reference engine
