Neural-compressor

Latest version: v3.3.1

Page 7 of 8

1.3.1

Intel® Low Precision Optimization Tool v1.3.1 release features:

1. Improved graph optimization without requiring explicit input/output settings
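Auto-detecting the graph's boundary nodes is what makes an explicit input/output setting unnecessary. A toy sketch of the idea (not LPOT's actual implementation): treat nodes with no incoming edges as inputs and nodes that no other node consumes as outputs.

```python
# Sketch (not LPOT's implementation): infer a graph's input and output
# nodes from its edges, so callers need not specify them explicitly.
# A graph is a dict mapping each node name to the names of its inputs.

def infer_io_nodes(graph):
    all_nodes = set(graph)
    consumed = {src for inputs in graph.values() for src in inputs}
    inputs = sorted(n for n, deps in graph.items() if not deps)  # no incoming edges
    outputs = sorted(all_nodes - consumed)                       # never consumed
    return inputs, outputs

# Toy TensorFlow-style graph: placeholder -> conv -> relu -> softmax
toy = {
    "placeholder": [],
    "conv": ["placeholder"],
    "relu": ["conv"],
    "softmax": ["relu"],
}
print(infer_io_nodes(toy))  # (['placeholder'], ['softmax'])
```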

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
* MXNet 1.7.0
* ONNX Runtime 1.6.0, 1.7.0

Distribution:

Type | Channel | Link | Install Command
-- | -- | -- | --
Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git
Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot
Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.3

Intel® Low Precision Optimization Tool v1.3 release features:

1. FP32 optimization & auto-mixed precision (BF16/FP32) for TensorFlow
2. Dynamic quantization support for PyTorch
3. ONNX Runtime v1.7 support
4. Configurable benchmarking support (multi-instance, warmup, etc.)
5. Multiple-batch-size calibration & mAP metrics for object detection models
6. Experimental user-facing APIs for better usability
7. Support for various Hugging Face models
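Feature 2 above refers to dynamic quantization, where weights are quantized ahead of time but activations are quantized on the fly using ranges observed at runtime (PyTorch exposes this as `torch.quantization.quantize_dynamic`). A minimal pure-Python sketch of the per-tensor step, assuming symmetric int8 quantization:

```python
# Sketch of the idea behind dynamic quantization (feature 2), not LPOT's
# API: each activation tensor is quantized on the fly using the min/max
# observed at runtime, then dequantized after the int8 compute.

def dynamic_quantize(values, num_bits=8):
    """Quantize a list of floats to signed ints using runtime min/max."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for int8
    scale = max(abs(v) for v in values) / qmax or 1.0
    return [round(v / scale) for v in values], scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

acts = [0.5, -1.0, 0.25, 2.0]                      # runtime activations
q, scale = dynamic_quantize(acts)
approx = dequantize(q, scale)                      # close to acts, within one scale step
```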

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
* MXNet 1.7.0
* ONNX Runtime 1.6.0, 1.7.0

Distribution:

Type | Channel | Link | Install Command
-- | -- | -- | --
Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git
Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot
Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.2.1

Intel® Low Precision Optimization Tool v1.2.1 release features:
1. User-facing API backward compatibility with v1.1 and v1.0.
2. Refined experimental user-facing APIs for a better out-of-box experience.

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
* MXNet 1.7.0
* ONNX Runtime 1.6.0

Distribution:

Type | Channel | Link | Install Command
-- | -- | -- | --
Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git
Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot
Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.2

Intel® Low Precision Optimization Tool v1.2 release features:

* Broad TensorFlow model type support
* Operator-wise quantization scheme for ONNX Runtime
* MSE-driven tuning for metric-free use cases
* UX improvements, including preview support for a UI web server
* Support for more key models
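MSE-driven tuning lets the tool rank quantized ops without a user-supplied accuracy metric: ops whose quantized outputs diverge most from the FP32 baseline are the first candidates to fall back to FP32. A minimal sketch, assuming per-op output tensors have already been captured (the op names here are illustrative):

```python
# Sketch of MSE-driven tuning: given per-op FP32 and quantized outputs,
# rank ops by how much quantization distorted them (highest MSE first),
# so the worst offenders can be reverted to FP32 without an accuracy metric.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def rank_fallback_ops(fp32_outputs, int8_outputs):
    """Return op names sorted by descending MSE against the FP32 baseline."""
    scores = {op: mse(fp32_outputs[op], int8_outputs[op]) for op in fp32_outputs}
    return sorted(scores, key=scores.get, reverse=True)

fp32 = {"conv1": [1.0, 2.0], "conv2": [0.5, 0.5]}
int8 = {"conv1": [1.1, 1.9], "conv2": [0.5, 0.5]}
print(rank_fallback_ops(fp32, int8))  # ['conv1', 'conv2']
```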

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
* MXNet 1.7.0
* ONNX Runtime 1.6.0

Distribution:

Type | Channel | Link | Install Command
-- | -- | -- | --
Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git
Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot
Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.1

Intel® Low Precision Optimization Tool v1.1 release features:

* Preview support for new backends (PyTorch/IPEX, ONNX Runtime)
* Built-in industry datasets/metrics and custom registration
* Preliminary input/output node auto-detection for TensorFlow models
* New INT8 quantization recipes: bias correction and label balance
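The bias-correction recipe compensates for the systematic shift that weight quantization introduces in a layer's output distribution by folding the mean error back into the bias. A simplified one-dimensional sketch (illustrative only, not LPOT's internals):

```python
# Sketch of the bias-correction recipe: weight quantization shifts a
# layer's mean output, so fold the mean error back into the bias so that
# the quantized layer's expected output matches the FP32 layer's.

def mean(xs):
    return sum(xs) / len(xs)

def correct_bias(fp32_outputs, quant_outputs, bias):
    """Shift bias by the mean quantization error of the layer's outputs."""
    error = mean(fp32_outputs) - mean(quant_outputs)
    return bias + error

# Quantization scaled this toy layer's outputs down by ~10%; correction
# shifts the bias up by the mean error (0.2) to recenter the distribution.
corrected = correct_bias([1.0, 2.0, 3.0], [0.9, 1.8, 2.7], bias=0.5)
print(round(corrected, 3))  # 0.7
```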

Validated Configurations:
* Python 3.6 & 3.7
* CentOS 7
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu
* MXNet 1.7.0
* ONNX Runtime 1.6.0

Distribution:

Type | Channel | Link | Install Command
-- | -- | -- | --
Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git
Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot
Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.0

Intel® Low Precision Optimization Tool v1.0 release features:

* Refined user-facing APIs for the best out-of-box experience
* TPE tuning strategy (experimental)
* Pruning POC support on PyTorch
* TensorBoard POC support for tuning analysis
* Built-in INT8/dummy dataloader support
* Built-in benchmarking support
* Tuning history for strategy fine-tuning
* Support for TF Keras and checkpoint model types as input
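The built-in benchmarking mentioned above typically separates untimed warmup iterations from the measured ones, so caches and allocators settle before timing starts. A minimal sketch of that pattern (not LPOT's API):

```python
# Sketch of warmup-then-measure benchmarking (not LPOT's API): run a few
# untimed warmup iterations, then report average latency over timed ones.

import time

def benchmark(fn, warmup=3, iterations=10):
    """Return average latency in seconds over the timed iterations."""
    for _ in range(warmup):          # untimed: let caches/allocators settle
        fn()
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return (time.perf_counter() - start) / iterations

latency = benchmark(lambda: sum(range(10_000)))
print(f"avg latency: {latency * 1e6:.1f} us")
```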

Validated Configurations:
* Python 3.6 & 3.7
* CentOS 7
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15 UP1
* PyTorch 1.5.0+cpu
* MXNet 1.7.0

Distribution:

Type | Channel | Link | Install Command
-- | -- | -- | --
Source | GitHub | https://github.com/intel/lp-opt-tool.git | $ git clone https://github.com/intel/lp-opt-tool.git
Binary | Pip | https://pypi.org/project/ilit | $ pip install ilit
Binary | Conda | https://anaconda.org/intel/ilit | $ conda install ilit -c intel

Contact:
Please feel free to contact ilit.maintainers@intel.com if you have any questions.


© 2025 Safety CLI Cybersecurity Inc. All Rights Reserved.