Neural-compressor

Latest version: v3.1.1


1.5.1

Intel® Low Precision Optimization Tool v1.5.1 release highlights:

* Gradient-sensitivity pruning for CNN models
* Static quantization support for ONNX NLP models
* Dynamic sequence length support in the NLP dataloader
* Enriched quantization statistics
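
The static quantization support listed above rests on the standard affine INT8 scheme: calibration collects min/max statistics for each tensor, from which a scale and zero point are derived. A minimal pure-Python sketch of that math (the function names here are illustrative, not LPOT APIs):

```python
# Illustrative sketch of affine (asymmetric) INT8 quantization as used by
# static post-training quantization: calibration min/max -> scale/zero-point.

def qparams(rmin: float, rmax: float, qmin: int = -128, qmax: int = 127):
    """Derive scale and zero point from an observed float range."""
    rmin, rmax = min(rmin, 0.0), max(rmax, 0.0)  # range must contain 0.0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = round(qmin - rmin / scale)
    return scale, zero_point

def quantize(xs, scale, zero_point, qmin=-128, qmax=127):
    """Map floats to clamped int8 values."""
    return [max(qmin, min(qmax, round(x / scale) + zero_point)) for x in xs]

def dequantize(qs, scale, zero_point):
    """Recover approximate floats; error is bounded by the scale."""
    return [(q - zero_point) * scale for q in qs]

scale, zp = qparams(-1.0, 2.0)          # calibration saw values in [-1, 2]
q = quantize([-1.0, 0.0, 1.5, 2.0], scale, zp)
x = dequantize(q, scale, zp)            # each x[i] is within one scale step
```

Values outside the calibrated range saturate to the int8 limits, which is why representative calibration data matters for static quantization accuracy.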

Validated Configurations:
* Python 3.6 & 3.7 & 3.8 & 3.9
* CentOS 8.3 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2 & UP3
* PyTorch 1.5.0+cpu, 1.6.0+cpu, 1.8.0+cpu, ipex
* MXNet 1.6.0, 1.7.0
* ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

| Type | Channel | Links | Install Command |
| --- | --- | --- | --- |
| Source | GitHub | https://github.com/intel/lpot.git | `git clone https://github.com/intel/lpot.git` |
| Binary | Pip | https://pypi.org/project/lpot | `pip install lpot` |
| Binary | Conda | https://anaconda.org/intel/lpot | `conda install lpot -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.5

Intel® Low Precision Optimization Tool v1.5 release highlights:

* Add pattern-lock sparsity algorithm for NLP fine-tuning tasks
  - Up to 70% unstructured sparsity and 50% structured sparsity with <2% accuracy loss on 5 BERT fine-tuning tasks
* Add NLP head pruning algorithm for HuggingFace models
  - Up to 3.0x performance speedup within 1.5% accuracy loss on HuggingFace BERT SST-2
* Support model optimization pipeline
* Integrate SigOpt with multi-metric optimization
  - Complements the basic strategies to speed up tuning
* Support TensorFlow 2.5, PyTorch 1.8, and ONNX Runtime 1.8
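
The pattern-lock idea above can be sketched in a few lines: capture the zero pattern of an already-pruned weight tensor once as a mask, then re-apply that mask after every fine-tuning update so gradient steps cannot revive pruned positions. This is a conceptual sketch, not LPOT's implementation:

```python
# Illustrative sketch of pattern-lock sparsity: the sparsity pattern is
# captured once and "locked" by re-applying it after each weight update.

def capture_mask(weights):
    """Record which positions are nonzero (the locked pattern)."""
    return [[1 if w != 0 else 0 for w in row] for row in weights]

def apply_mask(weights, mask):
    """Zero out any weight the locked pattern says must stay zero."""
    return [[w * m for w, m in zip(wrow, mrow)]
            for wrow, mrow in zip(weights, mask)]

w = [[0.5, 0.0], [0.0, -1.2]]           # pre-pruned weights
mask = capture_mask(w)                  # lock this pattern
updated = [[0.6, 0.1], [0.2, -1.1]]     # a gradient step revives zeros...
locked = apply_mask(updated, mask)      # ...so the mask is re-applied
```

Because the mask is fixed, fine-tuning only adjusts the surviving weights, which is what keeps the sparsity level stable across the task's training run.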

Validated Configurations:
* Python 3.6 & 3.7 & 3.8 & 3.9
* CentOS 8.3 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2 & UP3
* PyTorch 1.5.0+cpu, 1.6.0+cpu, 1.8.0+cpu, ipex
* MXNet 1.6.0, 1.7.0
* ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

| Type | Channel | Links | Install Command |
| --- | --- | --- | --- |
| Source | GitHub | https://github.com/intel/lpot.git | `git clone https://github.com/intel/lpot.git` |
| Binary | Pip | https://pypi.org/project/lpot | `pip install lpot` |
| Binary | Conda | https://anaconda.org/intel/lpot | `conda install lpot -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.4.1

Intel® Low Precision Optimization Tool v1.4.1 release highlights:

1.4

Intel® Low Precision Optimization Tool v1.4 release highlights:

Quantization
1. PyTorch FX-based quantization support
2. TensorFlow & ONNX RT quantization enhancement

Pruning
1. Pruning/sparsity API refinement
2. Magnitude-based pruning on PyTorch
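
Magnitude-based pruning follows a simple rule: weights with the smallest absolute values contribute least, so they are zeroed first until a target sparsity is reached. A minimal sketch of that rule (illustrative only, not the PyTorch-integrated implementation):

```python
# Illustrative sketch of magnitude-based pruning: zero out the fraction
# `sparsity` of weights with the smallest absolute values.

def magnitude_prune(weights, sparsity):
    """Return a copy of `weights` with the smallest-|w| fraction zeroed."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)            # number of weights to prune
    threshold = flat[k - 1] if k > 0 else -1.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, -0.01, 0.7, 0.02], 0.5)
# the three smallest-magnitude weights are zeroed, large ones survive
```

In practice this is applied gradually over training epochs so the network can recover accuracy between pruning steps; ties at the threshold may prune slightly more than the exact fraction.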

Model Zoo
1. INT8 key models updated (BERT on TensorFlow, DLRM on PyTorch, etc.)
2. Quantization support for 20+ HuggingFace models

User Experience
1. More comprehensive logging messages
2. UI enhancement with FP32 optimization, auto-mixed precision (BF16/FP32), and graph visualization
3. Online document: https://intel.github.io/lpot

Extended Capabilities
1. Model conversion from QAT to Intel Optimized TensorFlow model

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
* MXNet 1.7.0
* ONNX Runtime 1.6.0, 1.7.0

Distribution:

| Type | Channel | Links | Install Command |
| --- | --- | --- | --- |
| Source | GitHub | https://github.com/intel/lpot.git | `git clone https://github.com/intel/lpot.git` |
| Binary | Pip | https://pypi.org/project/lpot | `pip install lpot` |
| Binary | Conda | https://anaconda.org/intel/lpot | `conda install lpot -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.3.1

Intel® Low Precision Optimization Tool v1.3.1 release highlights:

1. Improved graph optimization without requiring explicit input/output settings

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
* MXNet 1.7.0
* ONNX Runtime 1.6.0, 1.7.0

Distribution:

| Type | Channel | Links | Install Command |
| --- | --- | --- | --- |
| Source | GitHub | https://github.com/intel/lpot.git | `git clone https://github.com/intel/lpot.git` |
| Binary | Pip | https://pypi.org/project/lpot | `pip install lpot` |
| Binary | Conda | https://anaconda.org/intel/lpot | `conda install lpot -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.3

Intel® Low Precision Optimization Tool v1.3 release highlights:

1. FP32 optimization & auto-mixed precision (BF16/FP32) for TensorFlow
2. Dynamic quantization support for PyTorch
3. ONNX Runtime v1.7 support
4. Configurable benchmarking support (multiple instances, warmup, etc.)
5. Multiple batch-size calibration & mAP metric for object detection models
6. Experimental user-facing APIs for better usability
7. Support for various HuggingFace models
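
The configurable benchmarking item above boils down to a common measurement pattern: run a number of untimed warmup iterations so caches and allocator state settle, then time the remaining iterations. A minimal sketch of that pattern (illustrative names, not the tool's benchmark API):

```python
# Illustrative sketch of benchmarking with a configurable warmup phase:
# early iterations are discarded before latency is measured.

import time

def benchmark(fn, warmup: int = 5, iters: int = 20) -> float:
    """Return mean latency in seconds of `fn` over `iters` timed runs."""
    for _ in range(warmup):                  # untimed warmup runs
        fn()
    start = time.perf_counter()
    for _ in range(iters):                   # timed runs
        fn()
    return (time.perf_counter() - start) / iters

mean_s = benchmark(lambda: sum(range(10_000)))
```

Multi-instance benchmarking extends this by launching several pinned copies of the measurement in parallel; warmup and iteration counts are exactly the kind of knobs the release makes configurable.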

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
* MXNet 1.7.0
* ONNX Runtime 1.6.0, 1.7.0

Distribution:

| Type | Channel | Links | Install Command |
| --- | --- | --- | --- |
| Source | GitHub | https://github.com/intel/lpot.git | `git clone https://github.com/intel/lpot.git` |
| Binary | Pip | https://pypi.org/project/lpot | `pip install lpot` |
| Binary | Conda | https://anaconda.org/intel/lpot | `conda install lpot -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.
