Neural-compressor

Latest version: v3.1.1

1.2.1

The Intel® Low Precision Optimization Tool v1.2.1 release features:
1. Backward compatibility of the user-facing APIs with v1.1 and v1.0.
2. Refined experimental user-facing APIs for a better out-of-box experience (see the sketch below).
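
As a rough illustration of the refined experimental API, the snippet below sketches a YAML-driven post-training quantization run. The module path `lpot.experimental`, the `common.Model` wrapper, and the file names are assumptions about the 1.2.x line and may differ in detail.

```python
# Minimal sketch of the experimental user-facing API (assumed layout for the
# LPOT 1.2.x line; exact module paths and class names may differ).
from lpot.experimental import Quantization, common

quantizer = Quantization('conf.yaml')        # tuning and accuracy settings come from the YAML config
quantizer.model = common.Model('model.pb')   # wrap the framework model; the framework is auto-detected
q_model = quantizer()                        # run the accuracy-driven tuning loop
```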

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
* MXNet 1.7.0
* ONNX Runtime 1.6.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git |
| Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot |
| Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.2

The Intel® Low Precision Optimization Tool v1.2 release features:

* Broad TensorFlow model type support
* Operator-wise quantization scheme for ONNX Runtime
* MSE-driven tuning for metric-free use cases (a config sketch follows this list)
* UX improvements, including preview support for a UI web server
* Support for more key models
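
To illustrate the metric-free path, the snippet below writes a minimal tuning configuration that selects the MSE strategy, so tuning is guided by the mean-squared error between FP32 and quantized tensors rather than by a user-supplied accuracy metric. The YAML field names are assumptions about this release's schema and may vary between versions.

```python
# Illustrative only: a minimal conf.yaml that selects the MSE tuning strategy.
# Field names are assumed for this release line and may differ between versions.
mse_conf = """
model:
  name: my_model
  framework: tensorflow
tuning:
  strategy:
    name: mse          # tune by FP32-vs-quantized tensor MSE; no accuracy metric required
  exit_policy:
    timeout: 0         # stop at the first configuration that satisfies the exit policy
"""

with open('conf.yaml', 'w') as f:
    f.write(mse_conf)
```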

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
* MXNet 1.7.0
* ONNX Runtime 1.6.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git |
| Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot |
| Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.1

The Intel® Low Precision Optimization Tool v1.1 release features:

* Preview support for new backends (PyTorch/IPEX, ONNX Runtime)
* Built-in industry datasets/metrics and custom registration (a custom-metric sketch follows this list)
* Preliminary input/output node auto-detection for TensorFlow models
* New INT8 quantization recipes: bias correction and label balance
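
As a rough sketch of custom metric registration, the class below follows the update/reset/result pattern that such evaluation metrics typically implement. How the class is registered with the tool (via YAML or an API call) is not shown and may differ per release; treat the snippet as illustrative only.

```python
# Illustrative custom metric following an update/reset/result protocol.
# Only the metric logic is shown; the registration mechanism is release-specific.
class Top1Accuracy:
    def __init__(self):
        self.correct = 0
        self.total = 0

    def update(self, preds, labels):
        # preds: iterable of predicted class ids; labels: ground-truth class ids
        for pred, label in zip(preds, labels):
            self.correct += int(pred == label)
            self.total += 1

    def reset(self):
        self.correct = 0
        self.total = 0

    def result(self):
        return self.correct / self.total if self.total else 0.0
```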

Validated Configurations:
* Python 3.6 & 3.7
* CentOS 7
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu
* MXNet 1.7.0
* ONNX Runtime 1.6.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lpot.git | $ git clone https://github.com/intel/lpot.git |
| Binary | Pip | https://pypi.org/project/lpot | $ pip install lpot |
| Binary | Conda | https://anaconda.org/intel/lpot | $ conda install lpot -c conda-forge -c intel |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.0

The Intel® Low Precision Optimization Tool v1.0 release features:

* Refined user-facing APIs for the best out-of-box experience
* TPE tuning strategy (experimental)
* Pruning POC support on PyTorch
* TensorBoard POC support for tuning analysis
* Built-in INT8/dummy dataloader support
* Built-in benchmarking support
* Tuning history for strategy fine-tuning
* Support for TensorFlow Keras and checkpoint model types as input

Validated Configurations:
* Python 3.6 & 3.7
* CentOS 7
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15UP1
* PyTorch 1.5.0+cpu
* MXNet 1.7.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lp-opt-tool.git | $ git clone https://github.com/intel/lp-opt-tool.git |
| Binary | Pip | https://pypi.org/project/ilit | $ pip install ilit |
| Binary | Conda | https://anaconda.org/intel/ilit | $ conda install ilit -c intel |

Contact:
Please feel free to contact ilit.maintainers@intel.com if you have any questions.

1.0b

The Intel® Low Precision Optimization Tool v1.0 beta release features:
* Built-in dataloaders and evaluators
* Random and exhaustive tuning strategies
* Mixed-precision tuning support on TensorFlow (INT8/BF16/FP32)
* Quantization-aware training POC support on PyTorch
* TensorFlow mainstream version support, including 1.15.2, 1.15UP1 and 2.1.0
* 50+ validated models

Supported Models:
| TensorFlow Model | Category |
|---------------------------------------------------------------------|------------|
|[ResNet50 V1](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[ResNet50 V1.5](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[ResNet101](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[Inception V1](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[Inception V2](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[Inception V3](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[Inception V4](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[ResNetV2_50](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[ResNetV2_101](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[ResNetV2_152](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[Inception ResNet V2](examples/tensorflow/image_recognition/README.md)| Image Recognition |
|[SSD ResNet50 V1](examples/tensorflow/object_detection/README.md) | Object Detection |
|[Wide & Deep](examples/tensorflow/recommendation/wide_deep_large_ds/WND_README.md) | Recommendation |
|[VGG16](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[VGG19](examples/tensorflow/image_recognition/README.md) | Image Recognition |
|[Style_transfer](examples/tensorflow/style_transfer/README.md) | Style Transfer |


| PyTorch Model | Category |
|---------------------------------------------------------------------|------------|
|[BERT-Large RTE](examples/pytorch/language_translation/README.md) | Language Translation |
|[BERT-Large QNLI](examples/pytorch/language_translation/README.md) | Language Translation |
|[BERT-Large CoLA](examples/pytorch/language_translation/README.md) | Language Translation |
|[BERT-Base SST-2](examples/pytorch/language_translation/README.md) | Language Translation |
|[BERT-Base RTE](examples/pytorch/language_translation/README.md) | Language Translation |
|[BERT-Base STS-B](examples/pytorch/language_translation/README.md) | Language Translation |
|[BERT-Base CoLA](examples/pytorch/language_translation/README.md) | Language Translation |
|[BERT-Base MRPC](examples/pytorch/language_translation/README.md) | Language Translation |
|[DLRM](examples/pytorch/recommendation/README.md) | Recommendation |
|[BERT-Large MRPC](examples/pytorch/language_translation/README.md) | Language Translation |
|[ResNext101_32x8d](examples/pytorch/image_recognition/imagenet/README.md) | Image Recognition |
|[BERT-Large SQUAD](examples/pytorch/language_translation/README.md) | Language Translation |
|[ResNet50 V1.5](examples/pytorch/image_recognition/imagenet/README.md) | Image Recognition |
|[ResNet18](examples/pytorch/image_recognition/imagenet/README.md) | Image Recognition |
|[Inception V3](examples/pytorch/image_recognition/imagenet/README.md) | Image Recognition |
|[YOLO V3](examples/pytorch/object_detection/yolo_v3/README.md) | Object Detection |
|[Peleenet](examples/pytorch/image_recognition/peleenet/README.md) | Image Recognition |
|[ResNest50](examples/pytorch/image_recognition/resnest/README.md) | Image Recognition |
|[SE_ResNext50_32x4d](examples/pytorch/image_recognition/se_resnext/README.md) | Image Recognition |
|[ResNet50 V1.5 QAT](examples/pytorch/image_recognition/imagenet_qat/README.md) | Image Recognition |
|[ResNet18 QAT](examples/pytorch/image_recognition/imagenet_qat/README.md) | Image Recognition |

| MXNet Model | Category |
|---------------------------------------------------------------------|------------|
|[ResNet50 V1](examples/mxnet/image_recognition/README.md) | Image Recognition |
|[MobileNet V1](examples/mxnet/image_recognition/README.md) | Image Recognition |
|[MobileNet V2](examples/mxnet/image_recognition/README.md) | Image Recognition |
|[SSD-ResNet50](examples/mxnet/object_detection/README.md) | Object Detection |
|[SqueezeNet V1](examples/mxnet/image_recognition/README.md) | Image Recognition |
|[ResNet18](examples/mxnet/image_recognition/README.md) | Image Recognition |
|[Inception V3](examples/mxnet/image_recognition/README.md) | Image Recognition |

Known Issues:
* The TensorFlow ResNet50 v1.5 INT8 model crashes on the TensorFlow 1.15 UP1 branch

Validated Configurations:
* Python 3.6 & 3.7
* CentOS 7
* Intel TensorFlow 1.15.2, 2.1.0 and 1.15UP1
* PyTorch 1.5
* MXNet 1.6

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lp-opt-tool.git | $ git clone https://github.com/intel/lp-opt-tool.git |
| Binary | Pip | https://pypi.org/project/ilit | $ pip install ilit |
| Binary | Conda | https://anaconda.org/intel/ilit | $ conda config --add channels intel && conda install ilit |

Contact:
Please feel free to contact ilit.maintainers@intel.com if you have any questions.

1.0a

Intel® Low Precision Optimization Tool (iLiT) is an open-source Python library intended to deliver a unified low-precision inference solution across multiple Intel-optimized DL frameworks on both CPU and GPU. It supports automatic accuracy-driven tuning strategies, along with additional objectives such as performance, model size, and memory footprint. It also provides easy extensibility for new backends, tuning strategies, metrics, and objectives.
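
The accuracy-driven tuning idea can be sketched in a few lines of framework-agnostic Python: a strategy proposes candidate quantization configurations, each candidate is quantized and evaluated, and the loop stops once the accuracy criterion is met. This is a conceptual sketch, not the library's actual API; `quantize_with`, `evaluate`, and the criterion values are hypothetical placeholders.

```python
# Conceptual sketch of an accuracy-driven tuning loop (not the iLiT API).
# quantize_with() and evaluate() are hypothetical placeholders standing in for
# framework-specific quantization and a user-supplied evaluation function.
def tune(model, candidate_configs, quantize_with, evaluate,
         baseline_acc, relative_loss=0.01):
    for cfg in candidate_configs:              # configs proposed by a strategy (basic/bayesian/mse)
        q_model = quantize_with(model, cfg)    # apply per-op quantization settings
        acc = evaluate(q_model)                # user metric, e.g. Top-1 or F1
        if acc >= baseline_acc * (1 - relative_loss):
            return q_model                     # accuracy criterion met; secondary objectives
                                               # (latency, model size, footprint) could rank ties
    return None                                # no candidate met the criterion
```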

Feature List:

* Unified low-precision quantization interface across multiple Intel-optimized frameworks (TensorFlow, PyTorch, and MXNet)
* Built-in tuning strategies, including Basic, Bayesian, and MSE
* Built-in evaluation metrics, including TopK (image classification), F1 (NLP), and CocoMAP (object detection)
* Built-in tuning objectives, including Performance, ModelSize, and Footprint
* Extensible API design for adding new strategies, framework backends, metrics, and objectives
* KL-divergence calibration for TensorFlow and MXNet (a conceptual sketch follows this list)
* Tuning process resume from a checkpoint
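
For context, KL-divergence calibration picks an activation clipping threshold by comparing the FP32 activation histogram with its quantized approximation and keeping the threshold that minimizes the KL divergence. The NumPy sketch below shows the idea in simplified form; it is not the tool's implementation, and the bin counts and epsilon handling are illustrative choices.

```python
import numpy as np

# Simplified illustration of KL-divergence calibration: choose the clipping
# threshold whose clipped-and-merged histogram best matches the original FP32
# activation distribution. Not the tool's actual implementation.
def kl_calibrate(activations, num_bins=2048, num_quant_bins=128):
    hist, edges = np.histogram(np.abs(activations), bins=num_bins)
    best_kl, best_threshold = np.inf, edges[-1]
    for i in range(num_quant_bins, num_bins + 1):
        p = hist[:i].astype(np.float64)
        p[-1] += hist[i:].sum()                    # clip: fold the tail into the last bin
        factor = i / num_quant_bins                # merge bins to approximate quantization
        q = np.zeros_like(p)
        for j in range(num_quant_bins):
            lo, hi = int(j * factor), int((j + 1) * factor)
            q[lo:hi] = p[lo:hi].sum() / max(hi - lo, 1)
        p /= p.sum()
        q /= q.sum()
        mask = p > 0
        kl = np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], 1e-12)))
        if kl < best_kl:
            best_kl, best_threshold = kl, edges[i]
    return best_threshold                          # symmetric clipping range for calibration
```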

Supported Models:

Model | Framework | Model | Framework | Model | Framework
-- | -- | -- | -- | -- | --
ResNet50 V1 | MXNet | BERT-Large RTE | PyTorch | ResNet18 | PyTorch
MobileNet V1 | MXNet | BERT-Large QNLI | PyTorch | ResNet50 V1 | TensorFlow
MobileNet V2 | MXNet | BERT-Large CoLA | PyTorch | ResNet50 V1.5 | TensorFlow
SSD-ResNet50 | MXNet | BERT-Base SST-2 | PyTorch | ResNet101 | TensorFlow
SqueezeNet V1 | MXNet | BERT-Base RTE | PyTorch | Inception V1 | TensorFlow
ResNet18 | MXNet | BERT-Base STS-B | PyTorch | Inception V2 | TensorFlow
Inception V3 | MXNet | BERT-Base CoLA | PyTorch | Inception V3 | TensorFlow
DLRM | PyTorch | BERT-Base MRPC | PyTorch | Inception V4 | TensorFlow
BERT-Large MRPC | PyTorch | ResNet101 | PyTorch | Inception ResNet V2 | TensorFlow
BERT-Large SQUAD | PyTorch | ResNet50 V1.5 | PyTorch | SSD ResNet50 V1 | TensorFlow

Known Issues:
* Statistics collection for the KL algorithm is slow in TensorFlow due to the lack of tensor inspector APIs
* The MSE tuning strategy is not supported in PyTorch

Validated Configurations:
* Python 3.6 & 3.7
* CentOS 7
* TensorFlow 1.15, 2.0 and 2.1
* PyTorch 1.5
* MXNet 1.6

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lp-opt-tool.git | $ git clone https://github.com/intel/lp-opt-tool.git |
| Binary | Pip | https://pypi.org/project/ilit | $ pip install ilit |
| Binary | Conda | https://anaconda.org/intel/ilit | $ conda config --add channels intel && conda install ilit |

Contact:
Please feel free to contact ilit.maintainers@intel.com if you have any questions.
