Intel® Low Precision Optimization Tool v1.3 release features:
1. FP32 optimization & auto-mixed precision (BF16/FP32) for TensorFlow
2. Dynamic quantization support for PyTorch
3. ONNX Runtime v1.7 support
5. Configurable benchmarking support (multiple instances, warmup, etc.)
6. Multiple batch size calibration & mAP metrics for object detection models
7. Experimental user-facing APIs for better usability
8. Support for various Hugging Face models
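To illustrate how the tool is typically driven, quantization and tuning are controlled through a user-provided YAML file. The snippet below is a minimal sketch only; the model name, framework, and threshold values are illustrative assumptions and not part of this release note:

```yaml
# Illustrative LPOT configuration sketch (field values are assumptions)
model:
  name: resnet50            # hypothetical model name
  framework: tensorflow     # one of the supported frameworks

quantization:
  calibration:
    sampling_size: 100      # number of calibration samples (example value)

tuning:
  accuracy_criterion:
    relative: 0.01          # tolerate up to 1% relative accuracy loss
```

A tuning run then loads this file and searches for a quantized model that meets the stated accuracy criterion.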
Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* Centos 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
* MXNet 1.7.0
* ONNX Runtime 1.6.0, 1.7.0
Distribution:
| Channel | Links | Install Command |
| -- | -- | -- |
| Source | [GitHub](https://github.com/intel/lpot.git) | `$ git clone https://github.com/intel/lpot.git` |
| Binary | [Pip](https://pypi.org/project/lpot) | `$ pip install lpot` |
| Binary | [Conda](https://anaconda.org/intel/lpot) | `$ conda install lpot -c conda-forge -c intel` |
Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.