Neural-compressor

Latest version: v3.1.1


1.8.1

Not secure
Features
* Knowledge distillation
  * Supported knowledge distillation on TensorFlow
* Pruning
  * Supported multi-node training on TensorFlow
* Acceleration library
  * Supported the Hugging Face [minilm_l6_h384_uncased_sst2](https://huggingface.co/philschmid/MiniLM-L6-H384-uncased-sst2), [bert_base_cased_mrpc](https://huggingface.co/bert-base-cased-finetuned-mrpc), and [bert_base_nli_mean_tokens_stsb](https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens) models (a quick smoke test of one checkpoint follows this list)
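
A quick way to smoke-test one of the linked checkpoints is the Hugging Face `transformers` pipeline API. The snippet below is a minimal sketch that loads the plain SST-2 checkpoint; it assumes `transformers` is installed and does not exercise the INC acceleration library itself.

```python
from transformers import pipeline

# Load the SST-2 sentiment checkpoint linked above (plain transformers, no INC).
classifier = pipeline(
    "text-classification",
    model="philschmid/MiniLM-L6-H384-uncased-sst2",
)

# Prints something like [{'label': ..., 'score': ...}]; label names
# depend on the checkpoint's config.
print(classifier("A touching and often funny film."))
```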

Validated Configurations
* Python 3.6 & 3.7 & 3.8 & 3.9
* CentOS 8.3 & Ubuntu 18.04
* TensorFlow 2.6.2 & 2.7
* Intel TensorFlow 2.4.0, 2.5.0 and 1.15.0 UP3
* PyTorch 1.8.0+cpu, 1.9.0+cpu, IPEX 1.8.0
* MXNet 1.6.0, 1.7.0, 1.8.0
* ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/neural-compressor.git | `$ git clone https://github.com/intel/neural-compressor.git` |
| Binary | Pip | https://pypi.org/project/neural-compressor | `$ pip install neural-compressor` |
| Binary | Conda | https://anaconda.org/intel/neural-compressor | `$ conda install neural-compressor -c conda-forge -c intel` |
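
To confirm which release the Pip command above installed, a minimal check is to read the package's version attribute (assuming it exposes `__version__`, as the 1.x-era releases do):

```python
# Run after `pip install neural-compressor==1.8.1`.
import neural_compressor

print(neural_compressor.__version__)  # expected: 1.8.1
```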

Contact:
Please feel free to contact inc.maintainers@intel.com if you have any questions.

1.8

Not secure
Features
* Knowledge distillation
  * Implemented the algorithms of the paper "Prune Once for All", accepted at the NeurIPS 2021 ENLSP workshop
  * Supported optimization pipelines (knowledge distillation & quantization-aware training) on PyTorch
* Quantization
  * Added support for ONNX Runtime 1.7
  * Added support for TensorFlow 2.6.2 and 2.7
  * Added support for PyTorch 1.10
* Pruning
  * Supported magnitude pruning on TensorFlow (a generic sketch of the rule follows this list)
* Acceleration library
  * Supported the top 10 most downloaded Hugging Face NLP models
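
Magnitude pruning itself is framework-agnostic: the smallest-magnitude weights are zeroed until a target sparsity is reached. The following is a generic NumPy illustration of that rule, not the INC pruning API:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero the smallest-magnitude entries so ~`sparsity` of them become 0."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    magnitudes = np.abs(weights).ravel()
    threshold = np.partition(magnitudes, k - 1)[k - 1]  # k-th smallest magnitude
    # Entries at or below the threshold are zeroed; ties may prune a bit more.
    return np.where(np.abs(weights) > threshold, weights, 0.0)

pruned = magnitude_prune(np.random.randn(4, 4), sparsity=0.5)
print(pruned)  # roughly half the entries are now exactly 0
```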

Productivity
* Added a performance profiling feature to the INC UI service
* Improved the user interface so that quantization takes only a few clicks

Ecosystem
* Added a notebook showing how to use the Hugging Face optimization library (Optimum) with Transformers
* Enabled the top 20 most downloaded Hugging Face NLP models with Optimum
* Upstreamed more INC quantized models to the ONNX Model Zoo

Validated Configurations
* Python 3.6 & 3.7 & 3.8 & 3.9
* CentOS 8.3 & Ubuntu 18.04
* TensorFlow 2.6.2 & 2.7
* Intel TensorFlow 2.4.0, 2.5.0 and 1.15.0 UP3
* PyTorch 1.8.0+cpu, 1.9.0+cpu, IPEX 1.8.0
* MXNet 1.6.0, 1.7.0, 1.8.0
* ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/neural-compressor.git | `$ git clone https://github.com/intel/neural-compressor.git` |
| Binary | Pip | https://pypi.org/project/neural-compressor | `$ pip install neural-compressor` |
| Binary | Conda | https://anaconda.org/intel/neural-compressor | `$ conda install neural-compressor -c conda-forge -c intel` |

Contact:
Please feel free to contact inc.maintainers@intel.com if you have any questions.

1.8.0

3. Support the TensorFlow Object Detection YOLO-V3 model

Validated Configurations:
* Python 3.6 & 3.7 & 3.8
* CentOS 7 & Ubuntu 18.04
* Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2
* PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
* MXNet 1.7.0
* ONNX Runtime 1.6.0, 1.7.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lpot.git | `$ git clone https://github.com/intel/lpot.git` |
| Binary | Pip | https://pypi.org/project/lpot | `$ pip install lpot` |
| Binary | Conda | https://anaconda.org/intel/lpot | `$ conda install lpot -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.7.1

Not secure
The Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) v1.7.1 release features:

Features
* Acceleration library
  * Supported a unified buffer memory allocation policy

Ecosystem
* Upstreamed INC quantized models (alexnet/caffenet/googlenet/squeezenet) to the ONNX Model Zoo

Documentation
* Updated performance and accuracy data

Validated Configurations
* Python 3.6 & 3.7 & 3.8 & 3.9
* CentOS 8.3 & Ubuntu 18.04
* TensorFlow 2.6.0
* Intel TensorFlow 2.4.0, 2.5.0 and 1.15.0 UP3
* PyTorch 1.8.0+cpu, 1.9.0+cpu, IPEX 1.8.0
* MXNet 1.6.0, 1.7.0, 1.8.0
* ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/neural-compressor.git | `$ git clone https://github.com/intel/neural-compressor.git` |
| Binary | Pip | https://pypi.org/project/neural-compressor | `$ pip install neural-compressor` |
| Binary | Conda | https://anaconda.org/intel/neural-compressor | `$ conda install neural-compressor -c conda-forge -c intel` |

Contact:
Please feel free to contact [INC Maintainers](mailto:lpot.maintainers@intel.com) if you have any questions.

1.7

Not secure
The Intel® Neural Compressor (formerly known as Intel® Low Precision Optimization Tool) v1.7 release features:

Features
* Quantization
  * Improved quantization accuracy for SSD-ResNet34 and MobileNet v3 on TensorFlow
* Pruning
  * Supported magnitude pruning on TensorFlow
* Knowledge distillation
  * Supported knowledge distillation on PyTorch (a generic loss sketch follows this list)
* Multi-node support
  * Supported multi-node pruning with a distributed dataloader on PyTorch
  * Supported multi-node inference for benchmarking on PyTorch
* Acceleration library
  * Added a domain-specific acceleration library for NLP models
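
Knowledge distillation trains a student network against a blend of the teacher's temperature-softened predictions and the ground-truth labels. Below is a generic PyTorch sketch of that combined loss (the standard Hinton-style formulation, not the INC API; `temperature` and `alpha` are illustrative hyperparameters):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.9):
    # Soft part: KL divergence between temperature-softened distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard part: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Random tensors standing in for a real batch of logits and labels.
s, t = torch.randn(8, 10), torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```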

Productivity
* Supported configuration-free (pure Python) quantization (a sketch follows this list)
* Improved the user interface so that quantization takes only a few clicks
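
As a rough illustration of the configuration-free flow, here is a minimal sketch in the style of the 1.x experimental API. Entry points shifted across 1.x releases, so treat every name here (`Quantization`, `common.Model`, `dataset`, `fit`, `save`) as era-approximate rather than a pinned reference, and the model path and dummy calibration data as purely illustrative:

```python
from neural_compressor.experimental import Quantization, common

# Post-training quantization without a YAML config (pure Python), 1.x-era style.
quantizer = Quantization()
quantizer.model = common.Model("./mobilenet_v1_1.0_224_frozen.pb")  # illustrative path

# A built-in "dummy" dataset keeps the sketch self-contained; real runs
# would calibrate on representative data.
dataset = quantizer.dataset("dummy", shape=(1, 224, 224, 3))
quantizer.calib_dataloader = common.DataLoader(dataset)

q_model = quantizer.fit()          # run calibration and tuning
q_model.save("./quantized_model")  # persist the quantized result
```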

Ecosystem
* Integrated into the Hugging Face optimization library (Optimum)
* Upstreamed INC quantized models (RN50, VGG16) to the ONNX Model Zoo

Documentation
* Added tutorials and examples for knowledge distillation
* Added tutorials and examples for multi-node training
* Added tutorials and examples for the acceleration library

Validated Configurations
* Python 3.6 & 3.7 & 3.8 & 3.9
* CentOS 8.3 & Ubuntu 18.04
* TensorFlow 2.6.0
* Intel TensorFlow 2.4.0, 2.5.0 and 1.15.0 UP3
* PyTorch 1.8.0+cpu, 1.9.0+cpu, IPEX 1.8.0
* MXNet 1.6.0, 1.7.0, 1.8.0
* ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/neural-compressor.git | `$ git clone https://github.com/intel/neural-compressor.git` |
| Binary | Pip | https://pypi.org/project/neural-compressor | `$ pip install neural-compressor` |
| Binary | Conda | https://anaconda.org/intel/neural-compressor | `$ conda install neural-compressor -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.

1.6

The Intel® Low Precision Optimization Tool v1.6 release features:

Pruning:
* Support pruning and post-training quantization pipeline on PyTorch
* Support pruning during quantization-aware training on PyTorch

Quantization:
* Support post-training quantization on TensorFlow 2.6.0, PyTorch 1.9.0, IPEX 1.8.0, and MXNet 1.8.0
* Support quantization-aware training on TensorFlow 2.x (Keras API)

User Experience:
* Improve quantization productivity with a new UI
* Support quantized model recovery from tuning history

New Models:
* Support ResNet50 from the ONNX Model Zoo

Documentation:
* Add pruned models
* Add quantized MLPerf models

Validated Configurations:

* Python 3.6 & 3.7 & 3.8 & 3.9
* CentOS 8.3 & Ubuntu 18.04
* TensorFlow 2.6.0
* Intel TensorFlow 2.4.0, 2.5.0 and 1.15.0 UP3
* PyTorch 1.8.0+cpu, 1.9.0+cpu, IPEX 1.8.0
* MXNet 1.6.0, 1.7.0, 1.8.0
* ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

| Type | Channel | Links | Install Command |
| -- | -- | -- | -- |
| Source | GitHub | https://github.com/intel/lpot.git | `$ git clone https://github.com/intel/lpot.git` |
| Binary | Pip | https://pypi.org/project/lpot | `$ pip install lpot` |
| Binary | Conda | https://anaconda.org/intel/lpot | `$ conda install lpot -c conda-forge -c intel` |

Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.
