neural-compressor

Latest version: v3.1.1

2.1.1

Not secure

**Bug Fixes**
- Fix calibration max value issue for SmoothQuant (commit [b28bfd](https://github.com/intel/neural-compressor/commit/b28bfdfac66b9fe5409586d363dd34c1dc684a42))
- Fix exception for untraceable model during SmoothQuant (commit [b28bfd](https://github.com/intel/neural-compressor/commit/b28bfdfac66b9fe5409586d363dd34c1dc684a42))
- Fix depthwise conv issue for SmoothQuant (commit [0e5942](https://github.com/intel/neural-compressor/commit/0e594297bc7bbb3a9694cb344c916eb43164e36f))
- Fix Keras model mixed-precision conversion issue (commit [997c57](https://github.com/intel/neural-compressor/commit/997c579d612a6de9d51908c3ee94d419f7b6ffb5))

**Examples**
- Add gpt-j alpha-tuning example (commit [3b7d28](https://github.com/intel/neural-compressor/commit/3b7d282c9106481141f2cd3f36c54a041493c63c))
- Migrate notebook examples to the INC 2.0 API (commit [54d2f5](https://github.com/intel/neural-compressor/commit/54d2f580066866d450c0dafc99a34da8bc8680e8))
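
As a rough illustration of what the alpha-tuning example above exercises, here is a minimal, hypothetical sketch of enabling SmoothQuant with automatic alpha selection through the INC 2.x recipe interface; the `"alpha": "auto"` value and the surrounding names (`fp32_model`, `calib_dataloader`) are assumptions, not taken from these notes.

```python
# Hypothetical sketch: SmoothQuant with auto alpha tuning (INC 2.x recipes).
# `fp32_model` and `calib_dataloader` are user-supplied placeholders.
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

conf = PostTrainingQuantConfig(
    recipes={"smooth_quant": True, "smooth_quant_args": {"alpha": "auto"}}
)
q_model = fit(model=fp32_model, conf=conf, calib_dataloader=calib_dataloader)
```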

**Validated Configurations**
- CentOS 8.4 & Ubuntu 22.04
- Python 3.8
- TensorFlow 2.11.0
- ITEX 1.1.0
- PyTorch/IPEX 1.13.0+cpu
- ONNX Runtime 1.13.1
- MXNet 1.9.1

2.1

Not secure

**Highlights**
- Support and enhance SmoothQuant on popular large language models (LLMs) (e.g., BLOOM-176B, OPT-30B, GPT-J-6B, etc.)
- Support native Keras model quantization (Keras model as input, and quantized Keras model as output)
- Provide auto-tuning strategy to improve quantization productivity
- Support converting TensorFlow INT8 models to ONNX INT8 models
- Polish documentation to make it easier for users to get started
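
The SmoothQuant highlight above maps to the 2.x recipe interface. Below is a minimal sketch, assuming the `PostTrainingQuantConfig` recipes API and using a dummy model and calibration data; the alpha value is a placeholder to tune per model.

```python
# Minimal SmoothQuant sketch against the INC 2.x API; model, data, and
# alpha=0.5 are illustrative placeholders, not values from these notes.
import torch
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 10),
)
dataset = Datasets("pytorch")["dummy"](shape=(16, 3, 224, 224))
calib_dataloader = DataLoader(framework="pytorch", dataset=dataset)

conf = PostTrainingQuantConfig(
    recipes={
        "smooth_quant": True,
        "smooth_quant_args": {"alpha": 0.5},  # smoothing strength, tuned per model
    }
)
q_model = fit(model=model, conf=conf, calib_dataloader=calib_dataloader)
q_model.save("./saved_int8_model")
```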

**Features**
- [Quantization] Support SmoothQuant and verify with LLMs (commit [cbb5cf](https://github.com/intel/neural-compressor/commit/cbb5cf53017540fb57087e6298befc4679f92a45)) (commit [08e255](https://github.com/intel/neural-compressor/commit/08e25516cb364f7d6bc69b49c6f3ff01269c4d31)) (commit [12c101](https://github.com/intel/neural-compressor/commit/12c101f3c6c2f4bf9822c0d5d75f88b11d7d62e7))
- [Quantization] Support Keras functional model quantization with Keras model in, quantized Keras model out (commit [efd737](https://github.com/intel/neural-compressor/commit/efd737956161af23c9cc14d25952063e63e7deda))
- [Strategy] Add auto quantization level as the default tuning process (commit [cdfb99](https://github.com/intel/neural-compressor/commit/cdfb994442e79c2ebcbc84d173d6f5a95854ac24))
- [Strategy] Integrate quantization recipes into tuning strategy (commit [44d176](https://github.com/intel/neural-compressor/commit/44d176115b6233ea6827a4de913c801c254326f9))
- [Strategy] Extend the strategy capability for adding the new data type (commit [d0059c](https://github.com/intel/neural-compressor/commit/d0059c4d0536bcffa96ef35c893731262e571cf6))
- [Strategy] Enable tuning strategy level multi-node distribute quantization (commit [e1fe50](https://github.com/intel/neural-compressor/commit/e1fe50e2f627575337ce10f566a89c2b1caddf45))
- [AMP] Support ONNX Runtime with FP16 (commit [108c24](https://github.com/intel/neural-compressor/commit/108c245d8c65ff62d76ab4403271dfa1d7788550))
- [Productivity] Export TensorFlow models to ONNX QDQ format at both FP32 and INT8 precision (commit [33a235](https://github.com/intel/neural-compressor/commit/33a2352f72a9ea73cbee4f73deba59baacf1e276))
- [Productivity] Support PT/IPEX v2.0 (commit [dbf138](https://github.com/intel/neural-compressor/commit/dbf1381fb306bff017f90bea67af481d87a11877))
- [Productivity] Support ONNX Runtime v1.14.1 (commit [146759](https://github.com/intel/neural-compressor/pull/579/commits/1467597c09ed50ad1d0d59a6d4d325e8d2c45c1c))
- [Productivity] GitHub IO docs now support historical versions
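
For the TensorFlow-to-ONNX export item above, a sketch assuming the 2.x `TF2ONNXConfig` helper; `q_model` stands for a quantized model returned by `quantization.fit()` and the file name is a placeholder.

```python
# Hypothetical sketch: export a quantized TensorFlow model to ONNX QDQ.
from neural_compressor.config import TF2ONNXConfig

int8_config = TF2ONNXConfig(dtype="int8")  # dtype="fp32" exports the FP32 graph
q_model.export("model_int8.onnx", int8_config)
```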

**Improvement**
- Remove the dependency on experimental API (commit [6e10ef](https://github.com/intel/neural-compressor/commit/6e10efdc5db7ed45c5abeedfce96ca724e807418))
- Enhance GUI diagnosis function on model graph and tensor histogram showing style (commit [9f0891](https://github.com/intel/neural-compressor/commit/9f08912dd90bc2613640086259ce9c8275cf253c))
- Optimize memory usage for PyTorch adaptor (commit [c295a7](https://github.com/intel/neural-compressor/commit/c295a7f17bac7abf70fe79d77944ed935a4edd68)), ONNX adaptor (commit [8cbf2e](https://github.com/intel/neural-compressor/commit/8cbf2e003881215c11161f5be2c39f2927546c07)), TensorFlow adaptor (commit [ad0f1e](https://github.com/intel/neural-compressor/commit/ad0f1e06bce0c7bec47e829c6cd44b28982cc6ab)), and tuning strategy (commit [c49300](https://github.com/intel/neural-compressor/commit/c493000bc199733d599e45248c3a6eb9159a680c)) to support LLM
- Refine ONNX Runtime QDQ quantization graph (commit [c64a5b](https://github.com/intel/neural-compressor/commit/c64a5baeece0b661c6df7c75f87ca4e72007b291))
- Enable ONNX model quantization with the NVIDIA GPU TensorRT EP (commit [ba42d0](https://github.com/intel/neural-compressor/commit/ba42d0082cf47c9462519cd104b26344fac112a6))
- Improve code line coverage to 85%
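
The TensorRT EP item above suggests a config along these lines, assuming the 2.x backend name `onnxrt_trt_ep`; the model path and calibration dataloader are placeholders.

```python
# Hypothetical sketch: quantize an ONNX model for the NVIDIA TensorRT EP.
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.quantization import fit

conf = PostTrainingQuantConfig(device="gpu", backend="onnxrt_trt_ep")
q_model = fit(model="model.onnx", conf=conf, calib_dataloader=calib_dataloader)
```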

**Bug Fixes**
- Fix mix precision config setting (commit [4b71a8](https://github.com/intel/neural-compressor/commit/4b71a82b3ddfd65e7883972b4af09a3717177396))
- Fix multi-instance benchmark on Windows (commit [1f89aa](https://github.com/intel/neural-compressor/commit/1f89aad33d3336e2fa4c0376d2c92f253c764532))
- Fix domain detection for large ONNX model (commit [70a566](https://github.com/intel/neural-compressor/commit/70a566239d443154821bbcd0cf51a984b41f1fb8))

**Examples**
- Migrate examples with INC v2.0 API
- Enable LLMs (e.g., GPT-NeoX, T5 Large, BLOOM-176B, OPT-30B, GPT-J-6B, etc.)
- Enable Keras-in/Keras-out examples (commit [efd737](https://github.com/intel/neural-compressor/commit/efd737956161af23c9cc14d25952063e63e7deda))
- Enable multi-node training examples on CPU (e.g., RN50 distillation, QAT, pruning examples)
- Add 15+ Huggingface (HF) examples with ONNX Runtime backend and update quantized models into HF (commit [a4228d](https://github.com/intel/neural-compressor/commit/a4228df1b9e2e896b6740a9098595d0663357bb1))
- Add 2 examples for PT2ONNX model export (commit [26db4a](https://github.com/intel/neural-compressor/commit/26db4ab5aa50c859fb20e2308227bb6b0180a8c8))
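
The PT2ONNX export examples above use a flow along these lines, assuming the 2.x `Torch2ONNXConfig` helper; shapes, names, and the opset version are placeholders.

```python
# Hypothetical sketch: export a quantized PyTorch model to ONNX QDQ.
import torch
from neural_compressor.config import Torch2ONNXConfig

int8_onnx_config = Torch2ONNXConfig(
    dtype="int8",
    opset_version=14,
    quant_format="QDQ",
    example_inputs=torch.randn(1, 3, 224, 224),
    input_names=["input"],
    output_names=["output"],
)
q_model.export("model_int8.onnx", int8_onnx_config)
```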

**Documentations**
- Polish documentation: simplified GitHub main page, easier-to-read IO docs structure, hands-on API migration guide, more detailed new-API instructions, refreshed API docs template, etc.

**Validated Configurations**
- CentOS 8.4 & Ubuntu 22.04
- Python 3.7, 3.8, 3.9, 3.10
- TensorFlow 2.10.1, 2.11.0, 2.12.0
- ITEX 1.0.0, 1.1.0
- PyTorch/IPEX 1.12.1+cpu, 1.13.0+cpu, 2.0.0+cpu
- ONNX Runtime 1.12.1, 1.13.1, 1.14.1
- MXNet 1.9.1

2.0

Not secure

**Highlights**
- Support the quantization for Intel® Xeon® Scalable Processors (e.g., Sapphire Rapids), Intel® Data Center GPU Flex Series, and Intel® Max Series CPUs & GPUs
- Provide the new unified APIs for post-training optimizations (static/dynamic quantization) and during-training optimizations (quantization-aware training, pruning/sparsity, distillation, etc.)
- Support advanced fine-grained auto mixed precision (AMP) across all supported precisions (e.g., INT8, BF16, and FP32)
- Improve the model conversion from PyTorch INT8 model to ONNX INT8 model
- Support the zero-code quantization in Visual Studio Code and JupyterLab with Neural Coder plugins
- Support the quantization for 10K+ transformer-based models including large language models (e.g., T5, GPT, Stable Diffusion, etc.)
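
For the fine-grained AMP highlight above, a minimal sketch assuming the 2.x `mix_precision` entry point; the conversion targets BF16 by default and falls back to FP32 where an op is unsupported, and `fp32_model` is a placeholder.

```python
# Hypothetical sketch: convert a model with the fine-grained AMP API.
from neural_compressor import MixedPrecisionConfig
from neural_compressor.mix_precision import fit

conf = MixedPrecisionConfig()                  # default target precision: bf16
bf16_model = fit(model=fp32_model, conf=conf)  # fp32_model: user-supplied model
```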

**Features**
- [Quantization] Experimental Keras model in, quantized Keras model out (commit [4fa753](https://github.com/intel/neural-compressor/commit/4fa75310de6b84fcb1333ecdc0d2ac5302e445bd))
- [Quantization] Support quantization for ITEX v1.0 on Intel CPU and Intel GPU (commit [a2fcb2](https://github.com/intel/neural-compressor/commit/a2fcb29b676f5876729c03d9727ae8298329113f))
- [Quantization] Support hardware-neutral quantized ONNX QDQ models and validate on multiple devices (Intel CPU, NVIDIA GPU, AMD CPU, and ARM CPU) through ONNX Runtime
- [Quantization] Enhance TensorFlow QAT: remove TFMOT dependency (commit [1deb7d](https://github.com/intel/neural-compressor/commit/1deb7d2f80524714bd0c6c1192842fea9f0e340e))
- [Quantization] Distinguish frameworks, backends, and output formats for the ONNX Runtime backend (commit [2483a8](https://github.com/intel/neural-compressor/commit/2483a84b7d4aceaee5f4ece134fcb53481f58c6d))
- [Quantization] Support PyTorch/IPEX 1.13 and TensorFlow 2.11 (commit [b7a2ef](https://github.com/intel/neural-compressor/commit/b7a2ef2036f14ec53613cdebca83ff34fd8ae810))
- [AMP] Support more TensorFlow bf16 ops (commit [98d3c8](https://github.com/intel/neural-compressor/commit/98d3c83d11c7f50c86ab7c917e95e17f5d729ad1))
- [AMP] Add torch.amp bf16 support for IPEX backend (commit [2a361b](https://github.com/intel/neural-compressor/commit/2a361b848cd7d1fb39e531d897ef24419fc1de2a))
- [Strategy] Add accuracy-first tuning strategies: MSE_v2 (commit [80311f](https://github.com/intel/neural-compressor/commit/80311f60cdcf4d70ba59b250d60eda0833384ea8)) and HAWQ (commit [83018e](https://github.com/intel/neural-compressor/commit/83018ef28170f8d2659dd30bb28738857e5c0dec)) to solve the accuracy problem of specific models
- [Strategy] Refine the tuning strategy; add more data types and more op attributes (e.g., per-tensor/per-channel, dynamic/static)
- [Pruning] Add progressive pruning and pattern lock pruning_type (commit [f46bb1](https://github.com/intel/neural-compressor/commit/f46bb127ab03e8b487dd668bce8aace202a08477))
- [Pruning] Add per_channel sparse pattern (commit [f46bb1](https://github.com/intel/neural-compressor/commit/f46bb127ab03e8b487dd668bce8aace202a08477))
- [Distillation] Support self-distillation towards efficient and compact neural networks (commit [acdd4c](https://github.com/intel/neural-compressor/commit/acdd4ca9b0ed77b796d7a08fffad25df953a72ca))
- [Distillation] Enhance API of intermediate layers knowledge distillation (commit [3183f6](https://github.com/intel/neural-compressor/commit/3183f68026801d3074ddf4378f30d019be0ff89b))
- [Neural Coder] Detect devices and ISA to adjust the optimization (commit [691d0b](https://github.com/intel/neural-compressor/commit/691d0b870b3d08bbab8b0a799b658d9637e18f17))
- [Neural Coder] Automatically quantize with ONNX Runtime backend (commit [f711b4](https://github.com/intel/neural-compressor/commit/f711b4c92798b5205a8f8665929fb10a89da7e71))
- [Neural Coder] Add Neural Coder Python Launcher (commit [7bb92d](https://github.com/intel/neural-compressor/commit/7bb92d0d05093b18695b192da7ce860ad80d623e))
- [Neural Coder] Add Visual Studio Plugin (commit [dd39ca](https://github.com/intel/neural-compressor/commit/dd39ca035c09604caf2425534a647e7dda9ad045))
- [Productivity] Support Pruning in GUI (commit [d24fea](https://github.com/intel/neural-compressor/commit/d24fea6698c0e238f8c4d8e64a21f7ba77497d8e))
- [Productivity] Use config-driven APIs to replace YAML
- [Productivity] Export ONNX QLinear to QDQ format (commit [e996a9](https://github.com/intel/neural-compressor/commit/e996a9359fbc24db4925f7bdb8b5529c3f97f283))
- [Productivity] Validate 10K+ transformer-based models including large language models (e.g., T5, GPT, Stable Diffusion, etc.)
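
For the experimental Keras-in/Keras-out feature listed above, a sketch of the likely flow; whether the ITEX backend flag is required on this path is our assumption, and the model and dummy calibration data are placeholders.

```python
# Hypothetical sketch: quantize a Keras model and get a Keras model back.
import tensorflow as tf
from neural_compressor import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

keras_model = tf.keras.applications.MobileNetV2(weights=None)
dataset = Datasets("tensorflow")["dummy"](shape=(16, 224, 224, 3))
calib_dataloader = DataLoader(framework="tensorflow", dataset=dataset)

conf = PostTrainingQuantConfig(backend="itex")  # assumption: Keras path uses ITEX
q_model = fit(model=keras_model, conf=conf, calib_dataloader=calib_dataloader)
q_model.save("./quantized_keras_model")         # quantized Keras model out
```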

**Bug Fixes**
- Fix quantization failure for ONNX models larger than 2 GB (commit [8d83cc](https://github.com/intel/neural-compressor/commit/8d83cc81edfa3712993225ebc7d1e438c07c601d))
- Fix BF16 being disabled by default (commit [83825a](https://github.com/intel/neural-compressor/commit/83825afa84c479fca312859156aff0912eb12feb))
- Fix out-of-memory issue when quantizing PyTorch DLRM (commit [ff1725](https://github.com/intel/neural-compressor/commit/ff1725771587a691a9841641cfc24a7fe47ba234))
- Fix ITEX resnetv2_50 tuning accuracy (commit [ae1e05](https://github.com/intel/neural-compressor/commit/ae1e05d94b1b6d82fe59a55be2343bf00f88d9ba))
- Fix bf16 ops error in QAT when torch version < 1.11 (commit [eda8cb](https://github.com/intel/neural-compressor/commit/eda8cb77902df1e8ac9c3fedaa5508334d22533e))
- Fix the key comparison in the Bayesian strategy (commit [1e9c12](https://github.com/intel/neural-compressor/commit/1e9c12bb53d65fbec33e7ada68677100e7617745))
- Fix static quantization failure for PyTorch T5 (commit [ee3ef0](https://github.com/intel/neural-compressor/commit/ee3ef0e5c28fb324d7fbd0ad4a4d99e3d3d1b84f))

**Examples**
- Add quantization examples of HuggingFace models with the ONNX Runtime backend (commit [f4aeb5](https://github.com/intel/neural-compressor/commit/f4aeb5de7f0be514b5d3f6b0447bac4371729324))
- Add large language model quantization example: GPT-J (commit [01899d](https://github.com/intel/neural-compressor/commit/01899d6d959635612ad24273c092f086dc5c7066))
- Add Distributed Distillation examples: MobileNetV2 (commit [d33ebe](https://github.com/intel/neural-compressor/commit/d33ebe6d8623963e8591e1debf0d30d6b103c838)) and CNN-2 (commit [ebe9e2](https://github.com/intel/neural-compressor/commit/ebe9e2af650d7310a6b28b713e54b711e5f89380))
- Update examples to the new INC v2.0 API (see the sketch after this list)
- Add Stable Diffusion example
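
Migrating to the v2.0 API means moving to the two unified entry points described under Highlights. A minimal sketch, assuming the 2.x names; `my_model`, `loader`, and `train()` are user-supplied placeholders.

```python
# Hypothetical sketch of the unified 2.0 entry points.
from neural_compressor import PostTrainingQuantConfig, QuantizationAwareTrainingConfig
from neural_compressor.quantization import fit
from neural_compressor.training import prepare_compression

# Post-training (static) quantization in one call
q_model = fit(model=my_model, conf=PostTrainingQuantConfig(), calib_dataloader=loader)

# During-training optimization (QAT shown) via callbacks
manager = prepare_compression(my_model, QuantizationAwareTrainingConfig())
manager.callbacks.on_train_begin()
train(manager.model)                # user-supplied training loop
manager.callbacks.on_train_end()
q_model = manager.model
```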

**Documentations**
- Update the accuracy of broad hardware (commit [71b056](https://github.com/intel/neural-compressor/commit/71b056b53015e05b7ab9289d31edaf4200dca628))
- Refine API helper and documents

**Validated Configurations**
- CentOS 8.4 & Ubuntu 20.04
- Python 3.7, 3.8, 3.9, 3.10
- TensorFlow 2.9.3, 2.10.1, 2.11.0, ITEX 1.0
- PyTorch/IPEX 1.11.0+cpu, 1.12.1+cpu, 1.13.0+cpu
- ONNX Runtime 1.11.0, 1.12.1, 1.13.1
- MXNet 1.7.0, 1.8.0, 1.9.1

1.14.2

Not secure

**Highlights**
- We add experimental quantization support for ITEX v1.0 on Intel CPU and GPU, the first time quantization is supported on Intel GPU. We also support hardware-neutral quantized ONNX models, validated on multiple devices (Intel CPU, NVIDIA GPU, AMD CPU, and ARM CPU) through ONNX Runtime.

**Features**
- Support quantization on PyTorch v1.13 (commit [97c946](https://github.com/intel/neural-compressor/commit/97c9466ce4e5c9acaa55a727ed90e3d38bdf8bbc))
- Add experimental quantization support for ITEX v1.0 on Intel CPU and GPU (commit [a2fcb2](https://github.com/intel/neural-compressor/commit/a2fcb29b676f5876729c03d9727ae8298329113f))
- Support GUI on native Windows (commit [fe9923](https://github.com/intel/neural-compressor/commit/fe9923d3f3cdc5437d99de8408baeea798634691))
- Support INT8 model load and save API with IPEX backend (commit [23c585](https://github.com/intel/neural-compressor/commit/23c585e2e3559eddf1d30dcf1d8163e10a328abe))
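
The INT8 save/load item above corresponds to a round trip along these lines, assuming the `neural_compressor.utils.pytorch.load` helper; the paths and `fp32_model` are placeholders.

```python
# Hypothetical sketch: save an INT8 model, then restore it later.
q_model.save("./saved_results")                   # after quantization completes

from neural_compressor.utils.pytorch import load
int8_model = load("./saved_results", fp32_model)  # rebuilds the INT8 model
```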

**Bug Fixes**
- Fix GPT-2 quantization failure with the ONNX Runtime backend (commit [aea121](https://github.com/intel/neural-compressor/commit/aea12197bbf7f7a0d3e87fb4596c8f1c1eb050e3))

**Examples**
- Support personalized Stable Diffusion with few-shot fine-tuning (commit [4247fd](https://github.com/intel/neural-compressor/commit/4247fd373e7f9f97bd4347b628ff48088e3222fc))
- Add ITEX examples: efficientnet_v2_b0, mobilenet_v1, mobilenet_v2, inception_resnet_v2, inception_v3, resnet101, resnet50, vgg16, xception, densenet121, etc. (commit [6ab557](https://github.com/intel/neural-compressor/commit/6ab5570987330e6b2adc0445c8fcac629ed215af))
- Validate quantized ONNX model on multiple devices (Intel CPU, NVIDIA GPU, AMD CPU, and ARM CPU) (commit [288340](https://github.com/intel/neural-compressor/commit/288340b80824153c0539c526286ec6efbd7b92a8))

**Validated Configurations**
- CentOS 8.4
- Python 3.8
- TensorFlow 2.10, ITEX 1.0
- PyTorch 1.12.0+cpu, 1.13.0+cpu, IPEX 1.12.0
- ONNX Runtime 1.12
- MXNet 1.9

1.14.1

Not secure

**Bug Fixes**
- Fix name matching issue of scale and zero-point in PyTorch (commit [fd7a53](https://github.com/intel/neural-compressor/commit/fd7a53f2a3ac904c3cf8dbb388e9de50b3ea6bc2))
- Fix incorrect output quantization mode of MatMul + Relu fusion in TensorFlow (commit [9b5293](https://github.com/intel/neural-compressor/commit/9b529388bf3a6589e2a25cd4c6391c11d63b2b93))

**Productivity**
- Support ONNX models with Python 3.10 (commit [2faf0b](https://github.com/intel/neural-compressor/commit/2faf0bc2be6f03f31bca1cc978f4feccea4abc5a))
- Use the TensorFlow create_file_writer API to support TensorBoard histograms (commit [f34852](https://github.com/intel/neural-compressor/commit/f348529429c32cd82b42970212f1283980876ac2))
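
The `create_file_writer` mechanism referenced above is the standard TensorFlow 2.x summary API; the sketch below shows the pattern with placeholder names and data.

```python
# Standard TF2 summary API used to emit histograms for TensorBoard.
import tensorflow as tf

writer = tf.summary.create_file_writer("./tb_logs")
with writer.as_default():
    tf.summary.histogram("conv1/weights", tf.random.normal([3, 3, 3, 64]), step=0)
writer.flush()
```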

**Examples**
- Add NAS notebooks (commit [5f0adf](https://github.com/intel/neural-compressor/commit/5f0adfee344abf60e6779d05bc77cebc27ab6aed))
- Add BERT-Mini 2:4, 1x4, and mixed-pattern examples with the new Pruning API (commit [a52074](https://github.com/intel/neural-compressor/commit/a520746a5eceb1159b481c10a0ebd670226a8c47))
- Add Keras-in/saved_model-out examples: resnet101, inception_v3, mobilenetv2, xception, resnetv2 (commit [fdd40e](https://github.com/intel/neural-compressor/commit/fdd40e13626f9d4bc826dc281cc920ec1ae3ce2f))

**Validated Configurations**
- Python 3.7, 3.8, 3.9, 3.10
- CentOS 8.3 & Ubuntu 18.04 & Win10
- TensorFlow 2.9, 2.10
- Intel TensorFlow 2.7, 2.8, 2.9
- PyTorch 1.10.0+cpu, 1.11.0+cpu, 1.12.0+cpu
- IPEX 1.10.0, 1.11.0, 1.12.0
- MXNet 1.7, 1.9
- ONNX Runtime 1.10, 1.11, 1.12

1.14

Not secure

**Highlights**
We are excited to announce the release of Intel® Neural Compressor v1.14! This release introduces a new Pruning API for PyTorch, allowing users to select better combinations of criteria, patterns, and schedulers to achieve better pruning accuracy (a usage sketch follows below). It also supports Keras input for TensorFlow quantization, and self-distilled quantization for better quantization accuracy.
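
A sketch of the criteria/pattern/scheduler combinations the new Pruning API exposes, written against the later 2.x `WeightPruningConfig` style for illustration, since the exact 1.14 entry points differed; op names, sparsity target, and step range are placeholders.

```python
# Illustrative pruning config in the 2.x style (not the exact 1.14 API).
from neural_compressor import WeightPruningConfig
from neural_compressor.training import prepare_compression

config = WeightPruningConfig(
    pruning_configs=[{
        "op_names": ["layer.*"],          # which layers to prune
        "pattern": "2:4",                 # structured N-in-M sparsity
        "pruning_type": "snip_momentum",  # pruning criterion
    }],
    target_sparsity=0.5,
    start_step=0,
    end_step=1000,                        # iterative schedule window
)
compression_manager = prepare_compression(model, config)  # model: placeholder
```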

**New Features**

- Pruning/Sparsity
- Support new structured sparse patterns N in M and NxM (commit [6cec70](https://github.com/intel/neural-compressor/commit/6cec70bb2c5fd3079e4d572e22a89b152a229941))
- Add pruning criteria snip and snip momentum (commit [6cec70](https://github.com/intel/neural-compressor/commit/6cec70bb2c5fd3079e4d572e22a89b152a229941))
- Add iterative pruning and decay types (commit [6cec70](https://github.com/intel/neural-compressor/commit/6cec70bb2c5fd3079e4d572e22a89b152a229941))
- Quantization
- Support different Keras formats (h5, keras, keras saved model) as input and output of TensorFlow saved model (commit [5a6f09](https://github.com/intel/neural-compressor/commit/5a6f092088e0deaa64601ab5aa88a572180cca8a))
- Enable Distillation for Quantization (commit [03f1f3](https://github.com/intel/neural-compressor/commit/03f1f3e049494192200c304e051a34d2ce654c18) & [e20c76](https://github.com/intel/neural-compressor/commit/e20c76a148b4aaf97492e297413795aacfdad987))
- GUI
- Add mixed precision (commit [26e902](https://github.com/intel/neural-compressor/commit/26e902d24e2993a43d8fb52373ab4841377d0efb))
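
For the distillation-for-quantization item above, a sketch in the style of the later training API; the loss settings and the teacher/student models are placeholders, and the exact 1.14 interface differed.

```python
# Illustrative distillation setup (2.x-style API, placeholders throughout).
from neural_compressor.config import DistillationConfig, KnowledgeDistillationLossConfig
from neural_compressor.training import prepare_compression

criterion = KnowledgeDistillationLossConfig(temperature=1.0, loss_types=["CE", "KL"])
conf = DistillationConfig(teacher_model=teacher_model, criterion=criterion)
manager = prepare_compression(student_model, conf)
manager.callbacks.on_train_begin()
# ... training loop; the distillation loss is mixed in via the callbacks ...
manager.callbacks.on_train_end()
```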

**Improvement**
- Enhance tuning for Quantization with IPEX 1.12 to remove additional Quant/DeQuant (commit [192100](https://github.com/intel/neural-compressor/commit/1921007997d281121bf36d5356629b471800b101))
- Add upstream and download APIs for the HuggingFace model hub, which handle configuration files, tokenizer files, and INT8 model weights in the transformers format (commit [46d945](https://github.com/intel/neural-compressor/commit/46d945348c3144e20ab3f54854a9f4e6566220c4))
- Align with Intel PyTorch extension new API (commit [cc368a](https://github.com/intel/neural-compressor/commit/cc368a8f7433d98fedf699dfcde98b9b6ffe6cc7))
- Add loading from YAML and .pt files for compatibility with the older PyTorch model saving format (commit [a28705](https://github.com/intel/neural-compressor/commit/a28705c09f7be415fdd348a56cc1a300f9159a44))
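
The hub round trip described above presumably looks like the following; the module path, function names, and the model id are our assumptions based on INC's HuggingFace utilities and should be verified against the release.

```python
# Assumed INC HuggingFace utilities; names and model id are not confirmed here.
from neural_compressor.utils.load_huggingface import (
    OptimizedModel,
    save_for_huggingface_upstream,
)

# Download: fetch config, tokenizer files, and INT8 weights from the hub
model = OptimizedModel.from_pretrained("Intel/bert-base-uncased-mrpc-int8-static")

# Upstream: write a quantized model plus tokenizer into an upload-ready folder
save_for_huggingface_upstream(q_model, tokenizer, "./upload_dir")
```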

**Bug Fixes**
- Quantization
- Fix data type of ONNX Runtime quantization from fp64 to fp32 (commit [cb7b48](https://github.com/intel/neural-compressor/commit/cb7b4859bf3c9c6b6ca6d4140c4d896d97364e74))
- Fix MXNet config issue with default config (commit [b75ff2](https://github.com/intel/neural-compressor/commit/b75ff270979f2612d82b509dbbb186dcc16e508c))
- Export
- Fix export_to_onnx API (commit [158c7f](https://github.com/intel/neural-compressor/commit/158c7f41f40c7b18ef0eb9f295e9f82b57491ebd))

**Productivity**
- Support TensorFlow 2.10.0 (commit [d6b6c9](https://github.com/intel/neural-compressor/commit/d6b6c9d2b59403fd40476361c0b1aa9f345bcdf8) & [8130e7](https://github.com/intel/neural-compressor/commit/8130e7fcdad97e6a098d59538316449b7a125d8e))
- Support ONNX Runtime 1.12 (commit [498ac4](https://github.com/intel/neural-compressor/commit/498ac48c67db61105e5c83322b2b737c7e7b3760))
- Export PyTorch QAT models to ONNX (commit [029a63](https://github.com/intel/neural-compressor/commit/029a6325748210e102a566603ad7220a0fc70eea))
- Add TensorFlow and PyTorch container TPP file (commit [d245b5](https://github.com/intel/neural-compressor/commit/d245b51e369f51a0706d78803bc64089d03655a4))

**Examples**
- Add examples of downloading from the HuggingFace model hub and upstreaming models to the hub (commit [46d945](https://github.com/intel/neural-compressor/commit/46d945348c3144e20ab3f54854a9f4e6566220c4))
- Add notebooks for Neural Coder (commit [105db7](https://github.com/intel/neural-compressor/commit/105db7b1c141ef78ac98e83f9c42d37b9b3d6cce))
- Add 2 IPEX examples: bert_large (squad), distilbert_base (squad) (commit [192100](https://github.com/intel/neural-compressor/commit/1921007997d281121bf36d5356629b471800b101))
- Add 2 DDP prune-once-for-all examples: RoBERTa-base and BERT-base (commit [26a476](https://github.com/intel/neural-compressor/commit/26a47627895072d7d7bc1ecfa2537cdcf3917e10))

**Validated Configurations**
- Python 3.7, 3.8, 3.9, 3.10
- CentOS 8.3 & Ubuntu 18.04 & Win10
- TensorFlow 2.9, 2.10
- Intel TensorFlow 2.7, 2.8, 2.9
- PyTorch 1.10.0+cpu, 1.11.0+cpu, 1.12.0+cpu
- IPEX 1.10.0, 1.11.0, 1.12.0
- MXNet 1.7, 1.9
- ONNX Runtime 1.10, 1.11, 1.12
