Optimum-intel

Latest version: v1.21.0

1.9.2

* Fix INC distillation to be compatible with `neural-compressor` v2.2.0 breaking changes by echarlaix in https://github.com/huggingface/optimum-intel/pull/338

1.9.1

* Fix inference for OpenVINO export for causal language models by echarlaix in https://github.com/huggingface/optimum-intel/pull/351

1.9.0

OpenVINO and NNCF

* Ensure compatibility for OpenVINO `v2023.0` by jiwaszki in https://github.com/huggingface/optimum-intel/pull/265
* Add Stable Diffusion quantization example by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/294 https://github.com/huggingface/optimum-intel/pull/304 https://github.com/huggingface/optimum-intel/pull/326
* Enable decoder quantized models export to leverage cache by echarlaix in https://github.com/huggingface/optimum-intel/pull/303
* Set height and width during inference for static Stable Diffusion models by echarlaix in https://github.com/huggingface/optimum-intel/pull/308
* Set batch size to 1 by default for Wav2Vec2 for compatibility with NNCF `v2.5.0` by ljaljushkin in https://github.com/huggingface/optimum-intel/pull/312
* Ensure compatibility for NNCF `v2.5` by ljaljushkin in https://github.com/huggingface/optimum-intel/pull/314
* Fix OVModel for BLOOM architecture by echarlaix in https://github.com/huggingface/optimum-intel/pull/340
* Add SD OV model height and width attribute and fix export for `torch>=v2.0.0` by eaidova in https://github.com/huggingface/optimum-intel/pull/342

Intel Neural Compressor
* Add `TSModelForCausalLM` to enable TorchScript export, loading and inference for causal language models by echarlaix in https://github.com/huggingface/optimum-intel/pull/283
* Remove INC deprecated classes by echarlaix in https://github.com/huggingface/optimum-intel/pull/293
* Enable IPEX model inference for text generation task by jiqing-feng in https://github.com/huggingface/optimum-intel/pull/227 https://github.com/huggingface/optimum-intel/pull/300
* Add `INCStableDiffusionPipeline` to enable INC quantized Stable Diffusion model loading by echarlaix in https://github.com/huggingface/optimum-intel/pull/305 (example after this list)
* Enable providing a quantization function instead of a calibration dataset during INC static post-training quantization by PenghuiCheng in https://github.com/huggingface/optimum-intel/pull/309
* Fix `INCSeq2SeqTrainer` evaluation step by AbhishekSalian in https://github.com/huggingface/optimum-intel/pull/335
* Fix `INCSeq2SeqTrainer` padding step by echarlaix in https://github.com/huggingface/optimum-intel/pull/336
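
As a quick illustration of the new `INCStableDiffusionPipeline` loading path, here is a minimal sketch assuming a Stable Diffusion pipeline previously quantized with INC and saved locally (the path below is a placeholder):

```python
from optimum.intel import INCStableDiffusionPipeline

# Load an INC-quantized Stable Diffusion pipeline from a local directory
# (placeholder path; assumes the pipeline was quantized and saved beforehand).
pipeline = INCStableDiffusionPipeline.from_pretrained("path/to/quantized-stable-diffusion")
image = pipeline(prompt="sailing ship in a storm").images[0]
image.save("ship.png")
```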

**Full Changelog**: https://github.com/huggingface/optimum-intel/commits/v1.9.0

1.8.1

* Fix OpenVINO Trainer for transformers >= v4.29.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/328

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.8.0...v1.8.1

1.8.0

Optimum INC CLI
Integration of Intel Neural Compressor dynamic quantization into the Optimum command line interface. Example commands:

```bash
optimum-cli inc --help
optimum-cli inc quantize --help
optimum-cli inc quantize --model distilbert-base-cased-distilled-squad --output int8_distilbert/
```

* Add Optimum INC CLI to apply dynamic quantization by echarlaix in https://github.com/huggingface/optimum-intel/pull/280
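
The quantized model can then be reloaded with the corresponding INC model class. A minimal sketch, assuming the CLI command above saved both the quantized model and its tokenizer to `int8_distilbert/`:

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel import INCModelForQuestionAnswering

# Load the dynamically quantized DistilBERT produced by the CLI command above
model = INCModelForQuestionAnswering.from_pretrained("int8_distilbert/")
tokenizer = AutoTokenizer.from_pretrained("int8_distilbert/")

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(question="What was quantized?", context="A DistilBERT model was quantized with the INC CLI."))
```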

Leverage past key values for OpenVINO decoder models

Enables using the pre-computed key/values to speed up inference. This is enabled by default when exporting the model.

```python
from optimum.intel import OVModelForCausalLM

model = OVModelForCausalLM.from_pretrained(model_id, export=True)
```

To disable it, `use_cache` can be set to `False` when loading the model:
```python
model = OVModelForCausalLM.from_pretrained(model_id, export=True, use_cache=False)
```

* Enable the possibility to use the pre-computed key / values for OpenVINO decoder models by echarlaix in https://github.com/huggingface/optimum-intel/pull/274
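
A generation call then reuses the cached key/values across decoding steps. A minimal sketch (the model id is chosen purely for illustration):

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "gpt2"  # example model id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id, export=True)  # use_cache=True by default

inputs = tokenizer("The cached key/values make decoding", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```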

INC config summarizing optimization details
* Add `INCConfig` by echarlaix in https://github.com/huggingface/optimum-intel/pull/263
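
A minimal sketch of reading the summary back, assuming the quantized model directory contains the serialized `INCConfig` (the path and attribute access below are illustrative):

```python
from optimum.intel import INCConfig

# Load the optimization summary saved alongside the quantized model
# (placeholder path; assumes the model was quantized with optimum-intel).
inc_config = INCConfig.from_pretrained("int8_distilbert/")
print(inc_config.quantization)  # quantization details, if quantization was applied
```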

Fixes

* Remove dynamic shapes restriction for GPU devices by helena-intel in https://github.com/huggingface/optimum-intel/pull/262
* Enable OpenVINO model caching for CPU devices by helena-intel in https://github.com/huggingface/optimum-intel/pull/281
* Fix the `.to()` method for causal language models by helena-intel in https://github.com/huggingface/optimum-intel/pull/284
* Fix PyTorch model saving for `transformers>=4.28.0` when optimized with `OVTrainer` by echarlaix in https://github.com/huggingface/optimum-intel/pull/285
* Update task names for ONNX and OpenVINO export for `optimum>=1.8.0` by echarlaix in https://github.com/huggingface/optimum-intel/pull/286

1.7.3

* Fix INC distillation to be compatible with `neural-compressor` v2.1 by echarlaix in https://github.com/huggingface/optimum-intel/pull/260
