Optimum

Latest version: v1.23.3

1.9.4

* Fix `OVDataLoader` for NNCF quantization aware training for `transformers` > v4.31.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/376

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.9.3...v1.9.4

1.9.3

* Improve performance of decoders by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/354
* Fix OpenVINO model integration compatibility for optimum > v1.9.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/365

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.9.2...v1.9.3

1.9.2

* Fix INC distillation to be compatible with `neural-compressor` v2.2.0 breaking changes by echarlaix in https://github.com/huggingface/optimum-intel/pull/338

1.9.1

* Fix inference for OpenVINO export for causal language models by echarlaix in https://github.com/huggingface/optimum-intel/pull/351

1.9.0

OpenVINO and NNCF

* Ensure compatibility for OpenVINO `v2023.0` by jiwaszki in https://github.com/huggingface/optimum-intel/pull/265
* Add Stable Diffusion quantization example by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/294 https://github.com/huggingface/optimum-intel/pull/304 https://github.com/huggingface/optimum-intel/pull/326
* Enable export of quantized decoder models leveraging the cache by echarlaix in https://github.com/huggingface/optimum-intel/pull/303
* Set height and width during inference for static Stable Diffusion models by echarlaix in https://github.com/huggingface/optimum-intel/pull/308
* Set batch size to 1 by default for Wav2Vec2 for compatibility with NNCF `v2.5.0` by ljaljushkin in https://github.com/huggingface/optimum-intel/pull/312
* Ensure compatibility for NNCF `v2.5` by ljaljushkin in https://github.com/huggingface/optimum-intel/pull/314
* Fix OVModel for BLOOM architecture by echarlaix in https://github.com/huggingface/optimum-intel/pull/340
* Add SD OV model height and width attribute and fix export for `torch>=v2.0.0` by eaidova in https://github.com/huggingface/optimum-intel/pull/342

Intel Neural Compressor

* Add `TSModelForCausalLM` to enable TorchScript export, loading and inference for causal lm models by echarlaix in https://github.com/huggingface/optimum-intel/pull/283
* Remove INC deprecated classes by echarlaix in https://github.com/huggingface/optimum-intel/pull/293
* Enable IPEX model inference for text generation task by jiqing-feng in https://github.com/huggingface/optimum-intel/pull/227 https://github.com/huggingface/optimum-intel/pull/300
* Add `INCStableDiffusionPipeline` to enable INC quantized Stable Diffusion model loading by echarlaix in https://github.com/huggingface/optimum-intel/pull/305
* Enable providing a quantization function instead of a calibration dataset during INC static post-training quantization by PenghuiCheng in https://github.com/huggingface/optimum-intel/pull/309
* Fix `INCSeq2SeqTrainer` evaluation step by AbhishekSalian in https://github.com/huggingface/optimum-intel/pull/335
* Fix `INCSeq2SeqTrainer` padding step by echarlaix in https://github.com/huggingface/optimum-intel/pull/336

**Full Changelog**: https://github.com/huggingface/optimum-intel/commits/v1.9.0

1.8.8

* Fix optimum model inference compatibility with `transformers>=v4.30.0` by echarlaix in https://github.com/huggingface/optimum/pull/1102
* Fix stable diffusion ONNX export following diffusers breaking change by fxmarty in https://github.com/huggingface/optimum/pull/1116


**Full Changelog**: https://github.com/huggingface/optimum/compare/v1.8.7...v1.8.8
