Optimum-intel


1.11.1

* Fix compatibility with `optimum` by echarlaix in https://github.com/huggingface/optimum-intel/commit/b4663b4d7e7139643623cc2d335d39b3c46a5a2c

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.11.0...v1.11.1

1.11.0

OpenVINO

* Fix SDXL model U-NET component static reshaping by eaidova in https://github.com/huggingface/optimum-intel/pull/390
* Allow changing pkv precision by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/393
* Removed pkv history from quantization statistics of decoders by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/394
* Add audio tasks for OpenVINO inference by helena-intel in https://github.com/huggingface/optimum-intel/pull/396
* Do not download ONNX model in SD pipeline if not needed by eaidova in https://github.com/huggingface/optimum-intel/pull/402
* Enable loading of Text Inversion at runtime for OpenVINO SD pipelines by sammysun0711 in https://github.com/huggingface/optimum-intel/pull/400
* Enable Timm models OpenVINO export and inference by sawradip in https://github.com/huggingface/optimum-intel/pull/404
* Fix OpenVINO Timm models loading by echarlaix in https://github.com/huggingface/optimum-intel/pull/413
* Add VAE image processor by echarlaix in https://github.com/huggingface/optimum-intel/pull/421
* Enable MPT OpenVINO export and inference by echarlaix in https://github.com/huggingface/optimum-intel/pull/425

Neural Compressor
* Fixed ONNX export for `neural-compressor>=2.2.2` by PenghuiCheng in https://github.com/huggingface/optimum-intel/pull/409
* Enable ONNX export for INC PTQ model by echarlaix in https://github.com/huggingface/optimum-intel/pull/373
* Fix INC CLI by echarlaix in https://github.com/huggingface/optimum-intel/pull/426


**Full Changelog**: https://github.com/huggingface/optimum-intel/commits/v1.11.0

1.10.1

* Set minimum `optimum` version by echarlaix in https://github.com/huggingface/optimum-intel/pull/382
* Fix compilation step so that it can be performed before inference by echarlaix in https://github.com/huggingface/optimum-intel/pull/384

1.10.0

Stable Diffusion XL

Enable SD XL OpenVINO export and inference for **text-to-image** and **image-to-image** tasks by echarlaix in https://github.com/huggingface/optimum-intel/pull/377

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-0.9"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)

prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("openvino-sd-xl-base-0.9")
```


More examples can be found in the [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).


**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.9.0...v1.10.0

1.9.4

* Fix `OVDataLoader` for NNCF quantization aware training for `transformers` > v4.31.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/376

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.9.3...v1.9.4

1.9.3

* Improved performance of decoders by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/354
* Fix OpenVINO model integration compatibility for `optimum` > v1.9.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/365

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.9.2...v1.9.3
