Optimum

Latest version: v1.23.3


1.11.0

OpenVINO

* Fix SDXL model U-NET component static reshaping by eaidova in https://github.com/huggingface/optimum-intel/pull/390
* Allow changing pkv precision by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/393
* Removed pkv history from quantization statistics of decoders by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/394
* Add audio tasks for OpenVINO inference by helena-intel in https://github.com/huggingface/optimum-intel/pull/396
* Do not download ONNX model in SD pipeline if not needed by eaidova in https://github.com/huggingface/optimum-intel/pull/402
* Enable loading of Text Inversion at runtime for OpenVINO SD pipelines by sammysun0711 in https://github.com/huggingface/optimum-intel/pull/400
* Enable Timm models OpenVINO export and inference by sawradip in https://github.com/huggingface/optimum-intel/pull/404
* Fix OpenVINO Timm models loading by echarlaix in https://github.com/huggingface/optimum-intel/pull/413
* Add VAE image processor by echarlaix in https://github.com/huggingface/optimum-intel/pull/421
* Enable MPT OpenVINO export and inference by echarlaix in https://github.com/huggingface/optimum-intel/pull/425

Neural Compressor
* Fixed ONNX export for `neural-compressor>=2.2.2` by PenghuiCheng in https://github.com/huggingface/optimum-intel/pull/409
* Enable ONNX export for INC PTQ model by echarlaix in https://github.com/huggingface/optimum-intel/pull/373
* Fix INC CLI by echarlaix in https://github.com/huggingface/optimum-intel/pull/426


**Full Changelog**: https://github.com/huggingface/optimum-intel/commits/v1.11.0

1.10.4

Fix Llama memory issue with DeepSpeed ZeRO-3

- Fix Llama initialization 712

**Full Changelog**: https://github.com/huggingface/optimum-habana/compare/v1.10.2...v1.10.4

1.10.2

1.10.1

* Set minimum `optimum` version by echarlaix in 382
* Fix compilation step so that it can be performed before inference by echarlaix in 384

1.10

This release is fully compatible with [SynapseAI v1.10.0](https://docs.habana.ai/en/v1.10.0/).

- Upgrade to SynapseAI v1.10.0 255 regisss


HPU graphs for training

You can now use HPU graphs for training your models.

- Improve performance and scalability of BERT FT training 200 mlapinski-habana

Check out the [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/accelerate_training#hpu-graphs) for more information.
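
As a sketch, an example training run with HPU graphs enabled might look like the following (the script name, model, and other flags are placeholders taken from the style of the example scripts; the exact flag name should be checked against the documentation linked above):

```shell
# Hypothetical invocation of one of the example training scripts with
# HPU graphs enabled for training (script and dataset are placeholders).
python run_glue.py \
  --model_name_or_path bert-large-uncased-whole-word-masking \
  --task_name mrpc \
  --do_train \
  --use_habana \
  --use_lazy_mode \
  --use_hpu_graphs_for_training \
  --output_dir ./output
```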


Various model optimizations

- Update BLOOM modeling for SynapseAI 1.10 277
- Optimize conv1d forward 231 ZhaiFeiyue
- Add static key-value cache for OPT, GPT-J, GPT-NeoX 246 248 249 ZhaiFeiyue
- Optimizations for running FLAN T5 with DeepSpeed ZeRO-3 257 libinta


Asynchronous data copy

You can now enable asynchronous data copy between the host and devices during training using `--non_blocking_data_copy`.

- Enable asynchronous data copy to get a better performance 211 jychen-habana

Check out the [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/accelerate_training#nonblocking-data-copy) for more information.
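
A minimal sketch of a run with asynchronous data copy enabled, using the `--non_blocking_data_copy` flag mentioned above (the script name and other arguments are placeholders):

```shell
# Hypothetical training invocation with asynchronous host-to-device
# data copy enabled (script and model are placeholders).
python run_glue.py \
  --model_name_or_path bert-base-uncased \
  --task_name mrpc \
  --do_train \
  --use_habana \
  --use_lazy_mode \
  --non_blocking_data_copy \
  --output_dir ./output
```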


Profiling

It is now possible to profile your training runs with `GaudiTrainer`. You will need to pass [`--profiling_steps N`](https://huggingface.co/docs/optimum/habana/package_reference/trainer#optimum.habana.GaudiTrainingArguments.profiling_steps) and [`--profiling_warmup_steps K`](https://huggingface.co/docs/optimum/habana/package_reference/trainer#optimum.habana.GaudiTrainingArguments.profiling_warmup_steps).

- Enable profiling 250 ZhaiFeiyue
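
For example, a run that skips the first 2 steps and then profiles the next 5 might be launched as follows (the script and other arguments are placeholders; only the two profiling flags come from the release notes above):

```shell
# Hypothetical invocation: warm up for 2 steps, then profile 5 steps.
python run_glue.py \
  --model_name_or_path bert-base-uncased \
  --task_name mrpc \
  --do_train \
  --use_habana \
  --use_lazy_mode \
  --profiling_warmup_steps 2 \
  --profiling_steps 5 \
  --output_dir ./output
```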


Adjusted throughput calculation

You can now let the `GaudiTrainer` compute the real throughput of your run (i.e. excluding the time spent logging, evaluating and saving the model) with `--adjust_throughput`.

- Added an option to remove save checkpoint time from throughput calculation 237 libinta
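
The adjustment can be sketched as follows. This is a simplified illustration of the idea, not the actual `GaudiTrainer` implementation; the function and variable names are invented:

```python
def adjusted_throughput(num_samples: int, total_time_s: float, checkpoint_time_s: float) -> float:
    """Samples per second, excluding time spent saving checkpoints."""
    return num_samples / (total_time_s - checkpoint_time_s)

# E.g. 10,000 samples in 120 s, of which 20 s were spent saving checkpoints:
print(adjusted_throughput(10_000, 120.0, 20.0))  # 100.0 samples/s
```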


Check SynapseAI version at import

A check is performed when importing `optimum.habana` to let you know if you are running the version of SynapseAI for which Optimum Habana has been tested.

- Check Synapse version when `optimum.habana` is used 225 regisss
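
The check can be sketched like this. This is an invented illustration of the mechanism, not the actual optimum-habana code; the helper name and tested version are assumptions:

```python
from typing import Optional

# Version of SynapseAI this release was tested against (illustrative).
TESTED_SYNAPSE_VERSION = (1, 10, 0)

def check_synapse_version(installed: str) -> Optional[str]:
    """Return a warning message if the installed SynapseAI version differs
    from the one Optimum Habana was tested against, else None."""
    parsed = tuple(int(part) for part in installed.split(".")[:3])
    if parsed != TESTED_SYNAPSE_VERSION:
        tested = ".".join(map(str, TESTED_SYNAPSE_VERSION))
        return f"optimum-habana was tested with SynapseAI {tested}, but {installed} was found."
    return None

print(check_synapse_version("1.10.0"))  # None (versions match)
print(check_synapse_version("1.9.0"))   # warning message
```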


Enhanced examples

Several examples have been added or improved. You can find them [here](https://github.com/huggingface/optimum-habana/tree/main/examples).

- the text-generation example now supports sampling and beam search decoding, and full bf16 generation 218 229 238 251 258 271
- the contrastive image-text example now supports HPU-accelerated data loading 256
- new Seq2Seq QA example 221
- new protein folding example with ESMFold 235 276

1.10.0

Stable Diffusion XL

Enable SD XL OpenVINO export and inference for **text-to-image** and **image-to-image** tasks by echarlaix in https://github.com/huggingface/optimum-intel/pull/377

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-0.9"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)

prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("openvino-sd-xl-base-0.9")
```


More examples are available in the [documentation](https://huggingface.co/docs/optimum/main/en/intel/inference#stable-diffusion-xl).


**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.9.0...v1.10.0
