Optimum Habana

Latest version: v1.15.0


1.7.0

1.6.0

This release is fully compatible with SynapseAI 1.6.0.
- Update to SynapseAI 1.6.0 91

*It is recommended to use SynapseAI 1.6.0 for optimal performance.*


Documentation

Optimum Habana now has dedicated documentation. You can find it [here](https://huggingface.co/docs/optimum/habana_index).

It shows how to quickly make a Transformers-based script work with the library. It also contains guides explaining how to do distributed training, how to use DeepSpeed, and how to make the most of HPUs to accelerate training.


Masked Language Modeling

[A new example script](https://github.com/huggingface/optimum-habana/blob/main/examples/language-modeling/run_mlm.py) has been added to perform masked language modeling. This is especially useful if you want to pretrain models such as BERT or RoBERTa.
- Add run_mlm.py in language-modeling examples 83
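Following the usual conventions of Transformers example scripts plus the Habana-specific flags used throughout this library, a run might look like the sketch below. The model, dataset, and Gaudi configuration names are illustrative assumptions, not taken from the release notes; check the example's README for the exact supported options.

```shell
# Hypothetical invocation: flag set mirrors standard Transformers examples
# plus optimum-habana's --use_habana/--use_lazy_mode/--gaudi_config_name.
python run_mlm.py \
  --model_name_or_path roberta-base \
  --dataset_name wikitext \
  --dataset_config_name wikitext-2-raw-v1 \
  --do_train \
  --do_eval \
  --output_dir /tmp/mlm_output \
  --use_habana \
  --use_lazy_mode \
  --gaudi_config_name Habana/roberta-base
```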

1.5.0

BLOOM(Z)

BLOOM is introduced in this release with HPU-optimized tweaks to perform fast inference using DeepSpeed. A text-generation example is provided [here](https://github.com/huggingface/optimum-habana/tree/main/examples/text-generation) so that you can easily try it.

- Add text-generation example for BLOOM/BLOOMZ with DeepSpeed-inference 190 regisss

Check out [the blog post](https://huggingface.co/blog/habana-gaudi-2-bloom) we recently released for a benchmark comparing BLOOMZ performance on Gaudi2 and A100.
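Based on the layout of the linked text-generation example, a DeepSpeed-inference launch could be sketched as follows. The launcher path, script name, model checkpoint, and flags here are assumptions for illustration; the example's README is authoritative.

```shell
# Hypothetical sketch: spawns 8 HPU workers with DeepSpeed-inference
# to shard the model across devices. Requires Gaudi hardware.
python ../gaudi_spawn.py --use_deepspeed --world_size 8 run_generation.py \
  --model_name_or_path bigscience/bloom-7b1 \
  --batch_size 1 \
  --max_new_tokens 100
```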

1.4.0

Multi-node training

This release adds support for multi-node training through DeepSpeed. This enables you to scale out to up to thousands of nodes to speed up your training even more!

- Add support for multi-node training 116

Check out the [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/multi_node_training) to get started.
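Multi-node DeepSpeed launches conventionally rely on a hostfile that lists each machine and its device count. A minimal sketch for two 8-HPU nodes (the hostnames are placeholders) could be:

```
# hostfile: one line per node, "slots" = devices per node
node-1 slots=8
node-2 slots=8
```

The linked documentation covers the SSH setup and environment variables the launcher also needs.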


Inference through HPU graphs

You can now perform inference faster on Gaudi with [HPU graphs](https://docs.habana.ai/en/v1.8.0/PyTorch/Inference_on_PyTorch/Inference_Using_HPU_Graphs.html).

- Add support for inference through HPU graphs in GaudiTrainer 151

HPU graphs are currently only supported for single-device runs. Check out the [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/accelerate_inference) for more information.
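As a rough sketch of what enabling this looks like in code: the snippet below assumes a `GaudiTrainingArguments` flag for HPU graphs (the exact flag name, and the `model`/`eval_dataset` objects, are assumptions for illustration, not confirmed by the release notes). It requires Gaudi hardware to run.

```python
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

# Hypothetical sketch: flag name is an assumption; model and eval_dataset
# come from your own setup (any Transformers model + dataset pair).
training_args = GaudiTrainingArguments(
    output_dir="/tmp/eval_output",
    use_habana=True,
    use_lazy_mode=True,
    use_hpu_graphs=True,  # wrap forward passes in an HPU graph (single device only)
    do_eval=True,
)
trainer = GaudiTrainer(model=model, args=training_args, eval_dataset=eval_dataset)
metrics = trainer.evaluate()
```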

1.3.0

Stable Diffusion

This release adds a new interface for the :hugs: Diffusers library that enables support for the Stable Diffusion pipeline for inference. You can now generate images from text on Gaudi while relying on the user-friendliness of :hugs: Diffusers.

- Add support for Stable Diffusion 131

Check out the [documentation](https://huggingface.co/docs/optimum/habana/usage_guides/stable_diffusion) and [this example](https://github.com/huggingface/optimum-habana/tree/main/examples/stable-diffusion) for more information.
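In outline, using the new interface looks like the sketch below. The model ID and `gaudi_config` value are illustrative assumptions; the linked example documents the options this release actually supports, and running it requires Gaudi hardware.

```python
from optimum.habana.diffusers import GaudiStableDiffusionPipeline

# Hypothetical sketch: checkpoint and gaudi_config are placeholders.
pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    use_habana=True,
    gaudi_config="Habana/stable-diffusion",
)
outputs = pipeline(
    prompt="a photo of an astronaut riding a horse on the moon",
    num_images_per_prompt=4,
)
outputs.images[0].save("astronaut.png")
```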


Wav2Vec2

After text and image models, a third modality is now supported with the addition of Wav2Vec2.

- Add support for Wav2Vec2 120

Check out the [audio classification](https://github.com/huggingface/optimum-habana/tree/main/examples/audio-classification) and [speech recognition](https://github.com/huggingface/optimum-habana/tree/main/examples/speech-recognition) examples to see how to use it.

1.2.1

DeepSpeed

This release brings support for DeepSpeed. It is now possible to train bigger models on Gaudi with Optimum Habana!
- Add support for DeepSpeed 93

Check the documentation [here](https://huggingface.co/docs/optimum/habana_deepspeed) to learn how to use it.
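Training with DeepSpeed is driven by a JSON configuration file. A minimal ZeRO stage-2 sketch, assuming only standard DeepSpeed configuration keys (none of these values are prescribed by the release notes), could look like:

```json
{
  "train_batch_size": "auto",
  "gradient_accumulation_steps": "auto",
  "bf16": { "enabled": true },
  "zero_optimization": {
    "stage": 2,
    "overlap_comm": false,
    "contiguous_gradients": false
  }
}
```

ZeRO stage 2 partitions optimizer states and gradients across devices, which is what lets bigger models fit on Gaudi.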


Computer Vision Models

Two computer-vision models have been validated for performing image classification in both single- and multi-card configurations:
- ViT 80
- Swin

You can see how to use them [in this example](https://github.com/huggingface/optimum-habana/tree/main/examples/image-classification).
