Optimum

Latest version: v1.23.3


1.16.1

* Bump transformers version by echarlaix in https://github.com/huggingface/optimum-intel/pull/682

1.16

- Upgrade to SynapseAI v1.16 by regisss in #1043

1.16.0

* Add hybrid quantization for Stable Diffusion pipelines by l-bat in #584

```python
from optimum.intel import OVStableDiffusionPipeline, OVWeightQuantizationConfig

model_id = "echarlaix/stable-diffusion-v1-5-openvino"
# Passing a dataset triggers hybrid quantization for diffusion pipelines:
# 8-bit weight compression plus activation quantization calibrated on
# samples from the given dataset
quantization_config = OVWeightQuantizationConfig(bits=8, dataset="conceptual_captions")
model = OVStableDiffusionPipeline.from_pretrained(model_id, quantization_config=quantization_config)
```

* Add OpenVINO export configs by eaidova in #568

OpenVINO export is now enabled for the following architectures: Mixtral, ChatGLM, Baichuan, MiniCPM, Qwen, Qwen2, StableLM.
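
A minimal sketch of how one of these newly enabled architectures can be exported and run; the Qwen2-family checkpoint below is an illustrative choice, not one named in the release notes:

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

# Illustrative checkpoint for one of the newly enabled architectures (Qwen2 family)
model_id = "Qwen/Qwen1.5-0.5B"

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The Intel OpenVINO toolkit", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```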


* Add support for export and inference of StarCoder2 models by eaidova in https://github.com/huggingface/optimum-intel/pull/619
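
For StarCoder2, the same pattern applies; a rough sketch of exporting the model once and reloading the saved OpenVINO files for inference (the checkpoint and output directory are illustrative):

```python
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "bigcode/starcoder2-3b"  # illustrative StarCoder2 checkpoint

# Export once to OpenVINO IR and save it, so later runs can skip the conversion
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
model.save_pretrained("starcoder2_ov")

# Reload the exported model and run inference
model = OVModelForCausalLM.from_pretrained("starcoder2_ov")
tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```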

1.15.2

* Fix compatibility for `transformers>=4.38.0` by echarlaix in https://github.com/huggingface/optimum-intel/pull/570

1.15.1

* Relax dependency on accelerate and datasets in OVQuantizer by eaidova in https://github.com/huggingface/optimum-intel/pull/547

* Disable compilation before applying 4-bit weight compression by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/569 (see the sketch after this list)

* Update Transformers dependency requirements by echarlaix in https://github.com/huggingface/optimum-intel/pull/571
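
The 4-bit weight compression mentioned above uses the same `OVWeightQuantizationConfig` shown in the 1.16.0 example; a minimal data-free sketch, with an illustrative checkpoint and output path:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

model_id = "HuggingFaceH4/zephyr-7b-beta"  # illustrative checkpoint

# Data-free 4-bit weight compression applied during the OpenVINO export
quantization_config = OVWeightQuantizationConfig(bits=4)
model = OVModelForCausalLM.from_pretrained(
    model_id,
    export=True,
    quantization_config=quantization_config,
)
model.save_pretrained("zephyr_ov_int4")
```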

1.15

The codebase is fully validated for the latest version of Habana SDK, SynapseAI v1.15.0.

- Upgrade to SynapseAI v1.15.0 by regisss in #831


SDXL fine-tuning

- SDXL fine-tuning by dsocek in #667
- MediaPipe SDXL by ssarkar2 in #787


Whisper

- Support speech recognition with Whisper models and seq2seq by emascarenhas in #704


Phi

- Enable Phi series models by lkk12014402 in #732


ControlNet

- ControlNet training by vidyasiv in #650
