Optimum-intel

Latest version: v1.21.0


1.16.1

* Bump transformers version by echarlaix in https://github.com/huggingface/optimum-intel/pull/682

1.16.0

* Add hybrid quantization for Stable Diffusion pipelines by l-bat in https://github.com/huggingface/optimum-intel/pull/584

```python
from optimum.intel import OVStableDiffusionPipeline, OVWeightQuantizationConfig

model_id = "echarlaix/stable-diffusion-v1-5-openvino"
quantization_config = OVWeightQuantizationConfig(bits=8, dataset="conceptual_captions")
model = OVStableDiffusionPipeline.from_pretrained(model_id, quantization_config=quantization_config)
```


* Add OpenVINO export configs by eaidova in https://github.com/huggingface/optimum-intel/pull/568

OpenVINO export is now enabled for the following architectures: Mixtral, ChatGLM, Baichuan, MiniCPM, Qwen, Qwen2, StableLM


* Add support for export and inference for StarCoder2 models by eaidova in https://github.com/huggingface/optimum-intel/pull/619

1.15.2

* Fix compatibility for `transformers>=4.38.0` by echarlaix in https://github.com/huggingface/optimum-intel/pull/570

1.15.1

* Relax dependency on accelerate and datasets in OVQuantizer by eaidova in https://github.com/huggingface/optimum-intel/pull/547

* Disable compilation before applying 4-bit weight compression by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/569

* Update Transformers dependency requirements by echarlaix in https://github.com/huggingface/optimum-intel/pull/571

1.15.0

* Add OpenVINO Tokenizers by apaniukov in https://github.com/huggingface/optimum-intel/pull/513

* Introduce the OpenVINO quantization configuration by AlexKoff88 in https://github.com/huggingface/optimum-intel/pull/538


* Enable OpenVINO export of loaded models by echarlaix in https://github.com/huggingface/optimum-intel/pull/557

```python
from diffusers import StableDiffusionPipeline
from optimum.exporters.openvino import export_from_model

model_id = "runwayml/stable-diffusion-v1-5"
model = StableDiffusionPipeline.from_pretrained(model_id)

export_from_model(model, output="ov_model", task="stable-diffusion")
```

1.14.0

IPEX models


```python
from optimum.intel import IPEXModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "Intel/q8_starcoder"
model = IPEXModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
results = pipe("He's a dreadful magician and")
```



* Add IPEX models by echarlaix in 516 / 534 / 536

* Add IPEX models by ofirzaf in 542 / 543 / 544

Fixes
* Fix position_ids initialization for first inference of stateful models by eaidova in https://github.com/huggingface/optimum-intel/pull/532
* Relax requirements to have registered normalized config for decoder models by eaidova in https://github.com/huggingface/optimum-intel/pull/537
