Add hybrid quantization for Stable Diffusion pipelines by l-bat in https://github.com/huggingface/optimum-intel/pull/584
```python
from optimum.intel import OVStableDiffusionPipeline, OVWeightQuantizationConfig

model_id = "echarlaix/stable-diffusion-v1-5-openvino"
quantization_config = OVWeightQuantizationConfig(bits=8, dataset="conceptual_captions")
model = OVStableDiffusionPipeline.from_pretrained(model_id, quantization_config=quantization_config)
```
Add OpenVINO export configs by eaidova in https://github.com/huggingface/optimum-intel/pull/568
OpenVINO export is now enabled for the following architectures: Mixtral, ChatGLM, Baichuan, MiniCPM, Qwen, Qwen2, StableLM
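As a sketch, any of the newly supported architectures can be exported with the `optimum-cli` export command (the model ID and output directory below are illustrative, not from the release notes):

```shell
# Export a StableLM checkpoint to OpenVINO IR (model ID is an example)
optimum-cli export openvino --model stabilityai/stablelm-2-zephyr-1_6b stablelm_ov/
```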
Add support for export and inference for StarCoder2 models by eaidova in https://github.com/huggingface/optimum-intel/pull/619