Optimum-intel

Latest version: v1.21.0


Transformers 4.45

* Transformers v4.45 support by echarlaix in https://github.com/huggingface/optimum-intel/pull/902

Subfolder

* Remove the restriction for the model's config to be in the model's subfolder by tomaarsen in https://github.com/huggingface/optimum-intel/pull/933

New Contributors
* jane-intel made their first contribution in https://github.com/huggingface/optimum-intel/pull/696
* andreyanufr made their first contribution in https://github.com/huggingface/optimum-intel/pull/903
* MaximProshin made their first contribution in https://github.com/huggingface/optimum-intel/pull/905
* tomaarsen made their first contribution in https://github.com/huggingface/optimum-intel/pull/931

1.21.0

What's Changed

OpenVINO

Diffusers
* SD3 and Flux pipelines support by eaidova in https://github.com/huggingface/optimum-intel/pull/916

VLMs Modeling
* MiniCPMv support by eaidova in https://github.com/huggingface/optimum-intel/pull/972
* NanoLlava support by eaidova in https://github.com/huggingface/optimum-intel/pull/969
* Phi3v support by eaidova in https://github.com/huggingface/optimum-intel/pull/977

NNCF
* Quantization support for CausalVisualLMs by nikita-savelyevv in https://github.com/huggingface/optimum-intel/pull/951
* NF4 data type support for OV weight compression by l-bat in https://github.com/huggingface/optimum-intel/pull/988
* NNCF 2.14 new features support by nikita-savelyevv in https://github.com/huggingface/optimum-intel/pull/997
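NF4 is a 4-bit format that snaps each (group-scaled) weight to one of 16 non-uniform levels. As a rough illustration of the idea, here is a toy nearest-level round-trip; the level values approximate the published NF4 grid, and the real compression is of course performed by NNCF, not by code like this:

```python
# Toy NF4-style quantization (illustrative only; actual compression is done
# by NNCF inside optimum-intel). Levels approximate the published NF4 grid.
NF4_LEVELS = [
    -1.0, -0.6962, -0.5251, -0.3949, -0.2844, -0.1848, -0.0911, 0.0,
    0.0796, 0.1609, 0.2461, 0.3379, 0.4407, 0.5626, 0.7230, 1.0,
]

def quantize_nf4(value: float) -> int:
    """Index of the nearest NF4 level for a weight scaled to [-1, 1]."""
    return min(range(len(NF4_LEVELS)), key=lambda i: abs(NF4_LEVELS[i] - value))

def dequantize_nf4(index: int) -> float:
    return NF4_LEVELS[index]

# Weights are first scaled to [-1, 1] (per group), then snapped to a level.
weights = [0.03, -0.5, 0.98, -0.19]
restored = [dequantize_nf4(quantize_nf4(w)) for w in weights]
```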

IPEX
* Unified XPU/CPU modeling with custom PagedAttention cache for LLMs by sywangyi in https://github.com/huggingface/optimum-intel/pull/1009
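PagedAttention stores the KV cache in fixed-size blocks addressed through a per-sequence block table, so cache memory is allocated on demand rather than reserved for the maximum sequence length. The bookkeeping can be pictured with this toy sketch (all names and sizes are illustrative, not the optimum-intel API):

```python
BLOCK_SIZE = 16  # tokens per cache block (illustrative choice)

class PagedKVCache:
    """Toy block-table bookkeeping: logical token slots -> physical blocks."""

    def __init__(self):
        self.free_blocks = list(range(64))  # pool of physical block ids
        self.block_tables = {}              # sequence id -> list of block ids
        self.lengths = {}                   # sequence id -> tokens written

    def append_token(self, seq_id: int) -> tuple:
        """Reserve a slot for one new token; returns (block id, offset)."""
        table = self.block_tables.setdefault(seq_id, [])
        length = self.lengths.get(seq_id, 0)
        if length % BLOCK_SIZE == 0:        # current block full -> grab a new one
            table.append(self.free_blocks.pop())
        self.lengths[seq_id] = length + 1
        return table[-1], length % BLOCK_SIZE

cache = PagedKVCache()
# Writing one token past a block boundary allocates a second block.
slots = [cache.append_token(seq_id=0) for _ in range(BLOCK_SIZE + 1)]
```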

INC
* Layer-wise quantization support by changwangss in https://github.com/huggingface/optimum-intel/pull/1040
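Layer-wise quantization processes one layer at a time, so only a single layer's full-precision weights need to be resident while it is being quantized. A minimal sketch of the per-layer symmetric int8 round-trip (pure Python for illustration; the actual algorithm lives in Intel Neural Compressor):

```python
def quantize_layer(weights):
    """Symmetric int8 quantization of one layer's weights."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize_layer(q, scale):
    return [v * scale for v in q]

# Each layer is handled independently, so full-precision weights can be
# released as soon as that layer is done.
layers = {"fc1": [0.5, -1.0, 0.25], "fc2": [2.0, -0.125]}
quantized = {name: quantize_layer(w) for name, w in layers.items()}
restored = {name: dequantize_layer(q, s) for name, (q, s) in quantized.items()}
```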


New Contributors
* emmanuel-ferdman made their first contribution in https://github.com/huggingface/optimum-intel/pull/974
* mvafin made their first contribution in https://github.com/huggingface/optimum-intel/pull/1033

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.20.0...v1.21.0

1.20.1

* Fix lora unscaling in diffusion pipelines by eaidova in https://github.com/huggingface/optimum-intel/pull/937
* Fix compatibility with diffusers < 0.25.0 by eaidova in https://github.com/huggingface/optimum-intel/pull/952
* Allow to use SDPA in clip models by eaidova in https://github.com/huggingface/optimum-intel/pull/941
* Updated OVPipelinePart to have separate ov_config by e-ddykim in https://github.com/huggingface/optimum-intel/pull/957
* Symbol use in optimum: fix misprint by jane-intel in https://github.com/huggingface/optimum-intel/pull/948
* Fix temporary directory saving by eaidova in https://github.com/huggingface/optimum-intel/pull/959
* Disable warning about tokenizers version for ov tokenizers >= 2024.5 by eaidova in https://github.com/huggingface/optimum-intel/pull/962
* Restore original model_index.json after save_pretrained call by eaidova in https://github.com/huggingface/optimum-intel/pull/961
* Add v4.46 transformers support by echarlaix in https://github.com/huggingface/optimum-intel/pull/960

1.20.0

OpenVINO

Multi-modal models support

Adding `OVModelForVisualCausalLM` by eaidova in https://github.com/huggingface/optimum-intel/pull/883

OpenCLIP models support

Adding OpenCLIP models support by sbalandi in https://github.com/huggingface/optimum-intel/pull/857

```python
from optimum.intel import OVModelOpenCLIPVisual, OVModelOpenCLIPText

visual_model = OVModelOpenCLIPVisual.from_pretrained(model_name_or_path)
text_model = OVModelOpenCLIPText.from_pretrained(model_name_or_path)

# `processor` and `tokenizer` are the usual open_clip preprocessing objects.
image = processor(image).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])
image_features = visual_model(image).image_features
text_features = text_model(text).text_features
```
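The two feature vectors are typically L2-normalized and compared by dot product to rank the candidate captions, as in zero-shot classification. A self-contained sketch of that scoring step with made-up feature values (no model involved):

```python
import math

def normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def similarity(a, b):
    """Cosine similarity: dot product of L2-normalized vectors."""
    return sum(x * y for x, y in zip(normalize(a), normalize(b)))

# Made-up 4-d features standing in for the model outputs above.
image_features = [0.9, 0.1, 0.2, 0.1]
caption_features = {
    "a diagram": [0.1, 0.9, 0.1, 0.1],
    "a dog": [0.88, 0.12, 0.21, 0.08],
    "a cat": [0.2, 0.3, 0.8, 0.1],
}
best = max(caption_features,
           key=lambda c: similarity(image_features, caption_features[c]))
```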


Diffusion pipeline

Adding `OVDiffusionPipeline` to simplify diffusers model loading by IlyasMoutawwakil in https://github.com/huggingface/optimum-intel/pull/889

```diff
  model_id = "stabilityai/stable-diffusion-xl-base-1.0"
- pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id)
+ pipeline = OVDiffusionPipeline.from_pretrained(model_id)
  image = pipeline("sailing ship in storm by Leonardo da Vinci").images[0]
```
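`OVDiffusionPipeline` resolves the concrete pipeline class from the checkpoint's `model_index.json`, so callers no longer need to know it up front. The dispatch can be pictured as a simple name lookup (a toy sketch; the placeholder classes stand in for the real OV pipeline classes):

```python
# Toy dispatch: map the class name recorded in model_index.json to a pipeline.
class OVStableDiffusionPipeline: ...
class OVStableDiffusionXLPipeline: ...

_PIPELINE_REGISTRY = {
    "StableDiffusionPipeline": OVStableDiffusionPipeline,
    "StableDiffusionXLPipeline": OVStableDiffusionXLPipeline,
}

def resolve_pipeline(model_index: dict):
    """Pick the OV pipeline class matching the checkpoint metadata."""
    return _PIPELINE_REGISTRY[model_index["_class_name"]]

cls = resolve_pipeline({"_class_name": "StableDiffusionXLPipeline"})
```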

NNCF GPTQ support

GPTQ support by nikita-savelyevv in https://github.com/huggingface/optimum-intel/pull/912

1.19.0

* Support SentenceTransformers models inference by aleksandr-mokrov in https://github.com/huggingface/optimum-intel/pull/865



```python
from optimum.intel import OVSentenceTransformer

model_id = "sentence-transformers/all-mpnet-base-v2"
model = OVSentenceTransformer.from_pretrained(model_id, export=True)
sentences = ["This is an example sentence", "Each sentence is converted"]
embeddings = model.encode(sentences)
```
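The returned embeddings are typically compared with cosine similarity, e.g. to retrieve the corpus sentence closest to a query. A self-contained sketch of that retrieval step with made-up vectors (no model download):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up 3-d embeddings standing in for model.encode(...) outputs.
corpus = {
    "This is an example sentence": [0.8, 0.1, 0.2],
    "Each sentence is converted": [0.1, 0.9, 0.3],
}
query = [0.7, 0.2, 0.2]
ranked = sorted(corpus, key=lambda s: cosine(query, corpus[s]), reverse=True)
```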



* Infer if the model needs to be exported or not by echarlaix in https://github.com/huggingface/optimum-intel/pull/825

```diff
  from optimum.intel import OVModelForCausalLM

- model = OVModelForCausalLM.from_pretrained("gpt2", export=True)
+ model = OVModelForCausalLM.from_pretrained("gpt2")
```
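Conceptually, the loader can decide whether export is needed by checking whether the checkpoint already contains OpenVINO IR files. A simplified sketch of that decision (the file-layout check is an assumption for illustration; the actual logic lives in optimum-intel):

```python
import tempfile
from pathlib import Path

def needs_export(model_dir: str) -> bool:
    """True when the directory holds no OpenVINO IR (.xml) file yet."""
    return not any(Path(model_dir).glob("*.xml"))

with tempfile.TemporaryDirectory() as d:
    before = needs_export(d)                  # no IR yet -> export required
    (Path(d) / "openvino_model.xml").touch()  # pretend an IR was saved
    after = needs_export(d)                   # IR present -> load directly
```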


Compatible with transformers>=4.36,<=4.44

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.18.0...v1.19.0

1.18.3

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.18.2...v1.18.3
