optimum-intel

Latest version: v1.21.0


1.18.2

* Fix model patching for internlm2 by eaidova in https://github.com/huggingface/optimum-intel/pull/814
* Fix loading models from cache by eaidova in https://github.com/huggingface/optimum-intel/pull/820
* Disable tpp for unverified models by jiqing-feng in https://github.com/huggingface/optimum-intel/pull/822
* Update default NNCF configurations by KodiaqQ in https://github.com/huggingface/optimum-intel/pull/824
* Fix update causal mask for transformers 4.42 by eaidova in https://github.com/huggingface/optimum-intel/pull/852
* Fix bf16 inference accuracy for mistral, phi3, dbrx by eaidova in https://github.com/huggingface/optimum-intel/pull/833
* Revert rotary embedding patching to recover GPU accuracy by eaidova in https://github.com/huggingface/optimum-intel/pull/855
* Support transformers 4.43 by IlyasMoutawwakil in https://github.com/huggingface/optimum-intel/pull/856

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.18.1...v1.18.2

1.18.1

* OV configurations alignment by KodiaqQ in https://github.com/huggingface/optimum-intel/pull/787
* Enable transformers v4.42.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/793
* Deprecate onnx/ort model export and quantization by IlyasMoutawwakil in https://github.com/huggingface/optimum-intel/pull/795
* Free memory after model export by eaidova in https://github.com/huggingface/optimum-intel/pull/800
* Update config import path for neural-compressor v2.6 by changwangss in https://github.com/huggingface/optimum-intel/pull/801
* Pin library name to transformers for feature extraction by IlyasMoutawwakil in https://github.com/huggingface/optimum-intel/pull/804

**Full Changelog**: https://github.com/huggingface/optimum-intel/compare/v1.18.0...v1.18.1

1.18.0

OpenVINO


* Enable Arctic, Jais export by eaidova in https://github.com/huggingface/optimum-intel/pull/726
* Enable GLM-4 export by eaidova in https://github.com/huggingface/optimum-intel/pull/776
* Move data-driven quantization after model export for text-generation models by nikita-savelyevv in https://github.com/huggingface/optimum-intel/pull/721
* Create default token_type_ids when needed for inference by echarlaix in https://github.com/huggingface/optimum-intel/pull/757
* Resolve default int4 config for local models by eaidova in https://github.com/huggingface/optimum-intel/pull/760
* Update to NNCF 2.11 by nikita-savelyevv in https://github.com/huggingface/optimum-intel/pull/763
* Fix quantization config by echarlaix in https://github.com/huggingface/optimum-intel/pull/773
* Expose trust remote code argument when generating calibration dataset for datasets >= v2.20.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/767
* Add pipelines by echarlaix in https://github.com/huggingface/optimum-intel/pull/740

```python
from optimum.intel.pipelines import pipeline

# Load an OpenVINO model
ov_pipe = pipeline("text-generation", "helenai/gpt2-ov", accelerator="openvino")

# Load a PyTorch model and convert it to OpenVINO before inference
pipe = pipeline("text-generation", "gpt2", accelerator="openvino")
```

IPEX

* Enable IPEX patching for llama for >= v2.3 by jiqing-feng in https://github.com/huggingface/optimum-intel/pull/725
* Refactor llama modeling for IPEX patching by faaany in https://github.com/huggingface/optimum-intel/pull/728
* Refactor model loading by jiqing-feng in https://github.com/huggingface/optimum-intel/pull/752

1.17.2

* Fix compatibility with transformers < v4.39.0 by echarlaix in https://github.com/huggingface/optimum-intel/pull/754

1.17.1

* Add setuptools to fix issue with Python 3.12 by helena-intel in https://github.com/huggingface/optimum-intel/pull/747
* Disable warnings by helena-intel in https://github.com/huggingface/optimum-intel/pull/748
* Fix Windows TemporaryDirectory issue by helena-intel in https://github.com/huggingface/optimum-intel/pull/749
* Fix generation config loading and saving by eaidova in https://github.com/huggingface/optimum-intel/pull/750

1.17.0

OpenVINO

* Enable Orion, InternLM2 export by eaidova in https://github.com/huggingface/optimum-intel/pull/628
* Enable OLMo export by eaidova in https://github.com/huggingface/optimum-intel/pull/678
* Enable Phi3 export by eaidova in https://github.com/huggingface/optimum-intel/pull/686
* Enable BioGPT, Cohere, Persimmon, XGLM export by eaidova in https://github.com/huggingface/optimum-intel/pull/709
* Enable Aquila, InternLM, XVERSE export by eaidova in https://github.com/huggingface/optimum-intel/pull/716
* Add OVModelForVision2Seq class by eaidova in https://github.com/huggingface/optimum-intel/pull/634

```python
from transformers import ViTImageProcessor
from optimum.intel import OVModelForVision2Seq

model = OVModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning", export=True)
processor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
inputs = processor(images=image, return_tensors="pt")  # `image` is a PIL image
gen_tokens = model.generate(**inputs)
```

* Introduce OVQuantizationConfig for NNCF quantization by nikita-savelyevv in https://github.com/huggingface/optimum-intel/pull/638
* Enable hybrid StableDiffusion models export via optimum-cli by l-bat in https://github.com/huggingface/optimum-intel/pull/618

```
optimum-cli export openvino --model SimianLuo/LCM_Dreamshaper_v7 --task latent-consistency --dataset conceptual_captions --weight-format int8 <output_dir>
```

* Convert Tokenizers by default by apaniukov in https://github.com/huggingface/optimum-intel/pull/580
* Custom tasks modeling by IlyasMoutawwakil in https://github.com/huggingface/optimum-intel/pull/669
* Add dynamic quantization config by echarlaix in https://github.com/huggingface/optimum-intel/pull/661

```python
from optimum.intel import OVModelForCausalLM, OVDynamicQuantizationConfig

model_id = "meta-llama/Meta-Llama-3-8B"
q_config = OVDynamicQuantizationConfig(bits=8, activations_group_size=32)
model = OVModelForCausalLM.from_pretrained(model_id, export=True, quantization_config=q_config)
```
* Transition to a newer NNCF API for PyTorch model quantization by nikita-savelyevv in https://github.com/huggingface/optimum-intel/pull/630


ITREX

* Add ITREX weight-only quantization support by PenghuiCheng in https://github.com/huggingface/optimum-intel/pull/455

IPEX

* Add IPEX pipeline by jiqing-feng in https://github.com/huggingface/optimum-intel/pull/501
