Optimum-intel

Latest version: v1.22.0


1.6.0

Refactoring of the INC API for neural-compressor v2.0 (118)

The `INCQuantizer` should be used to apply post-training (dynamic or static) quantization.

```python
from transformers import AutoModelForQuestionAnswering
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel.neural_compressor import INCQuantizer

model_name = "distilbert-base-cased-distilled-squad"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
# Load the quantization configuration detailing the quantization we wish to apply
quantization_config = PostTrainingQuantConfig(approach="dynamic")
quantizer = INCQuantizer.from_pretrained(model)
# Apply dynamic quantization and save the resulting model in the given directory
quantizer.quantize(quantization_config=quantization_config, save_directory="quantized_model")
```
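Static quantization additionally requires a calibration dataset to collect activation statistics. A minimal sketch, assuming a sequence-classification model and using the `glue`/`sst2` dataset, a 100-sample calibration set and the preprocessing function below purely for illustration:

```python
from functools import partial

from transformers import AutoModelForSequenceClassification, AutoTokenizer
from neural_compressor.config import PostTrainingQuantConfig
from optimum.intel.neural_compressor import INCQuantizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

def preprocess_function(examples, tokenizer):
    # Tokenize the calibration samples the same way the model expects at inference time
    return tokenizer(examples["sentence"], padding="max_length", max_length=128, truncation=True)

quantization_config = PostTrainingQuantConfig(approach="static")
quantizer = INCQuantizer.from_pretrained(model)
# Build a small calibration set used to compute the activation ranges
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    preprocess_function=partial(preprocess_function, tokenizer=tokenizer),
    num_samples=100,
    dataset_split="train",
)
# Apply static quantization and save the resulting model in the given directory
quantizer.quantize(
    quantization_config=quantization_config,
    calibration_dataset=calibration_dataset,
    save_directory="quantized_model",
)
```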

The `INCTrainer` should be used to apply and combine compression techniques such as pruning, quantization and distillation during training.

```diff
from transformers import TrainingArguments, default_data_collator
-from transformers import Trainer
+from optimum.intel.neural_compressor import INCTrainer
+from neural_compressor import QuantizationAwareTrainingConfig

# Load the quantization configuration detailing the quantization we wish to apply
+quantization_config = QuantizationAwareTrainingConfig()

-trainer = Trainer(
+trainer = INCTrainer(
    model=model,
+   quantization_config=quantization_config,
    args=TrainingArguments("quantized_model", num_train_epochs=3.0),
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    compute_metrics=compute_metrics,
    tokenizer=tokenizer,
    data_collator=default_data_collator,
)
trainer.save_model()
```
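Once created, the `INCTrainer` is driven like the standard `transformers` `Trainer`. Continuing the snippet above, a minimal sketch of running quantization-aware training and evaluation before exporting the compressed model:

```python
# Run quantization-aware training on train_dataset, then evaluate on eval_dataset
train_result = trainer.train()
metrics = trainer.evaluate()
# Export the resulting quantized model to the output directory given in TrainingArguments
trainer.save_model()
```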


To load a quantized model, you can just replace your `AutoModelForXxx` class with the corresponding `INCModelForXxx` class.
```python
from optimum.intel.neural_compressor import INCModelForSequenceClassification

loaded_model_from_hub = INCModelForSequenceClassification.from_pretrained(
    "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
)
```
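The loaded model can then be used like its `transformers` counterpart, for instance through the `pipeline` API; a minimal sketch, reusing `loaded_model_from_hub` from the snippet above:

```python
from transformers import AutoTokenizer, pipeline

model_id = "Intel/distilbert-base-uncased-finetuned-sst-2-english-int8-dynamic"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The quantized INC model is a drop-in replacement for the original model
classifier = pipeline("text-classification", model=loaded_model_from_hub, tokenizer=tokenizer)
print(classifier("He's a dreadful magician."))
```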

1.5.5

* Fix FP16 models output results for dynamic input shapes (https://github.com/huggingface/optimum-intel/pull/139)
* Modify OpenVINO required version (https://github.com/huggingface/optimum-intel/pull/141)
* Improve inference latency for Seq2Seq models using the OpenVINO runtime (https://github.com/huggingface/optimum-intel/pull/131)

1.5.4

Fix the IPEX inference-mode context manager so that it returns the original model when IPEX cannot optimize it (https://github.com/huggingface/optimum-intel/pull/132)

1.5.3

* Fix `GenerationMixin` import for `transformers` version >= 4.25.0 (127)
* Modify temporarily the maximum required `transformers` version until fix in OpenVINO export of GPT2 model (120)

1.5.2

* Fix `OVModel` model configuration loading for `optimum` `v1.5.0` (110)
* Add possibility to save the ONNX model resulting from the OpenVINO export (99)
* Add `OVModel` options for model compilation (108)
* Add default model loading for `IncQuantizedModel` (113)

1.5.1

* Add Neural Compressor torch 1.13 quantization support (95)
* Remove the `IncTrainer`'s deprecated `_load_state_dict_in_model` method (84)
* Add `ModelForVision2Seq` INC support (70)
* Rename `OVModel` device attribute for `transformers` v4.24.0 compatibility (94)
* Rename the OpenVINO model file name (93)
