Intel-extension-for-pytorch

Latest version: v2.2.0


1.12.300

Highlights

- Optimize BF16 MHA fusion to avoid transpose overhead to boost BERT-* BF16 performance [992](https://github.com/intel/intel-extension-for-pytorch/commit/7076524601f42a9b60402019af21b32782c2c203)
- Remove 64bytes alignment constraint for FP32 and BF16 AddLayerNorm fusion [992](https://github.com/intel/intel-extension-for-pytorch/commit/7076524601f42a9b60402019af21b32782c2c203)
- Fix INT8 RetinaNet accuracy issue [1032](https://github.com/intel/intel-extension-for-pytorch/commit/e0c719be8246041f8b7bc5feca9cf9c2f599210a)
- Fix `Cat.out` issue where the `out` tensor was not updated (1053) [1074](https://github.com/intel/intel-extension-for-pytorch/commit/4381f9126bbb65aab2daf034299c3bf3d307e6e2)

**Full Changelog**: https://github.com/intel/intel-extension-for-pytorch/compare/v1.12.100...v1.12.300

1.12.100

This is a patch release to fix the AVX2 issue that blocks running on non-AVX512 platforms.

1.12.0

We are excited to bring you the release of Intel® Extension for PyTorch* [1.12.0-cpu](https://github.com/pytorch/pytorch/releases/tag/v1.12.0), tightly following the PyTorch 1.12 release. In this release, we matured automatic INT8 quantization into a stable feature. We stabilized the Runtime Extension and added a MultiStreamModule feature to further boost throughput in offline inference scenarios. We also made various enhancements to operations and graphs that benefit the performance of a broad set of workloads.

- Automatic INT8 quantization became a stable feature, baked into a well-tuned default quantization recipe and supporting both static and dynamic quantization and a wide range of calibration algorithms.
- Runtime Extension, featuring MultiStreamModule, became a stable feature that can further enhance throughput in offline inference scenarios.
- More optimizations in graph and operations improve the performance of a broad set of models; examples include, but are not limited to, wav2vec, T5, and Albert.
- A pre-built experimental binary with the oneDNN Graph Compiler turned on delivers additional performance gains for Bert, Albert, and Roberta in INT8 inference.

Highlights
- Matured the automatic INT8 quantization feature and baked it into a well-tuned default quantization recipe. We improved the user experience and provide a wide range of calibration algorithms, such as Histogram, MinMax, and MovingAverageMinMax. Meanwhile, we polished static quantization with better flexibility and enabled dynamic quantization as well. Compared to the previous version, the main changes are as follows. Refer to the [tutorial page](features/int8.md) for more details.

<table align="center">
<tbody>
<tr>
<td>v1.11.0-cpu</td>
<td>v1.12.0-cpu</td>
</tr>
<tr>
<td valign="top">

```python
import torch
import intel_extension_for_pytorch as ipex

# Calibrate the model
qconfig = ipex.quantization.QuantConf(qscheme=torch.per_tensor_affine)
for data in calibration_data_set:
    with ipex.quantization.calibrate(qconfig):
        model_to_be_calibrated(data)
qconfig.save('qconfig.json')

# Convert the model to a jit model
conf = ipex.quantization.QuantConf('qconfig.json')
with torch.no_grad():
    traced_model = ipex.quantization.convert(model, conf, example_input)

# Do inference
y = traced_model(x)
```

</td>
<td valign="top">

```python
import torch
import intel_extension_for_pytorch as ipex

# Calibrate the model
qconfig = ipex.quantization.default_static_qconfig  # Histogram calibration algorithm
calibrated_model = ipex.quantization.prepare(model_to_be_calibrated, qconfig, example_inputs=example_inputs)
for data in calibration_data_set:
    calibrated_model(data)

# Convert the model to a jit model
quantized_model = ipex.quantization.convert(calibrated_model)
with torch.no_grad():
    traced_model = torch.jit.trace(quantized_model, example_input)
    traced_model = torch.jit.freeze(traced_model)

# Do inference
y = traced_model(x)
```

</td>
</tr>
</tbody>
</table>
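
The calibration algorithms named above all reduce observed value ranges to quantization parameters. As a rough illustration of the arithmetic (a plain-Python sketch, not the IPEX API), a MinMax observer maps the observed range to an affine scale and zero point:

```python
# Illustrative sketch in plain Python (not the IPEX API): how a MinMax
# observer turns an observed value range into affine quantization parameters.

def minmax_affine_params(observed, qmin=0, qmax=255):
    """Compute (scale, zero_point) for asymmetric uint8 quantization."""
    lo, hi = min(observed), max(observed)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the representable range must contain zero
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    return max(qmin, min(qmax, round(x / scale) + zero_point))

# "Calibration" pass: observe representative data, then derive the parameters.
scale, zp = minmax_affine_params([-1.0, -0.25, 0.0, 0.5, 3.0])
assert quantize(0.0, scale, zp) == zp  # real zero maps exactly to the zero point
```

A Histogram observer differs, roughly speaking, only in how it picks `lo` and `hi` (clipping outliers to reduce quantization error) before applying the same mapping.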

- Runtime Extension, featuring MultiStreamModule, became a stable feature. In this release, we enhanced the heuristic rule to further improve throughput in offline inference scenarios. Meanwhile, we also provide `ipex.cpu.runtime.MultiStreamModuleHint` to customize how to split the input into streams and concatenate the output of each stream.

<table align="center">
<tbody>
<tr>
<td>v1.11.0-cpu</td>
<td>v1.12.0-cpu</td>
</tr>
<tr>
<td valign="top">

```python
import intel_extension_for_pytorch as ipex

# Create CPU pool
cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)

# Create multi-stream model
multi_stream_model = ipex.cpu.runtime.MultiStreamModule(model, num_streams=2, cpu_pool=cpu_pool)
```

</td>
<td valign="top">

```python
import intel_extension_for_pytorch as ipex

# Create CPU pool
cpu_pool = ipex.cpu.runtime.CPUPool(node_id=0)

# Optional hints
multi_stream_input_hint = ipex.cpu.runtime.MultiStreamModuleHint(0)
multi_stream_output_hint = ipex.cpu.runtime.MultiStreamModuleHint(0)

# Create multi-stream model
multi_stream_model = ipex.cpu.runtime.MultiStreamModule(model, num_streams=2, cpu_pool=cpu_pool,
                                                        multi_stream_input_hint,   # optional
                                                        multi_stream_output_hint)  # optional
```

</td>
</tr>
</tbody>
</table>
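
Conceptually, a MultiStreamModule splits the input batch into `num_streams` sub-batches, runs each sub-batch on its own stream bound to the CPU pool, and concatenates the per-stream outputs. A minimal sketch of that split/run/concat flow in plain Python, with threads standing in for streams (not the IPEX runtime API):

```python
# Conceptual sketch (plain Python threads, not the IPEX runtime API):
# split the batch across streams, run the chunks concurrently, and
# concatenate the per-stream outputs in the original order.
from concurrent.futures import ThreadPoolExecutor

def multi_stream_run(model, batch, num_streams):
    # Split as evenly as possible; when len(batch) is not divisible by
    # num_streams, the chunks end up unequal in size.
    base, extra = divmod(len(batch), num_streams)
    chunks, start = [], 0
    for i in range(num_streams):
        size = base + (1 if i < extra else 0)
        chunks.append(batch[start:start + size])
        start += size
    with ThreadPoolExecutor(max_workers=num_streams) as pool:
        outputs = list(pool.map(model, chunks))   # one "stream" per chunk
    return [y for out in outputs for y in out]    # concatenate the outputs

double = lambda chunk: [2 * x for x in chunk]
assert multi_stream_run(double, [1, 2, 3, 4, 5], num_streams=2) == [2, 4, 6, 8, 10]
```

The unequal chunk sizes in the indivisible case are exactly why batch sizes that do not divide evenly by the stream count are problematic (see the Known Issues below for the real feature's caveat).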

- Polished `ipex.optimize` to accept input shape information, which helps deduce the optimal memory layout for better kernel efficiency.

<table align="center">
<tbody>
<tr>
<td>v1.11.0-cpu</td>
<td>v1.12.0-cpu</td>
</tr>
<tr>
<td valign="top">

```python
import torch
import intel_extension_for_pytorch as ipex

model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = ipex.optimize(model, dtype=torch.bfloat16)
```

</td>
<td valign="top">

```python
import torch
import intel_extension_for_pytorch as ipex

model = ...
model.load_state_dict(torch.load(PATH))
model.eval()
optimized_model = ipex.optimize(model, dtype=torch.bfloat16, sample_input=input)
```

</td>
</tr>
</tbody>
</table>

- Provided a pre-built experimental binary with the oneDNN Graph Compiler turned on, which delivers additional performance gains for Bert, Albert, and Roberta in INT8 inference.

- Provided more optimizations in graph and operations
- Fuse Adam to improve training performance [822](https://github.com/intel/intel-extension-for-pytorch/commit/d3f714e54dc8946675259ea7a445b26a2460b523)
- Enable Normalization operators to support channels-last 3D [642](https://github.com/intel/intel-extension-for-pytorch/commit/ae268ac1760d598a29584de5c99bfba46c6554ae)
- Support Deconv3D to serve most models and implement most fusions like Conv
- Enable LSTM to support static and dynamic quantization [692](https://github.com/intel/intel-extension-for-pytorch/commit/2bf8dba0c380a26bbb385e253adbfaa2a033a785)
- Enable Linear to support dynamic quantization [787](https://github.com/intel/intel-extension-for-pytorch/commit/ff231fb55e33c37126a0ef7f0e739cd750d1ef6c)
- Fusions.
- Fuse `Add` + `Swish` to accelerate FSI Riskful model [551](https://github.com/intel/intel-extension-for-pytorch/commit/cc855ff2bafd245413a6111f3d21244d0bcbb6f6)
- Fuse `Conv` + `LeakyReLU` [589](https://github.com/intel/intel-extension-for-pytorch/commit/dc6ed1a5967c644b03874fd1f8a503f0b80be6bd)
- Fuse `BMM` + `Add` [407](https://github.com/intel/intel-extension-for-pytorch/commit/d1379aa565cc84b4a61b537ba2c9a046b7652f1a)
- Fuse `Concat` + `BN` + `ReLU` [647](https://github.com/intel/intel-extension-for-pytorch/commit/cad3f82f6b7efed0c08b2f0c11117a4720f58df4)
- Optimize `Convolution1D` to support channels last memory layout and fuse `GeLU` as its post operation. [657](https://github.com/intel/intel-extension-for-pytorch/commit/a0c063bdf4fd1a7e66f8a23750ac0c2fe471a559)
- Fuse `Einsum` + `Add` to boost Alphafold2 [674](https://github.com/intel/intel-extension-for-pytorch/commit/3094f346a67c81ad858ad2a80900fab4c3b4f4e9)
- Fuse `Linear` + `Tanh` [711](https://github.com/intel/intel-extension-for-pytorch/commit/b24cc530b1fd29cb161a76317891e361453333c9)
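
What these fusions buy is fewer passes over memory: the post-op is applied while the producer's result is still hot, instead of materializing an intermediate tensor. A toy sketch of the idea in plain Python, using `Linear` + `Tanh` as the example (scalar weight and bias, purely illustrative):

```python
# Toy sketch (plain Python): why fusing an op with its post-op helps.
# Unfused: two passes over the data plus an intermediate buffer.
# Fused: a single pass with no intermediate.
import math

def linear(xs, w, b):
    return [w * x + b for x in xs]                    # pass 1, materializes a list

def unfused_linear_tanh(xs, w, b):
    return [math.tanh(y) for y in linear(xs, w, b)]   # pass 2

def fused_linear_tanh(xs, w, b):
    return [math.tanh(w * x + b) for x in xs]         # one pass, same result

xs = [0.0, 0.5, -1.0]
assert unfused_linear_tanh(xs, 2.0, 0.1) == fused_linear_tanh(xs, 2.0, 0.1)
```

The real fusions operate on tensors inside the JIT graph, but the saving is the same: one kernel launch and one traversal instead of two.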

Known Issues
- `RuntimeError: Overflow when unpacking long` occurs when a tensor's min/max value exceeds the int range while performing INT8 calibration. Please customize QConfig to use the min-max calibration method.
- Calibrating with `quantize_per_tensor`, when benchmarking with 1 OpenMP\* thread, results might be incorrect with large tensors (find more detailed info [here](https://github.com/pytorch/pytorch/issues/80501)). Editing your code following the pseudocode below can work around this issue if you do need to explicitly set `OMP_NUM_THREADS=1` for benchmarking. However, there could be a performance regression if the oneDNN graph compiler prototype feature is utilized.

Workaround pseudocode:

```python
# Perform convert/trace/freeze with omp_num_threads > 1 (N)
torch.set_num_threads(N)
prepared_model = prepare(model, input)
converted_model = convert(prepared_model)
traced_model = torch.jit.trace(converted_model, input)
freezed_model = torch.jit.freeze(traced_model)
# Run the frozen model once to apply the optimization pass
freezed_model(input)

# Benchmark with omp_num_threads = 1
torch.set_num_threads(1)
run_benchmark(freezed_model, input)
```

- Low performance with INT8 support for dynamic shapes
The support for dynamic shapes in Intel® Extension for PyTorch\* INT8 integration is still a work in progress. When the input shapes are dynamic, for example inputs of variable image sizes in an object detection task or of variable sequence lengths in NLP tasks, the Intel® Extension for PyTorch\* INT8 path may slow down the model inference. In this case, use stock PyTorch INT8 functionality.
**Note**: When using the Runtime Extension feature, if the batch size cannot be divided evenly by the number of streams, the mini batch sizes on the streams are not equal, and scripts may run into this issue.
- BF16 AMP(auto-mixed-precision) runs abnormally with the extension on the AVX2-only machine if the topology contains `Conv`, `Matmul`, `Linear`, and `BatchNormalization`
- Runtime extension of MultiStreamModule doesn't support DLRM inference, since the input of DLRM (EmbeddingBag specifically) can't simply be batch split.
- Runtime extension of MultiStreamModule has poor RNN-T inference performance compared with native throughput mode. Only part of the RNN-T model (`joint_net` specifically) can be jit traced into a graph, and in one batch inference `joint_net` is invoked multiple times. This increases the overhead of MultiStreamModule: input batch splitting, thread synchronization, and output concatenation.
- Incorrect Conv and Linear result if the number of OMP threads is changed at runtime
The oneDNN memory layout depends on the number of OMP threads, which requires the caller to detect changes in the number of OMP threads; this release has not implemented that detection yet.
- Low throughput with DLRM FP32 Train
A 'Sparse Add' [PR](https://github.com/pytorch/pytorch/pull/23057) is pending on review. The issue will be fixed when the PR is merged.
- If inference is done with a custom function, `conv+bn` folding feature of the `ipex.optimize()` function doesn't work.

```python
import torch
import intel_pytorch_extension as ipex

class Module(torch.nn.Module):
    def __init__(self):
        super(Module, self).__init__()
        self.conv = torch.nn.Conv2d(1, 10, 5, 1)
        self.bn = torch.nn.BatchNorm2d(10)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

    def inference(self, x):
        return self.forward(x)

if __name__ == '__main__':
    m = Module()
    m.eval()
    m = ipex.optimize(m, dtype=torch.float32, level="O0")
    d = torch.rand(1, 1, 112, 112)
    with torch.no_grad():
        m.inference(d)
```

This is a PyTorch FX limitation. You can avoid this error by calling `m = ipex.optimize(m, level="O0")`, which doesn't apply ipex optimization, or disable `conv+bn` folding by calling `m = ipex.optimize(m, level="O1", conv_bn_folding=False)`.
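
For background, the `conv+bn` folding involved here relies on BatchNorm being a fixed affine transform at inference time, so its scale and shift can be folded into the convolution's weight and bias. A scalar sketch of the standard folding formula (plain Python, with one output channel as a stand-in for the per-channel case):

```python
# Background sketch (plain Python): the arithmetic of conv+bn folding.
# At inference, BatchNorm is an affine transform with fixed statistics,
# so it can be folded into the preceding convolution's weight and bias.
import math

def fold_conv_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    s = gamma / math.sqrt(var + eps)
    return w * s, (b - mean) * s + beta   # folded (weight, bias)

def batchnorm(y, gamma, beta, mean, var, eps=1e-5):
    return gamma * (y - mean) / math.sqrt(var + eps) + beta

# Scalar stand-in for one output channel: conv(x) = w * x + b.
w, b, gamma, beta, mean, var = 0.8, 0.1, 1.5, -0.2, 0.3, 0.04
wf, bf = fold_conv_bn(w, b, gamma, beta, mean, var)
x = 2.0
assert abs((wf * x + bf) - batchnorm(w * x + b, gamma, beta, mean, var)) < 1e-9
```

The folding pass needs to see `conv` followed by `bn` in the traced graph, which is why routing inference through a custom function (invisible to PyTorch FX) prevents it.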

1.11.200

Highlights

- Enable more fused operators to accelerate particular models.
- Fuse `Convolution` and `LeakyReLU` ([648](https://github.com/intel/intel-extension-for-pytorch/commit/d7603133f37375b3aba7bf744f1095b923ba979e))
- Support [`torch.einsum`](https://pytorch.org/docs/stable/generated/torch.einsum.html) and fuse it with `add` ([#684](https://github.com/intel/intel-extension-for-pytorch/commit/b66d6d8d0c743db21e534d13be3ee75951a3771d))
- Fuse `Linear` and `Tanh` ([685](https://github.com/intel/intel-extension-for-pytorch/commit/f0f2bae96162747ed2a0002b274fe7226a8eb200))
- In addition to the existing installation methods, this release provides Docker installation from [DockerHub](https://hub.docker.com/).
- Provide the [evaluation wheel packages](https://intel.github.io/intel-extension-for-pytorch/1.11.200/tutorials/installation.html#installation_onednn_graph_compiler) that could boost performance for selective topologies on top of oneDNN graph compiler prototype feature.
***NOTE***: This is still at the early development stage and not fully mature yet, but feel free to reach out through GitHub tickets if you have any suggestions.

**Full Changelog**: https://github.com/intel/intel-extension-for-pytorch/compare/v1.11.0...v1.11.200

1.11.0

We are excited to announce Intel® Extension for PyTorch* 1.11.0-cpu release by tightly following PyTorch 1.11 release. Along with extension 1.11, we focused on continually improving OOB user experience and performance. Highlights include:

* Support a single binary with runtime dynamic dispatch based on AVX2/AVX512 hardware ISA detection
* Support install binary from `pip` with package name only (without the need of specifying the URL)
* Provide the C++ SDK installation to facilitate ease of C++ app development and deployment
* Add more optimizations, including graph fusions for speeding up Transformer-based models and CNN, etc
* Reduce the binary size for both the PIP wheel and C++ SDK (2X to 5X reduction from the previous version)

Highlights
- Combined the AVX2 and AVX512 binaries into a single binary that automatically dispatches to different implementations based on hardware ISA detection at runtime. The typical use case is a data center that mixes AVX2-only and AVX512 platforms; compared to the previous version, there is no need to deploy separate ISA-specific binaries.

***NOTE***: The extension uses the oneDNN library as the backend. However, the BF16 and INT8 operator sets and features are different between AVX2 and AVX512. Please refer to [oneDNN document](https://oneapi-src.github.io/oneDNN/dev_guide_int8_computations.html#processors-with-the-intel-avx2-or-intel-avx-512-support) for more details.
> When one input is of type u8, and the other one is of type s8, oneDNN assumes that it is the user’s responsibility to choose the quantization parameters so that no overflow/saturation occurs. For instance, a user can use u7 [0, 127] instead of u8 for the unsigned input, or s7 [-64, 63] instead of the s8 one. It is worth mentioning that this is required only when the Intel AVX2 or Intel AVX512 Instruction Set is used.
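
The arithmetic behind that note: on these ISAs (without VNNI), pairs of u8×s8 products are accumulated into a signed 16-bit intermediate, so two full-range products can saturate it, while restricting the unsigned side to u7 keeps the worst case in range. A quick check of the bounds:

```python
# Arithmetic behind the quoted oneDNN note: on AVX2/AVX512 (without VNNI),
# pairs of u8*s8 products are summed into a signed 16-bit intermediate.
S16_MAX = 2**15 - 1  # 32767

# Worst case with a full-range u8 [0, 255] input: two products overflow s16.
assert 2 * 255 * 127 > S16_MAX    # 64770 would saturate

# Restricting the unsigned input to u7 [0, 127] keeps the pair sum in range.
assert 2 * 127 * 127 <= S16_MAX   # 32258 fits
```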

- The extension wheel packages have been uploaded to [pypi.org](https://pypi.org/project/intel-extension-for-pytorch/). Users can now install the extension directly with `pip`/`pip3`, without explicitly specifying the binary location URL.

<table align="center">
<tbody>
<tr>
<td>v1.10.100-cpu</td>
<td>v1.11.0-cpu</td>
</tr>
<tr>
<td>

```bash
python -m pip install intel_extension_for_pytorch==1.10.100 -f https://software.intel.com/ipex-whl-stable
```

</td>
<td>

```bash
pip install intel_extension_for_pytorch
```

</td>
</tr>
</tbody>
</table>

- Compared to the previous version, this release provides a dedicated installation file for the C++ SDK. The installation file automatically detects the PyTorch C++ SDK location and installs the extension C++ SDK files to the PyTorch C++ SDK. The user does not need to manually add the extension C++ SDK source files and CMake to the PyTorch SDK. In addition to that, the installation file reduces the C++ SDK binary size from ~220MB to ~13.5MB.

<table align="center">
<tbody>
<tr>
<td>v1.10.100-cpu</td>
<td>v1.11.0-cpu</td>
</tr>
<tr>
<td>

```
intel-ext-pt-cpu-libtorch-shared-with-deps-1.10.0+cpu.zip (220M)
intel-ext-pt-cpu-libtorch-cxx11-abi-shared-with-deps-1.10.0+cpu.zip (224M)
```

</td>
<td>

```
libintel-ext-pt-1.11.0+cpu.run (13.7M)
libintel-ext-pt-cxx11-abi-1.11.0+cpu.run (13.5M)
```

</td>
</tr>
</tbody>
</table>

- Add more optimizations, including more custom operators and fusions.

- Fuse the QKV linear operators as a single Linear to accelerate the Transformer*(BERT-*) encoder part - [278](https://github.com/intel/intel-extension-for-pytorch/commit/0f27c269cae0f902973412dc39c9a7aae940e07b).
- Remove Multi-Head-Attention fusion limitations to support the 64bytes unaligned tensor shape. [531](https://github.com/intel/intel-extension-for-pytorch/commit/dbb10fedb00c6ead0f5b48252146ae9d005a0fad)
- Fold the binary operator to Convolution and Linear operator to reduce computation. [432](https://github.com/intel/intel-extension-for-pytorch/commit/564588561fa5d45b8b63e490336d151ff1fc9cbc) [#438](https://github.com/intel/intel-extension-for-pytorch/commit/b4e7dacf08acd849cecf8d143a11dc4581a3857f) [#602](https://github.com/intel/intel-extension-for-pytorch/commit/74aa21262938b923d3ed1e6929e7d2b629b3ff27)
- Replace out-of-place operators with their corresponding in-place versions to reduce the memory footprint. The extension currently supports the operators `silu`, `sigmoid`, `tanh`, `hardsigmoid`, `hardswish`, `relu6`, `relu`, `selu`, and `softmax`. [524](https://github.com/intel/intel-extension-for-pytorch/commit/38647677e8186a235769ea519f4db65925eca33c)
- Fuse the Concat + BN + ReLU as a single operator. [452](https://github.com/intel/intel-extension-for-pytorch/commit/275ff503aea780a6b741f04db5323d9529ee1081)
- Optimize Conv3D for both imperative and JIT by enabling NHWC and pre-packing the weight. [425](https://github.com/intel/intel-extension-for-pytorch/commit/ae33faf62bb63b204b0ee63acb8e29e24f6076f3)

- Reduce the binary size. The C++ SDK is reduced from ~220MB to ~13.5MB, while the wheel package is reduced from ~100MB to ~40MB.

- Update oneDNN and oneDNN graph to [2.5.2](https://github.com/oneapi-src/oneDNN/releases/tag/v2.5.2) and [0.4.2](https://github.com/oneapi-src/oneDNN/releases/tag/graph-v0.4.2) respectively.

Known Issues
- BF16 AMP(auto-mixed-precision) runs abnormally with the extension on the AVX2-only machine if the topology contains `Conv`, `Matmul`, `Linear`, and `BatchNormalization`

- Runtime extension does not support scenarios where the batch size is not divisible by the number of streams

- Incorrect Conv and Linear result if the number of OMP threads is changed at runtime

The oneDNN memory layout depends on the number of OMP threads, which requires the caller to detect changes in the number of OMP threads; this release has not implemented that detection yet.

- INT8 performance of EfficientNet and DenseNet with the extension is slower than that of FP32

- Low performance with INT8 support for dynamic shapes

The support for dynamic shapes in Intel® Extension for PyTorch* INT8 integration is still a work in progress. For use cases where the input shapes are dynamic, for example inputs of variable image sizes in an object detection task or of variable sequence lengths in NLP tasks, the Intel® Extension for PyTorch* INT8 path may slow down the model inference. In this case, please utilize stock PyTorch INT8 functionality.

- Low throughput with DLRM FP32 Train

A ‘Sparse Add’ [PR](https://github.com/pytorch/pytorch/pull/23057) is pending on review. The issue will be fixed when the PR is merged.

- If the inference is done with a custom function, the `conv+bn` folding feature of the `ipex.optimize()` function doesn't work.

```python
import torch
import intel_pytorch_extension as ipex

class Module(torch.nn.Module):
    def __init__(self):
        super(Module, self).__init__()
        self.conv = torch.nn.Conv2d(1, 10, 5, 1)
        self.bn = torch.nn.BatchNorm2d(10)
        self.relu = torch.nn.ReLU()

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        x = self.relu(x)
        return x

    def inference(self, x):
        return self.forward(x)

if __name__ == '__main__':
    m = Module()
    m.eval()
    m = ipex.optimize(m, dtype=torch.float32, level="O0")
    d = torch.rand(1, 1, 112, 112)
    with torch.no_grad():
        m.inference(d)
```

This is a PyTorch FX limitation. You can avoid this error by calling `m = ipex.optimize(m, level="O0")`, which doesn't apply the extension optimizations, or disable `conv+bn` folding by calling `m = ipex.optimize(m, level="O1", conv_bn_folding=False)`.

What's Changed
**Full Changelog**: https://github.com/intel/intel-extension-for-pytorch/compare/v1.10.100...v1.11.0

1.10.100

This release is meant to fix the following issues:

- Resolve the issue that PyTorch Tensor Expression (TE) did not work after importing the extension.
- Wrap BatchNorm (BN) as another operator to break TE's BN-related fusions, because the BatchNorm performance of PyTorch Tensor Expression cannot match that of PyTorch ATen BN.
- Update the [documentation](https://intel.github.io/intel-extension-for-pytorch/)
- Fix the INT8 quantization example issue 205
- Polish the installation guide

**Full Changelog**: https://github.com/intel/intel-extension-for-pytorch/compare/v1.10.0...v1.10.100
