oneDNN

Latest version: v2024.1.1

3.4.1

This is a patch release containing the following changes to v3.4:
* Fixed an issue with caching and serialization of primitives in deterministic mode (7ed604a1e5688022a59444059e53a6a7967f679a)
* Introduced memory descriptor serialization API (4cad420e673f4cd49568ea7c4dd6a55e6f55794e, 929a27ae0412a0851629da70916eee360a39baac, 9b848c859a6b1d046dd63cf20f817aa9428fb483)
* Fixed incorrect results in fp64 convolution and deconvolution on Intel GPUs based on Xe-LPG architecture (ebe77b566bb1cd273e9bda99cc62063b7c2a7e45, 0b399ac42740a9c6ed458aacafdb31ce16205cbd, d748d642d7871608e09f5cee5d964ddcfc8a42ef, 9f4f3d510ddc9d639db052302be579621d46bb1f, 21a8caebb34a85074f3f8a5cef35ed85532a5bbe)
* Fixed incorrect results in reorder with large sizes on Intel CPUs and GPUs (69a111e6d835f8632ea571f3ea0e273b22488d37, 4b7236134bde1c1a71859a844eae860a71670b97, 74a343bf66a1c8f113fa8e025391aba5015c6e48)
* Reduced creation time for deconvolution primitive on Intel CPUs (bec487e4ae16b3e88382adf9574e9c62cc76d1bd, 1eab00586881f4fb6966a16f71216528ec549c11)
* Fixed performance regression in deconvolution on Intel CPUs (fbe5b97c966696a3f5be2240c0eb4592ed548036, 1dd3c6af03addefcf92ac45eddeb8becf63d6a6e)
* Removed dangling symbols from static builds (e92c4041b12e55837452327c3ebd9411dbc2e861, 6f5621aed75226b93f07879fafa6fb799a36f042)
* Fixed crash during platform detection on some AArch64-based systems (406a0798c1c5b939726a892ad5a96e20298396ca)
* Fixed performance regression in int8 deconvolution on Intel CPUs (7e50e152f21a79978b8910260e042b43941b601c)
* Fixed handling of zero points for matmul in verbose logs converter (15c791686f94291eddda7a2e24835ba1113c530a)

3.4

Performance Optimizations

* Intel Architecture Processors:
* Improved performance for 4th generation Intel Xeon Scalable processors (formerly Sapphire Rapids).
* Improved performance for the future Intel Xeon Scalable processors (code-named Sierra Forest and Granite Rapids). These optimizations are now included by default on compatible processors.
* Improved RNN primitive performance with LBR_GRU cell.
* Improved softmax performance on processors with Intel AVX2 or Intel AVX-512 instruction set support.
* Improved fp32 inner product performance on processors with Intel AVX2 instruction set support.
* Improved fp32, fp16, bf16 matmul primitive performance on processors with Intel AVX-512 and Intel AMX instruction set support.
* Improved int8 matmul performance with transposed A tensor.
* Improved performance of resampling primitive on processors with Intel AVX2 instruction set support.
* Improved performance of int8 convolution with post-ops.
* Optimized batch matmul with binary post-op and broadcast mask `1` and `14`.
* Improved the Scaled Dot Product Attention (SDPA) subgraph performance with Graph API.
* Improved performance of subgraphs including `matmul` and `add` operations and mixed int8 and bfloat16 data types with Graph API.
* **[experimental]** Improved performance of `reduction`, `softmax` and `layernorm` operations with experimental Graph Compiler backend.
* **[experimental]** Improved performance for llama2 MLP subgraph with experimental Graph Compiler backend.

* Intel Graphics Products:
* Introduced initial optimizations for Processor Graphics based on Xe2 architecture.
* Improved performance for the Intel Data Center GPU Max Series (formerly Ponte Vecchio).
* Improved performance for Intel Arc graphics (formerly Alchemist and DG2) and the Intel Data Center GPU Flex Series (formerly Arctic Sound).
* Improved matmul performance for cases relevant to Large Language Models (LLMs) and Transformer-like models.
* Improved convolution performance for cases relevant to the Stable Diffusion model.
* Improved RNN primitive performance.
* Improved pooling forward propagation performance.
* Improved batched matmul performance for cases with 5 dimensions or more.

* AArch64-based Processors:
* Added an option to build oneDNN with macOS Accelerate library to improve performance on Apple silicon.
* Improved reorder primitive performance with Compute Library for the Arm architecture (ACL).
* Improved bf16 inner product primitive performance with ACL.

Functionality
* Introduced [GPT-Q support](https://github.com/igorsafo/oneDNN/tree/rfcs-gpt-quantization/rfcs/20231108-gpt-quantization) to improve Large Language Model (LLM) performance with compressed weights. An optimized implementation is available for Intel Graphics Products and supports [matmul with int8 weight compression](https://oneapi-src.github.io/oneDNN/page_weights_decompression_matmul_cpp.html#doxid-weights-decompression-matmul-cpp); a short attribute sketch follows this list.
* Introduced [fp8 data type](https://oneapi-src.github.io/oneDNN/dev_guide_data_types.html) support in primitives and Graph API. Optimized implementation is available for Intel Data Center GPU Max Series (formerly Ponte Vecchio).
* Introduced support for fp16 and bf16 scale and shift arguments for layer normalization. Optimized implementation is available for Intel Graphics Products.
* **[experimental]** Introduced unstructured sparsity support for processors with Intel AMX support relying on VCOMPRESS/VPEXPAND instructions.
* Intel Graphics Products
* Introduced support for Intel Data Center GPU Max 1550VG.
* Introduced PReLU post-op support for inner product and matmul primitives.
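
The weight-decompression matmul mentioned above is driven through primitive attributes. Below is a minimal, illustrative C++ sketch (not taken from the release notes) of creating an fp32 matmul with int8 compressed weights; the shapes, the bf16 fpmath mode, and the per-channel scale mask are assumptions chosen for the example, and the scale values themselves are supplied as a separate memory argument at execution time.

```cpp
// Minimal sketch (illustrative, not from the release notes): fp32 matmul with
// int8 compressed weights configured via primitive attributes.
#include "oneapi/dnnl/dnnl.hpp"

using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    const memory::dim M = 4, K = 4096, N = 4096;
    // Activations and output stay in fp32; weights are stored as int8.
    memory::desc src_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc wei_md({K, N}, memory::data_type::s8, memory::format_tag::ab);
    memory::desc dst_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

    primitive_attr attr;
    // Allow integer weights to be treated as compressed floating-point data
    // (the bf16 fpmath mode here is an example choice).
    attr.set_fpmath_mode(fpmath_mode::bf16, /*apply_to_int=*/true);
    // Per-output-channel scales for the weights (mask bit 1 selects dim N).
    attr.set_scales_mask(DNNL_ARG_WEIGHTS, 1 << 1);

    matmul::primitive_desc pd(eng, src_md, wei_md, dst_md, attr);
    matmul prim(pd);
    // ... create memory objects, pass the scales as
    // DNNL_ARG_ATTR_SCALES | DNNL_ARG_WEIGHTS, and execute on the stream.
    return 0;
}
```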

Usability
* Added opt-in [deterministic mode](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_deterministic.html) support. Deterministic mode guarantees that results are bitwise identical between runs in a fixed environment; see the attribute sketch after this list.
* Introduced [accumulation mode](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_accumulation_mode.html) control.
* Extended oneDNN verbose diagnostics with information on dispatching decisions in convolution and matmul implementations.
* Extended verbose diagnostics for Graph API with information for operation schema check results and pattern matching results.
* Reduced RNN primitive memory consumption on GPUs.
* Added examples demonstrating use of oneDNN Graph API in eager mode use cases.
* Extended tensor constructor in Graph API to support memory allocation and management by the library.
* Introduced new API and environment variable to manage [Graph API constant tensor cache capacity](https://oneapi-src.github.io/oneDNN/dev_guide_constant_tensor_cache.html).
* Improved the efficiency of pattern matching in Graph API by optimizing pattern registration, reducing the number of patterns, and skipping patterns more intelligently.
* Changed default optimization flags for AArch64 builds to `-mcpu=generic` to improve portability.
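
Both the deterministic mode and accumulation mode controls above are set through primitive attributes. The following is a minimal, illustrative sketch; the relaxed accumulation mode and the way the attribute is applied are example choices, not recommendations from the release notes.

```cpp
// Minimal sketch (illustrative): opting in to deterministic mode and relaxing
// the accumulation data type via primitive attributes.
#include "oneapi/dnnl/dnnl.hpp"

using namespace dnnl;

primitive_attr make_attr() {
    primitive_attr attr;
    // Request bitwise-identical results between runs in a fixed environment.
    attr.set_deterministic(true);
    // Allow the implementation to pick a lower-precision accumulator when
    // profitable; use accumulation_mode::strict to keep the default behavior.
    attr.set_accumulation_mode(accumulation_mode::relaxed);
    return attr;
}
// The returned attribute is then passed to any primitive descriptor
// constructor, e.g. matmul::primitive_desc(eng, src_md, wei_md, dst_md, attr).
```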

Validation
* Improved benchdnn performance by optimizing bottlenecks in validation code.
* Introduced `--num-streams` knob in benchdnn to support benchmarking in multi-stream scenarios.

Known Limitations
* Intel Data Center GPU Flex Series driver for Windows has an issue resulting in program hangs or crashes when oneDNN primitives are created concurrently.
* int8 concat primitive may produce incorrect results on integrated GPUs with the current GPU driver.
* fp32 pooling primitive may produce incorrect results in rare conditions on Intel Data Center GPU Max Series with the current GPU driver.
* reorder primitive causes a segmentation fault for prime sizes exceeding 2^31 on Intel CPUs.
* fp64 convolution and deconvolution produce incorrect results on integrated graphics in future Intel Core processors (code-named Arrow Lake).
* int8 matmul primitive creation with fp32 bias fails on Intel GPU Flex Series and Intel Arc Graphics.

Breaking Changes
* Updated minimal supported ACL version to 23.11 (was 23.02.1).

Thanks to these Contributors
This release contains contributions from the project core team as well as Alexander Grund @Flamefire, David Svantesson @davsva01, Fadi Arafeh @fadara01, Hugh Delaney @hdelan, Ilya Lavrenov @ilya-lavrenov, Jacob Kahn @jacobkahn, Nathan John Sircombe @nSircombe, Renato Barros Arantes @renato-arantes, Sergey Shalnov @shssf, Sunita Nadampalli @snadampal, and Svetlozar Georgiev @sgeor255. We would also like to thank everyone who asked questions and reported issues.

3.4rc

Performance Optimizations

* Intel Architecture Processors:
* Improved performance for 4th generation Intel Xeon Scalable processors (formerly Sapphire Rapids).
* Improved performance for the future Intel Xeon Scalable processors (code-named Sierra Forest and Granite Rapids). These optimizations are now included by default on compatible processors.
* Improved RNN primitive performance with LBR_GRU cell.
* Improved softmax performance on processors with Intel AVX2 or Intel AVX-512 instruction set support.
* Improved fp32 inner product performance on processors with Intel AVX2 instruction set support.
* Improved fp32, fp16, bf16 matmul primitive performance on processors with Intel AVX-512 and Intel AMX instruction set support.
* Improved int8 matmul performance with transposed A tensor.
* Improved performance of resampling primitive on processors with Intel AVX2 instruction set support.
* Improved performance of int8 convolution with post-ops.
* Optimized batch matmul with binary post-op and broadcast mask `1` and `14`.
* Improved the Scaled Dot Product Attention (SDPA) subgraph performance with Graph API.
* Improved performance of subgraphs including `matmul` and `add` operations and mixed int8 and bfloat16 data types with Graph API.
* **[experimental]** Improved performance of `reduction`, `softmax` and `layernorm` operations with experimental Graph Compiler backend.
* **[experimental]** Improved performance for llama2 MLP subgraph with experimental Graph Compiler backend.

* Intel Graphics Products:
* Introduced initial optimizations for Processor Graphics based on Xe2 architecture.
* Improved performance for the Intel Data Center GPU Max Series (formerly Ponte Vecchio).
* Improved performance for Intel Arc graphics (formerly Alchemist and DG2) and the Intel Data Center GPU Flex Series (formerly Arctic Sound).
* Improved matmul performance for cases relevant to Large Language Models (LLMs) and Transformer-like models.
* Improved convolution performance for cases relevant to the Stable Diffusion model.
* Improved RNN primitive performance.
* Improved pooling forward propagation performance.
* Improved batched matmul performance for cases with 5 dimensions or more.

* AArch64-based Processors:
* Added an option to build oneDNN with macOS Accelerate library to improve performance on Apple silicon.
* Improved reorder primitive performance with Compute Library for the Arm architecture (ACL).
* Improved bf16 inner product primitive performance with ACL.

Functionality
* Introduced [GPT-Q support](https://github.com/igorsafo/oneDNN/tree/rfcs-gpt-quantization/rfcs/20231108-gpt-quantization) to improve Large Language Model (LLM) performance with compressed weights. An optimized implementation is available for Intel Graphics Products and supports [matmul with int8 weight compression](https://oneapi-src.github.io/oneDNN/page_weights_decompression_matmul_cpp.html#doxid-weights-decompression-matmul-cpp).
* Introduced [fp8 data type](https://oneapi-src.github.io/oneDNN/dev_guide_data_types.html) support in primitives and Graph API. Optimized implementation is available for Intel Data Center GPU Max Series (formerly Ponte Vecchio).
* Introduced support for fp16 and bf16 scale and shift arguments for layer normalization. Optimized implementation is available for Intel Graphics Products.
* **[experimental]** Introduced unstructured sparsity support for processors with Intel AMX support relying on VCOMPRESS/VPEXPAND instructions.
* Intel Graphics Products
* Introduced PReLU post-op support for inner product and matmul primitives.

Usability
* Added opt-in [deterministic mode](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_deterministic.html) support. Deterministic mode guarantees that results are bitwise identical between runs in a fixed environment.
* Introduced [accumulation mode](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_accumulation_mode.html) control.
* Extended oneDNN verbose diagnostics with information on dispatching decisions in convolution and matmul implementations.
* Extended verbose diagnostics for Graph API with information for operation schema check results and pattern matching results.
* Reduced RNN primitive memory consumption on GPUs.
* Added examples demonstrating use of oneDNN Graph API in eager mode use cases.
* Extended tensor constructor in Graph API to support memory allocation and management by the library.
* Introduced new API and environment variable to manage [Graph API constant tensor cache capacity](https://oneapi-src.github.io/oneDNN/dev_guide_constant_tensor_cache.html).
* Improved the efficiency of pattern matching in Graph API by optimizing pattern registration, reducing the number of patterns, and skipping patterns more intelligently.
* Changed default optimization flags for AArch64 builds to `-mcpu=generic` to improve portability.

Validation
* Improved benchdnn performance by optimizing bottlenecks in validation code.
* Introduced `--num-streams` knob in benchdnn to support benchmarking in multi-stream scenarios.

Breaking Changes
* Updated minimal supported ACL version to 23.11 (was 23.02.1).

Thanks to these Contributors
This release contains contributions from the project core team as well as Alexander Grund @Flamefire, David Svantesson @davsva01, Fadi Arafeh @fadara01, Hugh Delaney @hdelan, Ilya Lavrenov @ilya-lavrenov, Jacob Kahn @jacobkahn, Nathan John Sircombe @nSircombe, Renato Barros Arantes @renato-arantes, Sergey Shalnov @shssf, Sunita Nadampalli @snadampal, and Svetlozar Georgiev @sgeor255. We would also like to thank everyone who asked questions and reported issues.

3.3.6

This is a patch release containing the following changes to v3.3.5:
* Fixed crash during platform detection on some AArch64-based systems (3e0e69b21ba0694db95bd2af0877f936dcc86dd2)
* Improved inner product performance with Arm Compute Library (ACL) (e7abee2d883d41613cf243c135037fc68d2dacd0, 214fb9e14227880097729ffffac3b666a0febcd7, 8aacc8ff0dfefddfae30681d056757dba1fb0815)
* Fixed incorrect results in int8 depthwise convolution with post-ops on processors with Intel AVX2 instruction set support (0c922e04df62cf3042ebdc578a72883bde35079a)
* Fixed performance regression in fp32 convolution on processors with Intel AVX2 instruction set support (4efc0ad7234741459bab6afc21f571ddb645bcae)

3.3.5

This is a patch release containing the following changes to v3.3.4:
* Fixed undefined behavior in 3D depthwise convolution on Intel CPUs (bbaec145f8c64818fd5c3ed2cb9e2ae69daef887)
* Added warning for ACL versions newer than maximum supported (7473012743ae3227dbfa208cad260d29d86d5080)
* Added citation file (fea9f88fa7f8056a5addedfdebdb2dda35ee7a9d)
* Fixed `SEGFAULT` in int8 convolution on processors with Intel AMX support (2a8e122b63b55f897c470d23f21003bb70f0e839)

3.3.4

This is a patch release containing the following changes to v3.3.3:
* Fixed performance regression in convolution, matmul and inner product primitives with post-ops on Intel CPUs (2e3c94c5aeb6be1ce992d799943fdc4f3123905f)
* Fixed performance regression in bfloat16 matmul on processors with Intel AMX instruction set support (c0ae38cdf1201caf8ffd2906077defdfe7f4aaa3, fa4364057891fdec528d9442c88d0715306bff2d)
* Fixed `SEGFAULT` in 3D convolutions with different `h` and `w` parameters on Intel CPUs (b5f916ec068f783dbba2cd4f04a673e996f9efba)
* Fixed performance regression in fp32 convolution backpropagation on Intel CPUs (ee3b12d5388d7d749a120cf8522efd6f5aeecc09)
* Reduced benchdnn memory consumption on Intel GPUs (84a8f57d45f215cf89d0f80a57a66b78eaf9b440)
