oneDNN

Latest version: v2025.0.0


1.6

Performance optimizations

Intel Architecture processors
* Introduced initial int8 optimizations for the future Intel Xeon Scalable processor (code-named Sapphire Rapids). The functionality is disabled by default and should be enabled via [CPU dispatcher control](https://oneapi-src.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html).
* Improved matmul and inner product performance with bfloat16 data type.
* Improved performance of `tanh` algorithm for eltwise primitive and LSTM cells.
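The CPU dispatcher control linked above is documented as an environment variable (`DNNL_MAX_CPU_ISA` in the v1.x naming). As a minimal sketch, it can be set from Python before loading any framework that links against oneDNN; the framework import below is a placeholder, not part of this release:

```python
import os

# The dispatcher control must be set before oneDNN is first loaded,
# i.e. before importing a framework that links against it.
# "ALL" permits every ISA the library supports, including
# preview code paths that are disabled by default.
os.environ["DNNL_MAX_CPU_ISA"] = "ALL"

# ... import your oneDNN-backed framework here ...
```

Once the library has been initialized, later changes to the variable have no effect, which is why it must precede the first import.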

Intel Processor Graphics and Xe architecture-based Graphics
* Improved performance of Convolution, RNN, Inner Product and Matmul functionality for all supported GPUs.
* Improved performance of int8 convolutions with activations in NHWC format for Xe architecture-based Graphics (code-named DG1 and Tiger Lake).

AArch64-based processors
* Added support for ArmPL library to improve performance of functionality relying on GEMM (matmul, inner product, convolutions).

New Functionality
* Introduced support for processors based on IBM POWER architecture.
* Introduced Linear-Before-Reset GRU for GPU.
* Extended [eltwise primitive](https://oneapi-src.github.io/oneDNN/group__dnnl__api__eltwise.html) with support for `round` operation.
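The new Linear-Before-Reset GRU differs from the vanilla GRU in where the reset gate is applied. As a conceptual sketch (not the library's API), here is a single-unit, pure-Python version of one cell step; the weight/bias naming is purely illustrative:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lbr_gru_cell(x, h, W, R, b):
    """One step of a single-unit linear-before-reset GRU.

    W, R hold input/recurrent weights for gates 'u' (update),
    'r' (reset) and 'c' (candidate); b holds biases, including a
    separate recurrent bias b['c_r'] applied before the reset gate,
    which is the defining trait of the linear-before-reset variant.
    """
    u = sigmoid(W["u"] * x + R["u"] * h + b["u"])
    r = sigmoid(W["r"] * x + R["r"] * h + b["r"])
    # "Linear before reset": the recurrent projection R*h + b is
    # computed first, then scaled by the reset gate
    # (a vanilla GRU instead computes R * (r * h)).
    c = math.tanh(W["c"] * x + r * (R["c"] * h + b["c_r"]) + b["c"])
    return u * h + (1.0 - u) * c
```

With all weights and biases zero, the gates sit at 0.5 and the candidate at 0, so the new hidden state is half the old one, which is a quick sanity check on the update equation.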

Usability
* Reduced primitive creation time by enabling the OpenCL pre-compiled headers feature available in recent versions of the OpenCL driver.
* Reduced the entitlement required on macOS with a hardened runtime to `allow-jit`.
* Extended documentation on runtime and build-time controls for JIT profiler support, the primitive cache, CPU dispatcher controls, and verbose mode.

Validation
* Introduced a validation mode for out-of-memory situations.

Thanks to the contributors
This release contains contributions from the project core team as well as Alberto Gonzalez Palomo AlbertoGP, Arthur Mitrano aaraujom, Ilia Taraban itaraban, Nathan John Sircombe nSircombe, Peter Caday petercad, Tsao Zhong CaoZhongZ. We would also like to thank everyone who asked questions and reported issues.

1.6rc

This is a release candidate for oneDNN v1.6. Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).

1.5.1

This is a patch release containing the following changes to v1.5:
* Fixed a potential crash related to the primitive cache (95eff24e7adae32fab844b3fb7dfb9f111441693, 00205d3816349826f72faf3280faeae9a818e563)
* Fixed correctness issue for Winograd convolution implementation on Intel Xeon Phi processors (f310ded959d009f9ffe70d2c8611da4fb272abc8)
* Fixed issue with tail processing in channel dimension for depthwise convolution (24eda67cd31fbfea4dd184a32577991ba6b9ea05)

1.5

Performance optimizations

Intel Architecture processors
* Improved performance of convolutional neural networks (CNN) related functionality with NHWC activations on all supported processors.
* Improved binary primitive performance for the broadcast case.
* Improved performance of eltwise primitive backpropagation and corresponding post-ops.
* Improved performance of pooling, resampling and LRN primitives.
* Improved performance of bfloat16 and fp32 weights gradient convolutions with groups.
* Improved performance of int8 convolutions with 1x1 kernel and spatial strides.

Intel Processor Graphics and Xe architecture-based Graphics
* Introduced initial optimizations for Xe architecture-based Graphics (code-named DG1 and Tiger Lake).
* Improved performance of convolutional neural networks (CNN) related functionality with NHWC activations.

Usability
* Introduced support for [Arm* 64-bit Architecture (AArch64) and other non-x86 processors](https://github.com/oneapi-src/oneDNN/blob/master/src/cpu/README.md).
* Separated primitive cache state from the engine, making it persistent.
* Introduced [API for managing primitive cache state](https://oneapi-src.github.io/oneDNN/group__dnnl__api__primitive__cache.html).
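The primitive cache keeps already-created (JIT-compiled) primitives around so repeated creation is cheap, and the linked API bounds its size. As a conceptual sketch only (not the library's implementation), a capacity-bounded LRU cache captures the behavior; all names below are illustrative:

```python
from collections import OrderedDict

class PrimitiveCache:
    """Illustrative capacity-bounded LRU cache, sketching the role of
    oneDNN's primitive cache (keys would be primitive descriptors,
    values the compiled primitives)."""

    def __init__(self, capacity=1024):
        self.capacity = capacity
        self._store = OrderedDict()

    def set_capacity(self, n):
        # Analogous in spirit to the capacity control in the linked API;
        # a capacity of 0 disables caching.
        self.capacity = n
        while len(self._store) > n:
            self._store.popitem(last=False)  # evict least recently used

    def get_or_create(self, key, create):
        if key in self._store:
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]
        value = create()                  # expensive: e.g. JIT compilation
        if self.capacity > 0:
            self._store[key] = value
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)
        return value
```

Making this state independent of the engine, as the release notes describe, is what lets cached primitives survive engine destruction and benefit later engines.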

Validation
* Introduced a validation mode to detect out-of-bounds access.

Thanks to the contributors
This release contains contributions from the project core team as well as Anuj Mittal anujm1, Arthur Mitrano aaraujom, Benjamin Fitch, Ilia Taraban itaraban, Leona C. indie, Nathan John Sircombe nSircombe, Sergey Nesterov cepera, Tsao Zhong CaoZhongZ, yuriFreeBSD yurivict. We would also like to thank everyone who asked questions and reported issues.

1.5rc

This is a release candidate for oneDNN v1.5. Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).

1.4

Performance optimizations
Intel Architecture processors
* Improved performance of int8 GEMM, RNN, inner product, matmul and GEMM-based convolution for systems with Intel SSE4.1 and Intel AVX support.
* Improved performance of eltwise backpropagation on all supported processors.
* Improved performance of bfloat16 inner product for processors with Intel DL Boost support.

Intel Processor Graphics
* Improved performance of the following functionality with NHWC activations:
  * f32 convolution forward propagation
  * f32 and f16 pooling
  * f32 and f16 batch normalization forward propagation
* Improved performance of f32 and f16 batch normalization forward propagation and binary primitives.

New functionality
* Introduced support for [LSTM cell with projection](https://oneapi-src.github.io/oneDNN/dev_guide_rnn.html) (LSTMP). The functionality is not implemented for Intel Processor Graphics.
* Introduced bfloat16 data type support for Softmax and LogSoftmax primitives.
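LSTMP extends the vanilla LSTM by passing the output hidden state through an extra projection, which lets the recurrent state be smaller than the cell state. As a single-unit, pure-Python sketch (not the primitive's API; the scalar `p` stands in for the projection matrix):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstmp_cell(x, h, c, W, R, b, p):
    """One step of a single-unit LSTM with projection (LSTMP).

    Identical to a vanilla LSTM except the hidden state is passed
    through the projection weight `p` before being emitted."""
    i = sigmoid(W["i"] * x + R["i"] * h + b["i"])    # input gate
    f = sigmoid(W["f"] * x + R["f"] * h + b["f"])    # forget gate
    g = math.tanh(W["g"] * x + R["g"] * h + b["g"])  # candidate update
    o = sigmoid(W["o"] * x + R["o"] * h + b["o"])    # output gate
    c_new = f * c + i * g
    h_new = p * (o * math.tanh(c_new))  # projection distinguishes LSTMP
    return h_new, c_new
```

With zero weights, the gates sit at 0.5 and the candidate at 0, so the cell state simply halves each step, which makes the recurrence easy to check by hand.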

Usability improvements
* Introduced [threadpool CPU runtime](https://oneapi-src.github.io/oneDNN/dev_guide_threadpool.html). The new runtime allows running multi-threaded computations with a user-provided threadpool implementation, for instance the Eigen threadpool.
* Extended the set of examples to cover all primitives supported by the library. New examples are included in the corresponding sections of the [Developer Guide](http://intel.github.io/mkl-dnn/index.html).
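The idea behind the threadpool runtime is inversion of control: the library hands work items back to a pool the *user* owns, rather than managing threads itself. A minimal Python analogue of that contract, with `UserThreadpool` and `parallel_for` as illustrative names (the real interface is a C++ class the linked guide describes):

```python
from concurrent.futures import ThreadPoolExecutor

class UserThreadpool:
    """Conceptual analogue of a user-provided threadpool: the caller
    (playing the library's role) submits closures, and the user's own
    pool decides how to run them. Eigen's pool plays this role in C++."""

    def __init__(self, num_threads=4):
        self.num_threads = num_threads
        self._pool = ThreadPoolExecutor(max_workers=num_threads)

    def parallel_for(self, n, fn):
        # Run fn(i) for i in [0, n) on the user-owned threads and wait
        # for completion before returning, as the runtime requires.
        futures = [self._pool.submit(fn, i) for i in range(n)]
        for fut in futures:
            fut.result()

# Usage: the "library" parallelizes a loop via the user's pool.
tp = UserThreadpool(num_threads=2)
out = [0] * 8
tp.parallel_for(8, lambda i: out.__setitem__(i, i * i))
```

The key design point is that thread creation, sizing, and affinity stay entirely under the application's control, so the library composes with whatever threading model the host process already uses.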

Thanks to the contributors
This release contains contributions from the project core team as well as Araujo Mitrano, Arthur aaraujom, Ilya Taraban itaraban, Nathan Sircombe nSircombe, and Sergey Nesterov cepera. We would also like to thank everyone who asked questions and reported issues.

