oneDNN

Latest version: v2025.0.0

Page 13 of 26

2.0beta10

This is a preview release for oneDNN v2.0. The release is based on [oneDNN v1.7](https://github.com/oneapi-src/oneDNN/releases/tag/v1.7).

Binary distribution of this software is available as [Intel(R) oneAPI Deep Neural Network Library](https://software.intel.com/en-us/oneapi/onednn) in [Intel(R) oneAPI](https://software.intel.com/en-us/oneapi).

Performance optimizations
* Intel Processor Graphics and Xe architecture-based Graphics:
  * Improved performance of convolution and matmul primitives.
  * Improved performance of int8 convolutions for the NHWC activations format.
* Intel Architecture processors:
  * Improved performance of primitives for the NHWC activations format.
  * Improved fp32 GEMM performance for small N.
  * Improved performance of int8 primitives for processors with Intel SSE4.1 instruction set support.
* AArch64-based processors:
  * Added support for Arm Performance Library (ArmPL). ArmPL provides an optimized GEMM implementation for AArch64.
  * Added support for [Arm Compute Library (ArmCL)](https://github.com/arm-software/ComputeLibrary). ArmCL provides optimized convolution implementations for AArch64.
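
Both AArch64 backends are enabled at build time through CMake. A hedged configure sketch — the option and variable names below follow later oneDNN documentation and should be treated as assumptions for this preview release:

```shell
# Hypothetical configure step for an AArch64 build that uses
# Arm Compute Library. ACL_ROOT_DIR must point at an existing
# ArmCL installation; DNNL_AARCH64_USE_ACL is the CMake knob
# documented in later oneDNN releases.
export ACL_ROOT_DIR=/path/to/ComputeLibrary
cmake .. -DDNNL_AARCH64_USE_ACL=ON -DCMAKE_BUILD_TYPE=Release
```

Without the option, the build falls back to the reference (non-ArmCL) AArch64 implementations.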

New Functionality
* Added support for IBM Z (s390x) and IBM POWER (powerpc64) architectures.
* Introduced RNN GRU for GPU.
* Introduced int8 RNN GRU for CPU.
* Introduced asymmetric quantization support for convolution, matmul, and inner product.
* Introduced [dilated pooling support](https://oneapi-src.github.io/oneDNN/group__dnnl__api__pooling.html).
* Extended matmul primitive to support multiple dimensions in batch and broadcast on CPU.
* (preview) Introduced [binary post-op](https://oneapi-src.github.io/oneDNN/dev_guide_attributes_post_ops.html) for (de)-convolution, pooling, eltwise, binary, inner product, and matmul.
* (preview) Extended the number of supported post-ops for primitives to 20.
* (preview) Introduced [reduction primitive](https://oneapi-src.github.io/oneDNN/dev_guide_reduction.html) for CPU. Together with post-ops, this functionality makes it possible to implement normalization.

Thanks to the contributors
This release contains contributions from the project core team as well as Ben Fitch, Brian Shi, David Edelsohn @edelsohn, Diana Bite @diaena, Moaz Reyad @moazreyad, Nathan John Sircombe @nSircombe, Niels Dekker @N-Dekker, Peter Caday @petercad, Pinzhen Xu @pinzhenx, @pkubaj, Tsao Zhong @CaoZhongZ. We would also like to thank everyone who asked questions and reported issues.

Known Issues and Limitations
* f32 convolutions may hang sporadically on Intel Processor Graphics Gen11. No workaround available.
* Pooling, batch normalization, and binary primitives may segfault when executed on Xe architecture-based graphics. No workaround available.
* oneDNN functionality may corrupt memory and crash the application on any GPU platform when using the Level Zero runtime in USM mode. As a workaround, use SYCL buffers or switch to the OpenCL runtime:
  `export SYCL_BE=PI_OPENCL`
* The matmul function may hang on GPU with the Level Zero runtime on Windows. As a workaround, use the OpenCL runtime:
  `export SYCL_BE=PI_OPENCL`
* Convolution may hang on GPU for shapes with 3 input channels. No workaround available.
* Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that the GPU device is an Intel one. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
* GPU kernels that run longer than a certain time (which depends on OS and system settings) may make the application appear to hang. Driver or system settings can be configured to disable this timeout and avoid hangs in DPC++ or OpenCL programs, including oneDNN examples:
  * On Linux* (see OpenCL™ Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux for details):
    `sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'`
  * On Windows* (see Timeout Detection and Recovery (TDR) Registry Keys for details): increase the `TdrDelay` and `TdrDdiDelay` values in the registry.
* See DPC++ limitations that impact the library as well.
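
Several of the workarounds above amount to forcing the DPC++ runtime onto its OpenCL backend instead of Level Zero. A minimal sketch (the application name is a placeholder, not part of oneDNN):

```shell
# Force the Intel oneAPI DPC++ runtime to the OpenCL backend.
export SYCL_BE=PI_OPENCL
echo "SYCL backend: $SYCL_BE"
# ./my_dpcpp_app   # placeholder for the actual DPC++ application
```

The variable only affects processes launched from this shell; `unset SYCL_BE` restores the default backend selection.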

2.0beta09

This is a preview release for oneDNN v2.0. This is a patch release based on v2.0-beta08.

Binary distribution of this software is available as [Intel(R) oneAPI Deep Neural Network Library](https://software.intel.com/en-us/oneapi/onednn) in [Intel(R) oneAPI](https://software.intel.com/en-us/oneapi).

Known Issues and Limitations
* int8 LSTM cells may produce incorrect results when dimensions exceed 16.
* oneDNN functions executed on GPU with the Level Zero driver in a Remote Desktop Connection session on Windows may produce incorrect results or hang the application. As a workaround, switch the Intel oneAPI DPC++ Runtime to the OpenCL backend by setting the environment variable `SYCL_BE=PI_OPENCL`.
* Average pooling backpropagation may produce incorrect results for 1D spatial tensors on Intel® Processor Graphics Gen9.
* Optimized primitives can crash or fail for huge spatial sizes on CPU.
* f32 convolutions may fail sporadically on Intel® Processor Graphics Gen11 due to a known issue in the Intel Graphics Compiler.
* Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that the GPU device is an Intel one. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
* GPU kernels that run longer than a certain time (which depends on OS and system settings) may make the application appear to hang. Driver or system settings can be configured to disable this timeout and avoid hangs in DPC++ or OpenCL programs, including oneDNN examples:
  * On Linux* (see OpenCL™ Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux for details):
    `sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'`
  * On Windows* (see Timeout Detection and Recovery (TDR) Registry Keys for details): increase the `TdrDelay` and `TdrDdiDelay` values in the registry.
* See DPC++ limitations that impact the library as well.

2.0beta08

This is a preview release for oneDNN v2.0. The release is based on [oneDNN v1.6](https://github.com/oneapi-src/oneDNN/releases/tag/v1.6).

Binary distribution of this software is available as [Intel(R) oneAPI Deep Neural Network Library](https://software.intel.com/en-us/oneapi/onednn) in [Intel(R) oneAPI](https://software.intel.com/en-us/oneapi).

Performance Optimizations

Intel Architecture processors
* Introduced initial int8 optimizations for the future Intel Xeon Scalable processor (code name Sapphire Rapids). The functionality is disabled by default and should be enabled via [CPU dispatcher control](https://oneapi-src.github.io/oneDNN/dev_guide_cpu_dispatcher_control.html).
* Improved matmul and inner product performance with bfloat16 data type.
* Improved performance of `tanh` algorithm for eltwise primitive and LSTM cells.
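
The CPU dispatcher control guide documents a `DNNL_MAX_CPU_ISA` environment variable for opting into ISAs that are disabled by default. A hedged sketch — the ISA name below is an assumption based on how later releases expose the Sapphire Rapids instructions, and the application name is a placeholder:

```shell
# Hypothetical opt-in: allow oneDNN to dispatch up to the AMX-capable ISA.
# The variable is read once, before the first dispatched call.
export DNNL_MAX_CPU_ISA=AVX512_CORE_AMX
echo "Max CPU ISA: $DNNL_MAX_CPU_ISA"
# ./my_onednn_app   # placeholder for an application linked against oneDNN
```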

Intel Processor Graphics and Xe architecture-based Graphics
* Improved performance of convolution, RNN, inner product, and matmul functionality for all supported GPUs.
* Improved performance of int8 convolutions with activations in NHWC format for Xe architecture-based Graphics (code named DG1 and Tiger Lake).

New Functionality
* Introduced support for processors based on IBM POWER architecture.
* Introduced Linear-Before-Reset GRU for GPU.
* Extended [eltwise primitive](https://oneapi-src.github.io/oneDNN/group__dnnl__api__eltwise.html) with support for `round` operation.

Usability
* Reduced primitive creation time by enabling the OpenCL pre-compiled headers feature in recent versions of the OpenCL driver.
* Reduced the entitlements required on macOS with hardened runtime to `allow-jit`.
* Extended documentation on runtime and build time controls for JIT profilers support, primitive cache, CPU dispatcher controls, and verbose mode.


Validation
* Introduced validation mode for out-of-memory situations.

Known Issues and Limitations
* RNN functionality does not work with the Level Zero GPU runtime. The workaround is to use the OpenCL GPU runtime by setting `SYCL_BE=PI_OPENCL` before running a DPC++ program.
* Optimized primitives can crash or fail for huge spatial sizes on CPU.
* f32 convolutions may fail sporadically on Intel® Processor Graphics Gen11 due to a known issue in the Intel Graphics Compiler.
* Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that the GPU device is an Intel one. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
* GPU kernels that run longer than a certain time (which depends on OS and system settings) may make the application appear to hang. Driver or system settings can be configured to disable this timeout and avoid hangs in DPC++ or OpenCL programs, including oneDNN examples:
  * On Linux* (see OpenCL™ Driver for Intel® HD, Iris™, and Iris™ Pro Graphics for Linux for details):
    `sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'`
  * On Windows* (see Timeout Detection and Recovery (TDR) Registry Keys for details): increase the `TdrDelay` and `TdrDdiDelay` values in the registry.
* See DPC++ limitations that impact the library as well.

2.0beta07

This is a preview release for oneDNN v2.0. The release is based on [oneDNN v1.5](https://github.com/oneapi-src/oneDNN/releases/tag/v1.5).

Binary distribution of this software is available as [Intel(R) oneAPI Deep Neural Network Library](https://software.intel.com/en-us/oneapi/onednn) in [Intel(R) oneAPI](https://software.intel.com/en-us/oneapi).

Performance optimizations

Intel Architecture processors
* Improved performance of convolutional neural network (CNN) related functionality with NHWC activations on all supported processors.
* Improved binary primitive performance for the broadcast case.
* Improved performance of eltwise primitive backpropagation and corresponding post-ops.
* Improved performance of pooling, resampling, and LRN primitives.
* Improved performance of bfloat16 and fp32 weight gradient convolutions with groups.
* Improved performance of int8 convolutions with 1x1 kernel and spatial strides.

Intel Processor Graphics and Xe architecture-based Graphics
* Introduced initial optimizations for Xe architecture-based Graphics (code named DG1 and Tiger Lake).
* Improved performance of convolutional neural networks (CNN) related functionality with NHWC activations.

New Functionality
* The Level Zero (L0) GPU runtime is used by default on the Windows* operating system. The OpenCL GPU runtime can still be used if the `SYCL_BE` environment variable is set to `PI_OPENCL` before running a DPC++ program.

Usability
* Introduced support for [Arm* 64-bit Architecture (AArch64) and other non-x86 processors](https://github.com/oneapi-src/oneDNN/blob/master/src/cpu/README.md).
* Separated the primitive cache state from the engine, making it persistent.
* Introduced [API for managing primitive cache state](https://oneapi-src.github.io/oneDNN/group__dnnl__api__primitive__cache.html).
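
The primitive cache can also be adjusted without code changes through a runtime environment variable. A sketch assuming the `DNNL_PRIMITIVE_CACHE_CAPACITY` control described in the linked API documentation (the application name is a placeholder):

```shell
# Disable the primitive cache entirely for one run (0 = no caching);
# any positive value caps the number of cached primitives instead.
export DNNL_PRIMITIVE_CACHE_CAPACITY=0
echo "Primitive cache capacity: $DNNL_PRIMITIVE_CACHE_CAPACITY"
# ./my_onednn_app   # placeholder for an application linked against oneDNN
```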

Validation
* Introduced validation mode to detect out-of-bounds accesses.

Known Limitations
* RNN functionality does not work with the Level Zero GPU runtime. The workaround is to use the OpenCL GPU runtime by setting `SYCL_BE=PI_OPENCL` before running a DPC++ program.
* Optimized primitives can crash or fail for huge spatial sizes on CPU.
* f32 convolutions may fail sporadically on Intel® Processor Graphics Gen11 due to a known issue in the Intel Graphics Compiler.
* Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that the GPU device is an Intel one. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
* GPU kernels that run longer than a certain time (which depends on OS and system settings) may make the application appear to hang. Configure the driver to disable this timeout and avoid hangs in DPC++ or OpenCL programs, including DNNL examples:
  * On Linux:
    `sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'`
  * On Windows: increase the `TdrDelay` and `TdrDdiDelay` values in the registry.

2.0beta06

This is a preview release for oneDNN v2.0. The release is based on [oneDNN v1.4](https://github.com/oneapi-src/oneDNN/releases/tag/v1.4).

Binary distribution of this software is available as [Intel(R) oneAPI Deep Neural Network Library](https://software.intel.com/en-us/oneapi/onednn) in [Intel(R) oneAPI](https://software.intel.com/en-us/oneapi).

New Functionality
* The Level Zero (L0) GPU runtime is used by default on Linux. The OpenCL GPU runtime can still be used if the `SYCL_BE` environment variable is set to `PI_OPENCL` before running a DPC++ program.

Known Limitations
* The Level Zero GPU runtime is not supported on Windows OS.
* RNN functionality does not work with the Level Zero GPU runtime. The workaround is to use the OpenCL GPU runtime by setting `SYCL_BE=PI_OPENCL` before running a DPC++ program.
* The Level Zero runtime is enabled by default. Make sure the Level Zero driver, including the level-zero-devel package, is properly installed by following the [installation guide](https://software.intel.com/content/www/us/en/develop/articles/installation-guide-for-intel-oneapi-toolkits.html). If you still encounter runtime issues, apply the workaround of setting `SYCL_BE=PI_OPENCL` before running a DPC++ program.
* Optimized primitives can crash or fail for huge spatial sizes on CPU.
* dnnl_sgemm, dnnl_gemm_u8s8u32, and inner product functionality do not support sizes exceeding 2^32.
* f32 convolutions may fail sporadically on Intel® Processor Graphics Gen11 due to a known issue in the Intel Graphics Compiler.
* Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that the GPU device is an Intel one. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
* GPU kernels that run longer than a certain time (which depends on OS and system settings) may make the application appear to hang. Configure the driver to disable this timeout and avoid hangs in DPC++ or OpenCL programs, including DNNL examples:
  * On Linux:
    `sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'`
  * On Windows: increase the `TdrDelay` and `TdrDdiDelay` values in the registry.

2.0beta05

This is a preview release for oneDNN v2.0. The release is a patch release based on [DNNL v2.0-beta04](https://github.com/intel/mkl-dnn/releases/tag/v2.0-beta04).

Binary distribution of this software is available as [Intel(R) oneAPI Deep Neural Network Library](https://software.intel.com/en-us/oneapi/onednn) in [Intel(R) oneAPI](https://software.intel.com/en-us/oneapi).

Known Limitations
* Weight gradient convolution for the bfloat16 data type with a 1D spatial tensor and dilation may produce incorrect results on CPU.
* Weight gradient convolution for the bfloat16 data type with a 2D spatial tensor and dilation may crash on Intel AVX512 systems.
* Optimized primitives can crash or fail for huge spatial sizes on CPU.
* dnnl_sgemm, dnnl_gemm_u8s8u32, and inner product functionality do not support sizes exceeding 2^32.
* Non-Intel GPUs are not supported. The library API allows creating a DNNL engine by index (the order of devices is determined by the SYCL runtime), and there is no check that the GPU device is an Intel one. For more control, users can create a DNNL engine by passing a SYCL device and context explicitly.
* Intel Processor Graphics Gen11 is not supported.
* GPU kernels that run longer than a certain time (which depends on OS and system settings) may make the application appear to hang. Configure the driver to disable this timeout and avoid hangs in DPC++ or OpenCL programs, including DNNL examples:
  * On Linux:
    `sudo bash -c 'echo N > /sys/module/i915/parameters/enable_hangcheck'`
  * On Windows: increase the `TdrDelay` and `TdrDdiDelay` values in the registry.
