oneDNN

1.2rc

This is a release candidate for DNNL v1.2. Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).

1.1.3

This is a patch release containing the following changes to v1.1.2:
* Fixed the mean and variance memory descriptors in layer normalization (65f19088b5ca2804699b0c73440c9949ebca6ffd)
* Fixed the layer normalization formula (c176cebaa1793718720b254613adac83a937710e)
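
For reference, a condensed sketch of the layer normalization definition the two fixes above relate to, following the dev guide (the exact indexing notation here is an assumption; ε is the epsilon parameter, γ and β the optional scale and shift):

```latex
% Per data point t, statistics are computed over the channel axis c = 1..C:
%   mu(t)     = (1/C) * sum_c src(t, c)
%   sigma2(t) = (1/C) * sum_c (src(t, c) - mu(t))^2
\mathrm{dst}(t, c) = \gamma(c) \cdot
    \frac{\mathrm{src}(t, c) - \mu(t)}{\sqrt{\sigma^2(t) + \varepsilon}}
    + \beta(c)
```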

1.1.2

This is a patch release containing the following changes to v1.1.1:
* Fixed threading over the spatial dimensions in bfloat16 batch normalization (017b6c93dc10b2f0e53199d29cf6c26daafc5417)
* Fixed a read-past-end-of-buffer error in int8 convolution (7d6f45e7e72882d2c0d9041e65fd64f132ec321b)
* Fixed the condition for dispatching optimized channel blocking in fp32 backward convolution on Intel Xeon Phi(TM) processors (846eba1c230a66a664ded18ba25e9468aaadd4bf)
* Fixed fp32 backward convolution for shapes with spatial strides over the depth dimension (002e3ab561556e12119be6b223b59fb1563908b5)
* Fixed softmax with zero sizes on GPU (936bff4803a0743ec6e956b0bd459a0f2c01b378)
* Fixed int8 deconvolution with dilation when `ih <= dh` (3e3bacba51f51e03bc2ae758800aead7d6876e79)
* Re-enabled the fp32 -> u8 reorder for RNN (a2c2507617edc06af45359670f11a353406342bf)
* Fixed segmentation fault in bfloat16 backward convolution from `kd_padding=0` computation (52d476c04bd8fe453d07934ca2a9834c87f6aafe)
* Fixed segmentation fault in bfloat16 forward convolution due to push/pop imbalance (4f6e3d57af9ba7501f65df768d0b2f91765582fc)
* Fixed library version for OS X build (0d850053c8b78728393f069593610c1d321444cf)
* Fixed padding by channels in concat (a265c7dad34dc0dd72089f4fa8a6cd1c55f75f8)
* Added full text of third party licenses and copyright notices to LICENSE file (79f204c76bc5c72f32a858285ae5fda593def0fb)
* Added separate README for binary packages (28f4c96d2626e36e7196eb53d9299f9b6bd70961)
* Fixed computing per-oc mask in RNN (ff3ffaba8c2739766ff44f8563d918673ebad994)
* Added workaround for number of cores calculation in Xbyak (301b088c106c844e5c7592ba183a361698e54208)

1.1.1

This is a patch release containing the following changes to v1.1:
* Fixed zero padding for memory formats with rank 3 and below (f97e1748552d36e8f35e1ad5a5d50bf1751c43cf)
* Fixed 'deprecated std::copy' warning with Microsoft C++ Compiler (ee276af2d13ead05458d55f6ddc1771d52516397)
* Fixed tail scaling for int8 inner product (f2b68c7d66be60dd4fc13af78a3b2cece1cd61a3)
* Fixed correctness issue for int8 GEMM with `N=1` (0dd5c13ff7d8efac73818952a4fc143fa2d4371e)
* Sum no longer overrides the data type for the destination memory descriptor when used with `any` (53019818512939394cf919c3b3bfe333c488a15c)
* Addressed the following corner cases in the CPU convolution implementation:
  * Fixed tail processing in int8 depthwise convolution (7711b77f9ad990d1e68a6f3076aadf9952b81c3d)
  * Fixed bias padding in bfloat16 depthwise convolution (0696ba6340ba4bdf1cd616a9613336da857cc7ca)
  * Fixed correctness issue in the s8s8 flavor of depthwise convolution (b614482db20248d974034bf66631b26924a15dbe)
  * Fixed correctness issue in the dilated convolution weight gradient implementation (c6ec0f95a29141112195e99173b4b83f8a3ab6d1)

1.1

Performance optimizations
* Improved performance with TBB threading, achieving performance comparable to OpenMP threading.
* Improved int8 and fp32 GEMM performance on systems with Intel AVX-512 and Intel VNNI support.
* Improved softmax performance for NHWC and corresponding blocked layouts.
* Improved RNN cell performance and reduced the dependency of RNN performance on compiler vectorization capabilities.
* Improved reorder performance for some shapes.

New functionality
* Introduced [layer normalization](http://intel.github.io/mkl-dnn/dev_guide_layer_normalization.html) and [binary](http://intel.github.io/mkl-dnn/dev_guide_binary.html) elementwise primitives support (CPU engine).
* Introduced swish (CPU and GPU engines) and gelu (GPU engine) activation support in the [elementwise primitive](http://intel.github.io/mkl-dnn/dev_guide_eltwise.html); see the sketch after this list.
* Introduced bfloat16 data type support in RNN cells (CPU engine).
* Introduced initial int8 and bfloat16 data types support for GPU functionality.
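
To illustrate the new swish activation, a minimal sketch against the DNNL v1.1 C++ API (the 2x16 fp32 tensor shape and `alpha = 1.0f` are illustrative assumptions, not from the release notes):

```cpp
// Minimal swish eltwise example with the DNNL v1.1 C++ API (a sketch).
#include <vector>
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    // Plain "nc" layout, fp32 data.
    memory::desc data_md({2, 16}, memory::data_type::f32,
            memory::format_tag::nc);
    std::vector<float> buf(2 * 16, 1.0f);
    memory src_mem(data_md, eng, buf.data());
    memory dst_mem(data_md, eng); // library-allocated destination

    // swish(x) = x * sigmoid(alpha * x); alpha = 1.0f is an assumption.
    eltwise_forward::desc d(prop_kind::forward_inference,
            algorithm::eltwise_swish, data_md, /*alpha=*/1.0f);
    eltwise_forward::primitive_desc pd(d, eng);

    eltwise_forward(pd).execute(s,
            {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_DST, dst_mem}});
    s.wait();
    return 0;
}
```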

Usability improvements
* TBB threading support is promoted to production quality.
* Introduced support for memory format `any` for [memory-bound primitives backpropagation](http://intel.github.io/mkl-dnn/memory_format_propagation_cpp.html). This mechanism allows matching the gradient memory format with the source and destination memory formats from the forward pass; the format-propagation pattern is sketched after this list.
* Changed default compiler flags to target the Intel SSE4.1 instruction set to make builds portable.
* (experimental) Introduced a [caching mechanism](http://intel.github.io/mkl-dnn/dev_guide_primitive_cache.html) that reduces primitive creation time for repeated primitive creation. The functionality is disabled by default and has to be enabled at compile time.
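
As an illustration of the format-propagation mechanism mentioned above, a minimal sketch against the DNNL v1.x C++ API (the shapes, fp32 data type, and forward convolution are illustrative assumptions; v1.1's addition extends `any` to backpropagation, where gradient tensors follow the same check-and-reorder pattern):

```cpp
// Format propagation: request format_tag::any, let the primitive pick
// layouts, then reorder user data only if the chosen layout differs.
#include "dnnl.hpp"

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    memory::dims src_dims = {1, 16, 13, 13};
    memory::dims wei_dims = {32, 16, 3, 3};
    memory::dims dst_dims = {1, 32, 13, 13};

    // format_tag::any lets the convolution choose optimal layouts.
    memory::desc src_md(src_dims, memory::data_type::f32, memory::format_tag::any);
    memory::desc wei_md(wei_dims, memory::data_type::f32, memory::format_tag::any);
    memory::desc dst_md(dst_dims, memory::data_type::f32, memory::format_tag::any);

    convolution_forward::desc d(prop_kind::forward_inference,
            algorithm::convolution_direct, src_md, wei_md, dst_md,
            /*strides=*/{1, 1}, /*padding_l=*/{1, 1}, /*padding_r=*/{1, 1});
    convolution_forward::primitive_desc pd(d, eng);

    // User data lives in plain nchw; reorder only if the primitive picked
    // a different (e.g. blocked) layout.
    memory user_src({src_dims, memory::data_type::f32,
            memory::format_tag::nchw}, eng);
    memory conv_src = user_src;
    if (pd.src_desc() != user_src.get_desc()) {
        conv_src = memory(pd.src_desc(), eng);
        reorder(user_src, conv_src).execute(s, user_src, conv_src);
    }
    s.wait();
    return 0;
}
```

The same check applies to the weights and destination descriptors queried from the primitive descriptor.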

Validation improvements
* Extended [benchdnn](https://github.com/intel/mkl-dnn/blob/master/tests/benchdnn/README.md) to cover all supported primitives.
* Introduced a robust validation method for RNN cells in benchdnn. The approach replaces activations with a linear function to make error accumulation more predictable and to decrease the number of false positives.
* Extended convolution test coverage.

Thanks to the contributors
This release contains contributions from many Intel Performance Libraries developers as well as Ilia Taraban, Jacek Czaja (jczaja), William Tambellini (WilliamTambellini), Tomasz Kalina, Mateusz Guziak, Daniel Haidachuk, Konstantin Basargin (basargin), Aaron Johnson (aaronjohnson), and Jeremy Wong (jrmwng). We would also like to thank everyone who asked questions and reported issues.

1.1rc

This is a release candidate for DNNL v1.1. Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).
