oneDNN

Latest version: v2025.0.0

0.20.1

This is a patch release containing the following changes to Intel MKL-DNN v0.20.0:
* Addressed a static initialization order issue in bf16 converters (aef88b7c233f48f8b945da310f1b973da31ad033)
* Fixed an out-of-bounds memory access in the LRN implementation for Intel AVX2 (1a5eca7bf7ca913d874421912370fc852ddfe986)

0.20

**Performance optimizations**
* Improved the performance of GEMM-based convolutions.
* Improved softmax performance.
* Added support for fusing arbitrary eltwise operations into GEMM-based convolutions and inner product.
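
Eltwise fusion avoids a separate pass over memory: the elementwise function (for example, ReLU) is applied in the GEMM epilogue, before each output value is stored, instead of re-reading the whole output tensor afterwards. A conceptual Python sketch of the idea (illustrative only, not MKL-DNN's API):

```python
def gemm_with_eltwise(a, b, eltwise):
    """Multiply matrices a (m x k) and b (k x n), applying `eltwise`
    to each output element before it is stored (the fused epilogue)."""
    rows, inner, cols = len(a), len(b), len(b[0])
    c = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = sum(a[i][k] * b[k][j] for k in range(inner))
            # Fused: the activation runs while the accumulator is still
            # "hot", so no second traversal of the output buffer is needed.
            c[i][j] = eltwise(acc)
    return c

relu = lambda x: max(x, 0.0)
out = gemm_with_eltwise([[1.0, -2.0]], [[1.0], [1.0]], relu)
```

An unfused version would run the full GEMM first and then loop over `c` again applying `eltwise`; fusion trades that extra memory traffic for a function call per element inside the kernel.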

**New functionality**
* Introduced bfloat16 data type support in reorders, (de-)convolution, pooling, batch normalization, local response normalization, eltwise, inner product, shuffle, sum, and concat. The implementation relies on new instructions targeting the future Intel Xeon Scalable processor (codenamed Cooper Lake). On processors with Intel AVX512 support, bfloat16 arithmetic is emulated.
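
The emulation path is possible because bfloat16 is simply the upper 16 bits of an IEEE float32: same sign bit and 8-bit exponent, with the mantissa cut to 7 bits. A minimal Python sketch of the conversion (truncation shown for brevity; production converters typically use round-to-nearest-even):

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    # Reinterpret the float32 bit pattern and keep the top 16 bits
    # (sign, 8-bit exponent, 7 mantissa bits). Truncation for brevity;
    # real converters usually apply round-to-nearest-even instead.
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits >> 16

def bf16_bits_to_fp32(b: int) -> float:
    # Widening back to float32 is exact: zero-fill the dropped mantissa bits.
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Round-tripping loses precision: 3.14159 becomes 3.140625.
roundtrip = bf16_bits_to_fp32(fp32_to_bf16_bits(3.14159))
```

Because bfloat16 keeps the full float32 exponent range, the format trades mantissa precision for range, which is why it suits deep learning workloads better than float16.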

**Thanks to the contributors**
This release contains contributions from many Intel Performance Libraries developers. We would also like to thank everyone who asked questions and reported issues.

0.20rc

This is a release candidate for Intel MKL-DNN v0.20. Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).

0.19

**Performance optimizations**
* Improved int8 convolution performance for small batch sizes.
* Improved performance of grouped convolutions when the number of channels per group is a multiple of 4.
* Improved the performance of GEMM-based convolutions.
* Improved the performance of RNN cells.
* Improved SGEMM performance for the Intel® AVX2 and Intel® AVX512 instruction sets.

**New functionality**
* Introduced int8 support in 1D convolution, deconvolution, inner product, and batch normalization.
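
int8 primitives operate on quantized tensors. A minimal sketch of symmetric int8 quantization, the scheme commonly used to prepare float data for int8 inference (illustrative helper names, not part of the library API):

```python
def quantize_int8(xs, scale):
    # Symmetric quantization: q = clamp(round(x / scale), -128, 127).
    # `scale` is chosen from the observed range of x, e.g. max(|x|) / 127.
    return [max(-128, min(127, round(x / scale))) for x in xs]

def dequantize_int8(qs, scale):
    # Map int8 values back to floats; precision lost to rounding and
    # clamping during quantization is not recovered.
    return [q * scale for q in qs]

q = quantize_int8([1.0, -1.0, 100.0], 0.5)   # 100.0 saturates to 127
restored = dequantize_int8(q, 0.5)
```

The scale factor is typically derived per tensor (or per channel) during calibration, so the int8 range covers the values actually seen at that point in the network.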

**Usability improvements**
* Added a CMake package configuration file.

**Thanks to the contributors**
This release contains contributions from many Intel Performance Libraries developers as well as Haitao Feng @fenghaitao, Klein Guillaume @guillaumekln, Alexander Grund @Flamefire, Rui Xia @harrysummer, and Shigeo Mitsunari @herumi. We would also like to thank everyone who asked questions and reported issues.

0.19rc

This is a release candidate for MKL-DNN v0.19. Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).

v1.0-pc2

This is preview candidate 2 for Intel MKL-DNN v1.0.
It introduces support for Intel(R) Processor Graphics and implements the changes announced in the [v1.0 RFC](https://github.com/intel/mkl-dnn/tree/rfc-api-changes-v1.0/doc/rfc/api-v1.0). Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).

0.18.1

This is a patch release containing the following changes to Intel MKL-DNN v0.18.0:
* Fixed a bug in the build system that broke transitive linking when MKL-DNN is used as a subproject (245b331e5ef4962f6bffdff2d207b185e362a58a)
* Fixed bias conversion in int8 GEMM-based convolution (9670998b82b3e5e1ddb1bf052654b39a890b28ca)


v1.0-pc

This is a preview candidate for MKL-DNN v1.0.
The preview candidate implements the changes announced in the [v1.0 RFC](https://github.com/intel/mkl-dnn/tree/rfc-api-changes-v1.0/doc/rfc/api-v1.0). Please provide feedback and report bugs in [GitHub issues](https://github.com/intel/mkl-dnn/issues).
