Performance optimizations
* Improved performance of RNN functionality.
* Improved performance of GEMM-based convolutions.
* Improved performance of backpropagation for strided convolutions on processors with Intel® AVX2 support.
* Improved performance of the `gemm_s8u8s32` and `gemm_s8s8s32` functions on processors with Intel® AVX-512 and Intel® DL Boost instruction sets.
* Improved inner product performance on processors with Intel AVX-512 and Intel DL Boost instruction sets.
* Improved performance of int8 convolutions and deconvolutions on processors with Intel AVX-512 and Intel DL Boost instruction sets.
New functionality
* Convolutions now support arbitrary elementwise operations as post-ops (see the sketch after this list).
* Introduced signed int8 support for the inner product primitive.
* Introduced int8 LSTM cell support.
* Introduced automatic dispatching between the direct and Winograd convolution algorithms.
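A minimal sketch of fusing an elementwise operation into a convolution through the `post_ops` / `primitive_attr` C++ API. The choice of `eltwise_tanh`, the scale, and the alpha/beta values are illustrative only, and the convolution descriptor setup is elided; check the `mkldnn.hpp` of your build for exact signatures.

```cpp
#include <mkldnn.hpp>

// Build a primitive_attr that fuses an elementwise op (here: tanh) into a
// convolution; the attr is passed when creating the convolution primitive_desc.
mkldnn::primitive_attr make_fused_eltwise_attr() {
    using namespace mkldnn;

    post_ops ops;
    ops.append_eltwise(
            1.0f,                     // scale applied to the eltwise output
            algorithm::eltwise_tanh,  // any supported eltwise algorithm
            0.0f, 0.0f);              // alpha / beta (ignored by tanh)

    primitive_attr attr;
    attr.set_post_ops(ops);
    return attr;

    // Usage (convolution descriptor and engine setup elided):
    //   auto conv_pd = convolution_forward::primitive_desc(conv_desc, attr, eng);
}
```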
API deprecations and breaking changes
* Previously deprecated APIs were removed:
- `relu` function
- `convolution_relu` function
- double precision scales support in sum
- `negative_slope` parameter in eltwise
- `omit_stats` flag in batch normalization
Usability improvements
* Added library version information to verbose output and to headers.
* Added information about detected instruction set to verbose output.
* Introduced the `mkldnn_version` function.
* Added APIs to override behaviors controlled via environment variables, including verbose mode and JIT dump (see the sketch after this list).
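A minimal sketch of the run-time version query and the programmatic overrides for the verbose and JIT-dump controls. It assumes the C API exposed in `mkldnn.h` (`mkldnn_version`, `mkldnn_set_verbose`, `mkldnn_set_jit_dump`) and a version struct with `major`/`minor`/`patch`/`hash` fields; verify the exact signatures against the headers of your build.

```cpp
#include <cstdio>
#include <mkldnn.h>

int main() {
    // Query the library version at run time instead of relying on header macros.
    const mkldnn_version_t *v = mkldnn_version();
    std::printf("mkldnn %d.%d.%d (commit %s)\n",
            v->major, v->minor, v->patch, v->hash);

    // Override the MKLDNN_VERBOSE and MKLDNN_JIT_DUMP environment variables
    // programmatically: enable primitive execution tracing and JIT code dumps.
    mkldnn_set_verbose(1);
    mkldnn_set_jit_dump(1);
    return 0;
}
```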
Thanks to the contributors
This release contains contributions from many Intel Performance Libraries developers as well as Ruslan Baratov @ruslo, Konstantin Basargin @basargin, Jacek Czaja @jczaja, Eugene Zhulenev @ezhulenev, Haitao Feng @fenghaitao, Yinghai Liu @yinghai, Masahiro Sakai @msakai, and Alexander Grund @Flamefire. We would also like to thank everyone who asked questions and reported issues.