## Performance optimizations
* Improved performance with TBB threading, achieving performance comparable to OpenMP threading.
* Improved int8 and fp32 GEMM performance on systems with Intel AVX-512 and Intel VNNI support.
* Improved softmax performance for NHWC and corresponding blocked layouts.
* Improved RNN cell performance and reduced the dependency of RNN performance on compiler vectorization capabilities.
* Improved reorder performance for some shapes.
## New functionality
* Introduced [layer normalization](http://intel.github.io/mkl-dnn/dev_guide_layer_normalization.html) and [binary](http://intel.github.io/mkl-dnn/dev_guide_binary.html) elementwise primitives support (CPU engine).
* Introduced swish (CPU and GPU engines) and gelu (GPU engine) activation support in [elementwise primitive](http://intel.github.io/mkl-dnn/dev_guide_eltwise.html).
* Introduced bfloat16 data type support in RNN cells (CPU engine).
* Introduced initial int8 and bfloat16 data type support for GPU functionality.
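For reference, the swish and gelu activations can be sketched in plain Python as below. This is a sketch of the math, not the library implementation; `alpha` is assumed to be the swish scale parameter as described in the eltwise primitive documentation:

```python
import math

def swish(x, alpha=1.0):
    # swish(x) = x * sigmoid(alpha * x)
    return x / (1.0 + math.exp(-alpha * x))

def gelu(x):
    # gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```

Both functions behave like the identity for large positive inputs and decay to zero for large negative inputs, which is what makes them useful smooth alternatives to ReLU.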
## Usability improvements
* TBB threading support is promoted to production quality.
* Introduced support for the memory format `any` for [memory-bound primitives backpropagation](http://intel.github.io/mkl-dnn/memory_format_propagation_cpp.html). This mechanism allows matching the gradient memory format with the source and destination memory formats from the forward pass.
* Changed default compiler flags to target Intel SSE4.1 instruction set to make builds portable.
* (experimental) Introduced a [caching mechanism](http://intel.github.io/mkl-dnn/dev_guide_primitive_cache.html) that reduces primitive creation time for repeated primitive creation. The functionality is disabled by default and has to be enabled at compile time.
## Validation improvements
* Extended [benchdnn](https://github.com/intel/mkl-dnn/blob/master/tests/benchdnn/README.md) to cover all supported primitives.
* Introduced a robust validation method for RNN cells in benchdnn. The approach replaces activations with a linear function to make error accumulation more predictable and to decrease the number of false positives.
* Extended convolution test coverage.
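The idea behind the linear-activation trick can be illustrated with a toy scalar recurrence. This is a simplified model of the approach, not benchdnn itself, and all names are hypothetical: with the nonlinearity replaced by the identity, the cell output is linear in its inputs, so the single-precision result stays within a predictable distance of a double-precision reference.

```python
import random
import struct

def to_f32(v):
    # Round a Python float (double) to float32 precision.
    return struct.unpack('f', struct.pack('f', v))[0]

def rnn_last_state(xs, w, u, act, round32=False):
    # Minimal scalar "RNN cell": h_t = act(w * x_t + u * h_{t-1}).
    h = 0.0
    for x in xs:
        h = act(w * x + u * h)
        if round32:
            h = to_f32(h)   # simulate single-precision accumulation
    return h

random.seed(42)
xs = [to_f32(random.uniform(-1, 1)) for _ in range(1000)]

# With the identity activation the recurrence is linear and, since
# |u| < 1, contractive: per-step rounding errors accumulate but never
# get amplified, so the float32 run tracks the double-precision
# reference within a bound that can be computed in advance.
ident = lambda v: v
ref = rnn_last_state(xs, 0.5, 0.9, ident)                # double precision
got = rnn_last_state(xs, 0.5, 0.9, ident, round32=True)  # simulated float32
assert abs(got - ref) < 1e-4
```

With a saturating activation such as tanh in place of the identity, the same comparison is much harder to bound tightly, which is why a naive threshold produces the false positives the release note mentions.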
## Thanks to the contributors
This release contains contributions from many Intel Performance Libraries developers as well as Ilia Taraban, Jacek Czaja @jczaja, William Tambellini @WilliamTambellini, Tomasz Kalina, Mateusz Guziak, Daniel Haidachuk, Konstantin Basargin @basargin, Aaron Johnson @aaronjohnson, and Jeremy Wong @jrmwng. We would also like to thank everyone who asked questions and reported issues.