## Announcements
* OpenMP will be disabled in future official builds (the build option will still be available). A NoOpenMP version of ONNX Runtime is now available with this release on [NuGet](https://nuget.org/packages/Microsoft.ML.OnnxRuntime.NoOpenMP) and [PyPI](https://pypi.org/project/onnxruntime/) for C/C++/C#/Python users.
* In the next release, the *MKL-ML*, *openblas*, and *jemalloc* build options will be removed, and the [Microsoft.ML.OnnxRuntime.MKLML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.MKLML/) NuGet package will no longer be published. We recommend that *MKL-ML* users switch to the Intel EPs. If you are using these options and identify issues switching to an alternative build, please [file an issue](https://github.com/microsoft/onnxruntime/issues) with details.
## Key Feature Updates
### General
* [ONNX 1.8](https://github.com/onnx/onnx/releases/tag/v1.8.0) support / opset 13
* New contrib ops: BiasSoftmax, MatMulIntegerToFloat, QLinearSigmoid, Trilu
* ORT Mobile is now compatible with NNAPI for accelerated model execution on Android devices
* Build support for Mac with Apple Silicon (CPU only)
* New dependency: flatbuffers
* Support for loading sparse tensor initializers in pruned models
* Support for setting the execution priority of a node
* Support for selection of cuDNN conv algorithms
* [BERT Model profiling tool](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/profiler.py)
### Performance
* New session option to disable denormal floating-point numbers on CPUs with SSE3 support (see the sketch below)
  * Eliminates unexpected performance degradation due to denormals without needing to retrain the model
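  A minimal sketch of enabling this from Python, assuming the `SessionOptions.add_session_config_entry` API and the config key `session.set_denormal_as_zero` (verify the exact key against the session options config keys header for your build):
  ```python
  import onnxruntime as ort

  so = ort.SessionOptions()
  # Flush denormal floating-point numbers to zero in ORT's thread pools.
  so.add_session_config_entry("session.set_denormal_as_zero", "1")
  sess = ort.InferenceSession("model.onnx", so)  # placeholder model path
  ```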
* Option to share initializers between sessions to improve memory utilization
  * Useful when several models that share the same set of initializers (except for the last few layers) are loaded in the same process
  * Eliminates wasteful memory usage when every model (session) creates a separate instance of the same initializer
  * Exposed by the AddInitializer API (see the sketch below)
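  A hypothetical sketch of sharing an initializer across two sessions via the Python binding (assuming `SessionOptions.add_initializer`; file, model, and initializer names are placeholders):
  ```python
  import numpy as np
  import onnxruntime as ort

  # Create the shared initializer once; the buffer must outlive all sessions using it.
  shared = np.load("shared_weights.npy")  # placeholder
  ortvalue = ort.OrtValue.ortvalue_from_numpy(shared)

  so = ort.SessionOptions()
  so.add_initializer("shared_weights", ortvalue)  # name must match the initializer in the models

  # Both sessions reference the same buffer instead of loading separate copies.
  sess_a = ort.InferenceSession("model_a.onnx", so)
  sess_b = ort.InferenceSession("model_b.onnx", so)
  ```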
* Transformer model optimizations
  * Longformer: LongformerAttention CUDA operator added
  * Support for BERT models exported from TensorFlow with 1 or 2 inputs
  * Python optimizer supports additional models: openai-GPT, ALBERT, and FlauBERT
* Quantization
  * Support for per-channel QuantizeLinear and DeQuantizeLinear (see the sketch below)
  * Support for LSTM quantization
  * Quantization performance improvements on ARM
  * CNN quantization performance optimizations, including u8s8 support and an NHWC transformer in QLinearConv
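  A minimal sketch of per-channel quantization with the Python quantization tool (parameter names such as `per_channel` and `weight_type` are assumptions; check `onnxruntime.quantization` for the exact signature in your version):
  ```python
  from onnxruntime.quantization import QuantType, quantize_dynamic

  # Dynamic quantization with per-channel weight quantization.
  quantize_dynamic(
      "model.onnx",        # placeholder input model path
      "model.quant.onnx",  # output model path
      per_channel=True,
      weight_type=QuantType.QUInt8,
  )
  ```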
* ThreadPool
  * Use `_mm_pause()` in the spin loop to improve performance and power consumption
### APIs and Packages
* Python - I/O Binding enhancements (see the sketch below)
  * [Usage Documentation](https://www.onnxruntime.ai/python/api_summary.html) (OrtValue and IOBinding sections)
  * Python binding for the `OrtValue` data structure
    * Exposes an interface to allocate memory on a CUDA-supported device and define its contents, so allocators from other libraries are no longer needed to allocate and manage CUDA memory used with ORT
    * Allows consuming ORT-allocated device memory as an `OrtValue` (see Scenario 4 in the IOBinding section of the documentation for an example)
  * `OrtValue` instances can be used to bind inputs/outputs, in addition to the existing interfaces that bind a piece of memory directly or via numpy arrays; this is particularly useful when binding ORT-allocated device memory
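  A minimal sketch of device-side I/O binding with `OrtValue` (requires a CUDA-enabled ORT package; the model path, shapes, and input/output names "X"/"Y" are placeholders):
  ```python
  import numpy as np
  import onnxruntime as ort

  sess = ort.InferenceSession("model.onnx")  # placeholder

  # Copy the input to CUDA device 0 once; ORT then reads it in place.
  x = np.random.rand(1, 3, 224, 224).astype(np.float32)
  x_dev = ort.OrtValue.ortvalue_from_numpy(x, "cuda", 0)

  # Let ORT allocate the output on the same device.
  y_dev = ort.OrtValue.ortvalue_from_shape_and_type((1, 1000), np.float32, "cuda", 0)

  io = sess.io_binding()
  io.bind_ortvalue_input("X", x_dev)
  io.bind_ortvalue_output("Y", y_dev)
  sess.run_with_iobinding(io)

  y = y_dev.numpy()  # copy the result back to host memory
  ```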
* C - float16 and bfloat16 support
* Windows ML
  * NuGet package now supports UWP applications targeting Windows Store deployment for both CPU and GPU
  * Minor API improvements:
    * Ability to bind `IIterable<Buffers>` as inputs and outputs
    * Ability to create `Tensor*` via multiple buffers
  * WindowsAI Redist now includes a statically linked C runtime package for additional deployment options
### Execution Providers
* DNNL EP Updates
  * DNNL updated from 1.1.1 to 1.7
* NNAPI EP Updates
  * Support for CNN models
  * Additional operator support - Resize/Flatten/Clip
* TensorRT EP Updates
  * Int8 quantization support (experimental)
  * Engine cache refactoring and improvements
  * General fixes and performance improvements
* OpenVINO EP Updates
  * OpenVINO 2021.1 support
  * OpenVINO EP now builds as a shared library
  * Multi-threaded inferencing support
  * fp16 input type support
  * Multi-device plugin support
  * Hetero plugin support
  * Build enabled on ARM64
* DirectML EP Updates (1.3.0 -> 1.4.0)
  * Uses the first public standalone release of the DirectML API, available via the [DirectML NuGet package](https://www.nuget.org/packages/Microsoft.AI.DirectML/)
  * General fixes and improvements
* nGraph EP has been removed; we recommend using the OpenVINO EP instead
## Additional notes
* VCRuntime2019 with OpenMP: pinning a process to NUMA node 1 forces execution to be single-threaded. A fix is in progress in VC++.
  * Workaround: place the VS2017 vcomp DLL side-by-side so that ORT uses the VS2017 version
* Pip version >=20.3 is required for use on macOS Big Sur (11.x)
* The destructor of OrtEnv is now non-trivial and may perform [DLL unloading](https://github.com/microsoft/onnxruntime/blob/rel-1.6.0/onnxruntime/core/session/ort_env.cc#L45). Do not call `ReleaseEnv` from `DllMain` or put `OrtEnv` in global variables; it is not safe to call `FreeLibrary` from `DllMain` ([reference](https://docs.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-freelibrary)).
* Some unit tests fail on Pascal GPUs. See: https://github.com/microsoft/onnxruntime/issues/5914
* If using the default CPU package (built with OpenMP), consider tuning the OpenMP settings to improve performance. By default, the number of threads used for OpenMP parallel regions is set to the number of logical CPUs. This may not be optimal on machines with hyper-threading; when CPUs are oversubscribed, 99th-percentile latency can be 10x higher. Setting the OMP_NUM_THREADS environment variable to the number of **physical** cores is a good starting point (see the sketch below). As noted in the Announcements, future official builds of ORT will be published without OpenMP.
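  A sketch of setting this from Python (a shell `export OMP_NUM_THREADS=4` before launching the process works equally well; the thread count is a placeholder):
  ```python
  import os

  # Must be set before onnxruntime is imported so OpenMP picks it up.
  os.environ["OMP_NUM_THREADS"] = "4"  # e.g. the number of physical cores

  import onnxruntime as ort
  sess = ort.InferenceSession("model.onnx")  # placeholder
  ```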
## Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[gwang-msft](https://github.com/gwang-msft), [snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [wangyems](https://github.com/wangyems), [yufenglee](https://github.com/yufenglee), [yuslepukhin](https://github.com/yuslepukhin), [tianleiwu](https://github.com/tianleiwu), [SherlockNoMad](https://github.com/SherlockNoMad), [tracysh](https://github.com/tracysh), [ryanlai2](https://github.com/ryanlai2), [askhade](https://github.com/askhade), [xadupre](https://github.com/xadupre), [liqunfu](https://github.com/liqunfu), [RandySheriffH](https://github.com/RandySheriffH), [jywu-msft](https://github.com/jywu-msft), [KeDengMS](https://github.com/KeDengMS), [pranavsharma](https://github.com/pranavsharma), [mrry](https://github.com/mrry), [ashbhandare](https://github.com/ashbhandare), [iK1D](https://github.com/iK1D), [RyanUnderhill](https://github.com/RyanUnderhill), [MaajidKhan](https://github.com/MaajidKhan), [wenbingl](https://github.com/wenbingl), [kit1980](https://github.com/kit1980), [weixingzhang](https://github.com/weixingzhang), [tlh20](https://github.com/tlh20), [suffiank](https://github.com/suffiank), [Craigacp](https://github.com/Craigacp), [smkarlap](https://github.com/smkarlap), [stevenlix](https://github.com/stevenlix), [zhanghuanrong](https://github.com/zhanghuanrong), [sfatimar](https://github.com/sfatimar), [ytaous](https://github.com/ytaous), [tiagoshibata](https://github.com/tiagoshibata), [fdwr](https://github.com/fdwr), [oliviajain](https://github.com/oliviajain), [alberto-magni](https://github.com/alberto-magni), [jcwchen](https://github.com/jcwchen), [mosdav](https://github.com/mosdav), [xzhu1900](https://github.com/xzhu1900), [wschin](https://github.com/wschin), [codemzs](https://github.com/codemzs), [duli2012](https://github.com/duli2012), [smk2007](https://github.com/smk2007), [natke](https://github.com/natke), [zhijxu-MS](https://github.com/zhijxu-MS), [manashgoswami](https://github.com/manashgoswami), [zhangxiang1993](https://github.com/zhangxiang1993), [faxu](https://github.com/faxu), [HectorSVC](https://github.com/HectorSVC), [take-cheeze](https://github.com/take-cheeze), [jingyanwangms](https://github.com/jingyanwangms), [chilo-ms](https://github.com/chilo-ms), [YUNQIUGUO](https://github.com/YUNQIUGUO), [jgbradley1](https://github.com/jgbradley1), [jessebenson](https://github.com/jessebenson), [martinb35](https://github.com/martinb35), [Andrews548](https://github.com/Andrews548), [souptc](https://github.com/souptc), [pengwa](https://github.com/pengwa), [liuziyue](https://github.com/liuziyue), [orilevari](https://github.com/orilevari), [BowenBao](https://github.com/BowenBao), [thiagocrepaldi](https://github.com/thiagocrepaldi), [jeffbloo](https://github.com/jeffbloo)