Onnxruntime

Latest version: v1.19.0

1.11.0

Not secure
* OpenCL _(in preview)_
* Introduced the OpenCL EP for use with mobile GPUs
* Available in `experimental/opencl` branch for users to try. Provide feedback through Issues and Discussions in the repo.
* README is available [here](https://github.com/microsoft/onnxruntime/blob/experimental/opencl/onnxruntime/core/providers/opencl/README.md).

Mobile
* Added general support for converting a model to NHWC layout at runtime
* The execution provider sets its preferred layout, and shared infrastructure in ORT ensures the nodes assigned to that execution provider are in that layout
* Added support for runtime optimization with minimal binary size impact
* Relevant optimizations are saved in the ORT format model for replay at runtime if applicable
* Added support for QDQ format models to the NNAPI EP
* Falls back to the CPU EP's QDQ handling, using runtime optimizations, if NNAPI is not available
* Includes updates to the ORT QDQ optimizers so they work better with mobile scenarios
* Added helpers to:
* Analyze whether a model can be used with the pre-built ORT Mobile package
* Update the ONNX opset so the model can be used with the pre-built package
* Convert dynamic inputs into fixed-size inputs so that the model can be used with NNAPI/CoreML (see the sketch after this list)
* Optimize a QDQ format model for use with ORT
* Added Android and iOS packages with full ORT builds
* These packages have additional support for the full set of opsets and ops for ONNX models at the cost of a larger binary size.
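As a rough illustration of the dynamic-to-fixed conversion, the sketch below pins a symbolic batch dimension using the `onnx` package directly; ORT's bundled helper scripts perform this kind of rewrite with more validation. The model paths and the choice of dimension are placeholders.

```python
import onnx

# Pin a symbolic leading dimension (e.g. "batch") to a fixed size so the
# model can be used with NNAPI/CoreML, which require static input shapes.
model = onnx.load("model.onnx")
for graph_input in model.graph.input:
    dim0 = graph_input.type.tensor_type.shape.dim[0]
    if dim0.dim_param:  # symbolic dimension -> replace with a concrete value
        dim0.dim_value = 1  # assigning dim_value clears dim_param (oneof field)
onnx.save(model, "model.fixed.onnx")
```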

Web
* Build option to create ONNX Runtime WebAssembly static library
* Support for concurrent creation of multiple inference sessions
* Upgraded emsdk to version 3.1.3 for more stable multi-threading, and enabled LTO for multi-threaded WebAssembly builds

Known issues
* When using tensor sequences/sparse tensors, the generated profile is not valid JSON. (Fixed in https://github.com/microsoft/onnxruntime/pull/10974)
* There is a bug in the quantization tool when using the percentile calibration algorithm (fixed in https://github.com/microsoft/onnxruntime/pull/10940). To work around it, apply the typo fix in the Python file.
* Mac M

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [edgchen1](https://github.com/edgchen1), [skottmckay](https://github.com/skottmckay), [yufenglee](https://github.com/yufenglee), [wangyems](https://github.com/wangyems), [yuslepukhin](https://github.com/yuslepukhin), [gwang-msft](https://github.com/gwang-msft), [iK1D](https://github.com/iK1D), [chilo-ms](https://github.com/chilo-ms), [fdwr](https://github.com/fdwr), [ytaous](https://github.com/ytaous), [RandySheriffH](https://github.com/RandySheriffH), [hanbitmyths](https://github.com/hanbitmyths), [chenfucn](https://github.com/chenfucn), [yihonglyu](https://github.com/yihonglyu), [ajindal1](https://github.com/ajindal1), [fs-eire](https://github.com/fs-eire), [souptc](https://github.com/souptc), [tianleiwu](https://github.com/tianleiwu), [YUNQIUGUO](https://github.com/YUNQIUGUO), [hariharans29](https://github.com/hariharans29), [oliviajain](https://github.com/oliviajain), [xadupre](https://github.com/xadupre), [ashari4](https://github.com/ashari4), [RyanUnderhill](https://github.com/RyanUnderhill), [jywu-msft](https://github.com/jywu-msft), [weixingzhang](https://github.com/weixingzhang), [baijumeswani](https://github.com/baijumeswani), [georgen117](https://github.com/georgen117), [natke](https://github.com/natke), [Craigacp](https://github.com/Craigacp), [jeffdaily](https://github.com/jeffdaily), [JingqiaoFu](https://github.com/JingqiaoFu), [zhanghuanrong](https://github.com/zhanghuanrong), [satyajandhyala](https://github.com/satyajandhyala), [smk2007](https://github.com/smk2007), [ryanlai2](https://github.com/ryanlai2), [askhade](https://github.com/askhade), [thiagocrepaldi](https://github.com/thiagocrepaldi), [jingyanwangms](https://github.com/jingyanwangms), [pengwa](https://github.com/pengwa), [scxiao](https://github.com/scxiao), [ashbhandare](https://github.com/ashbhandare), [BowenBao](https://github.com/BowenBao), [SherlockNoMad](https://github.com/SherlockNoMad), [sumitsays](https://github.com/sumitsays), [sfatimar](https://github.com/sfatimar), [mosdav](https://github.com/mosdav), [harshithapv](https://github.com/harshithapv), [liqunfu](https://github.com/liqunfu), [tiagoshibata](https://github.com/tiagoshibata), [gineshidalgo99](https://github.com/gineshidalgo99), [pranavsharma](https://github.com/pranavsharma), [jcwchen](https://github.com/jcwchen), [nkreeger](https://github.com/nkreeger), [xkszltl](https://github.com/xkszltl), [faxu](https://github.com/faxu), [suffiank](https://github.com/suffiank), [stevenlix](https://github.com/stevenlix), [jeffbloo](https://github.com/jeffbloo), [feihugis](https://github.com/feihugis)

1.10.0

Not secure
Announcements
* As noted in the [deprecation notice](https://github.com/microsoft/onnxruntime/blob/4daa14bc74b5378d5fcb0d6de063a9fa8bd42eac/onnxruntime/python/onnxruntime_inference_collection.py#L350) in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider.
e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider']) (see the example below)
* Python 3.6 support removed for Mac builds. Since Python 3.6 reached end-of-life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards
* Removed dependency on [optional-lite](https://github.com/martinmoene/optional-lite)
* Removed experimental Featurizers code
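For example, a minimal session setup under the new requirement might look like the following; the model path is a placeholder, and listing CPUExecutionProvider last keeps a fallback for nodes the GPU EP cannot take.

```python
import onnxruntime as ort

# EPs other than the default CPU EP must now be requested explicitly.
print(ort.get_available_providers())  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```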

General

* Support for plug-in custom thread creation and join functions to enable usage of external threads
* Optional type support from op set 15

Performance
* Introduced an indirect convolution method for QLinearConv with a symmetrically quantized filter, i.e., the filter type is int8 and the filter's zero point is 0. The method uses an indirect buffer instead of memcpying the original data, and avoids computing the sum of each output-image pixel for quantized Conv.
* X64: new kernels - including avx2, avx-vnni, avx512 and avx512-vnni - for general and depthwise quantized Conv.
* ARM64: new kernels for depthwise quantized Conv.
* Tensor shape optimization to avoid allocating heap memory in most cases - [9542](https://github.com/microsoft/onnxruntime/pull/9542)
* Added transpose optimizer to push and cancel transpose ops, significantly improving perf for models requiring layout transformation

API
* Python
* Following through on the [deprecation notice](https://github.com/microsoft/onnxruntime/blob/4daa14bc74b5378d5fcb0d6de063a9fa8bd42eac/onnxruntime/python/onnxruntime_inference_collection.py#L350) in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider.
e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
* C/C++
* New API to query CUDA stream to launch a custom kernel for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - [9141](https://github.com/microsoft/onnxruntime/pull/9141)
* Updated Invalid -> OrtInvalidAllocator
* Updated every item in OrtCudnnConvAlgoSearch to a safer global name
* WinML
* New APIs to create OrtValues from Windows platform specific ID3D12Resources by exposing DirectML Execution Provider specific APIs. These APIs allow DML to extend the C-API and provide EP specific extensions.
* OrtSessionOptionsAppendExecutionProviderEx_DML
* DmlCreateGPUAllocationFromD3DResource
* DmlFreeGPUAllocation
* DmlGetD3D12ResourceFromAllocation
* Bug fix: LearningModel::LoadFromFilePath in UWP apps

Packages
* Added Mac M1 Universal2 build support for a single binary that runs natively on both Apple silicon and Intel-based Macs. These are included in the official Nuget packages. ([build instructions](https://onnxruntime.ai/docs/build/inferencing.html#macos))
* Windows C API Symbols are now uploaded to [Microsoft symbol server](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/microsoft-public-symbols)
* [Nuget package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) now supports ARM64 Linux C#
* [Python GPU package](https://pypi.org/project/onnxruntime-gpu/) now includes both TensorRT and CUDA EPs. *Note: EPs need to be explicitly registered to ensure the correct provider is used. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure you have appropriate [TensorRT dependencies](https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements) and [CUDA dependencies](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) installed.*

Execution Providers
* TensorRT EP
* Python GPU release packages now include support for TensorRT 8.0. Enable TensorrtExecutionProvider by explicitly setting providers parameter when creating an InferenceSession. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
* Published [quantized BERT model example](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/quantization/nlp/bert/trt)
* OpenVINO EP
* Add support for OpenVINO 2021.4.x
* Auto Plugin support
* IO Buffer/Copy Avoidance Optimizations for GPU plugin
* Misc fixes
* DNNL EP
* Add Softmaxgrad op
* Add Transpose, Reshape, Pow and LeakyRelu ops
* Add DynamicQuantizeLinear op
* Add squeeze/unsqueeze ops
* DirectML EP
* [Update](https://github.com/microsoft/onnxruntime/pull/9765) DirectML.dll from [1.5.1](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.5.1) to [1.8.0](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.8.0)
* Support full precision uint64/int64 for [48](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-180) operators
* Add 8D for [7](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-160) more existing operators
* Add DynamicQuantizeLinear op
* Accept ID3D12Resource objects via the [C API](https://github.com/microsoft/onnxruntime/pull/9686)

Mobile
* Added Xamarin support to the ORT C Nuget packages
* Updated target frameworks in native package
* iOS and Android binaries now included in native package
* ORT format models now have a backwards-compatibility guarantee

Web
* Added WebAssembly SIMD support for the qgemm kernel to accelerate quantized models
* Upgraded existing WebGL kernels to the latest opset
* Optimized bundle size to support various production scenarios, such as WebAssembly only or WebGL only
---
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [gineshidalgo99](https://github.com/gineshidalgo99), [fs-eire](https://github.com/fs-eire), [gwang-msft](https://github.com/gwang-msft), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [jeffdaily](https://github.com/jeffdaily), [baijumeswani](https://github.com/baijumeswani), [fdwr](https://github.com/fdwr), [smk2007](https://github.com/smk2007), [suffiank](https://github.com/suffiank), [souptc](https://github.com/souptc), [RyanUnderhill](https://github.com/RyanUnderhill), [iK1D](https://github.com/iK1D), [yuslepukhin](https://github.com/yuslepukhin), [chilo-ms](https://github.com/chilo-ms), [satyajandhyala](https://github.com/satyajandhyala), [hanbitmyths](https://github.com/hanbitmyths), [thiagocrepaldi](https://github.com/thiagocrepaldi), [wschin](https://github.com/wschin), [tianleiwu](https://github.com/tianleiwu), [pengwa](https://github.com/pengwa), [xadupre](https://github.com/xadupre), [zhanghuanrong](https://github.com/zhanghuanrong), [SherlockNoMad](https://github.com/SherlockNoMad), [wangyems](https://github.com/wangyems), [RandySheriffH](https://github.com/RandySheriffH), [ashbhandare](https://github.com/ashbhandare), [tiagoshibata](https://github.com/tiagoshibata), [yufenglee](https://github.com/yufenglee), [mindest](https://github.com/mindest), [sumitsays](https://github.com/sumitsays), [MaajidKhan](https://github.com/MaajidKhan), [gramalingam](https://github.com/gramalingam), [tracysh](https://github.com/tracysh), [georgen117](https://github.com/georgen117), [jywu-msft](https://github.com/jywu-msft), [sfatimar](https://github.com/sfatimar), [martinb35](https://github.com/martinb35), [nkreeger](https://github.com/nkreeger), [ytaous](https://github.com/ytaous), [ashari4](https://github.com/ashari4), [stevenlix](https://github.com/stevenlix), [chandru-r](https://github.com/chandru-r), [jingyanwangms](https://github.com/jingyanwangms), [mosdav](https://github.com/mosdav), [raviskolli](https://github.com/raviskolli), [faxu](https://github.com/faxu), [liqunfu](https://github.com/liqunfu), [kit1980](https://github.com/kit1980), [weixingzhang](https://github.com/weixingzhang), [pranavsharma](https://github.com/pranavsharma), [jcwchen](https://github.com/jcwchen), [chenfucn](https://github.com/chenfucn), [BowenBao](https://github.com/BowenBao), [jeffbloo](https://github.com/jeffbloo)

1.9.1

This is a patch release on 1.9.0 with the following fixes:

- Microsoft.AI.MachineLearning NuGet Package Fixes
- Bug fix for the issue that fails GPU execution if the executable is on a path containing Unicode characters - [9229](https://github.com/microsoft/onnxruntime/pull/9229).
- Bug fix for the NuGet package to be installed on UWP apps with 1.9 - [9182](https://github.com/microsoft/onnxruntime/pull/9182).
- Bug fix for the OpenVINO EP Python API - [9166](https://github.com/microsoft/onnxruntime/pull/9166).
- Bump up TVM version for NUPHAR EP - [9159](https://github.com/microsoft/onnxruntime/pull/9159).
- Fixed build issue for iOS 11 and earlier versions - [9036](https://github.com/microsoft/onnxruntime/pull/9036).

1.8.0

Announcements
* Builds will require a C++ 17 compiler
* The GPU build will be updated to CUDA 11.1

General
* ONNX opset 14 support - new and updated operators from the [ONNX 1.9 release](https://github.com/onnx/onnx/releases/tag/v1.9.0)
* Dynamically loadable CUDA execution provider
* Allows a single build to work for both CPU and GPU (excludes Python packages)
* [Profiler tool](http://www.onnxruntime.ai/docs/how-to/tune-performance.html#profiling-and-performance-report) now includes information on threadpool usage (see the example after this list)
* multi-threading preparation time
* multi-threading run time
* multi-threading wait time
* *[Experimental]* [onnxruntime-extensions package](http://pypi.org/project/onnxruntime-extensions)
* Crowd-sourced library of common/shareable custom operator implementations that can be loaded and run with ONNX Runtime; community contributions are welcome! - [microsoft/onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions)
* Currently includes mostly ops and tokenizers for string operations (full list [here](https://github.com/microsoft/onnxruntime-extensions/tree/main/operators))
* Tutorials to export and load custom ops from onnxruntime-extensions: [TensorFlow](https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/tf2onnx_custom_ops_tutorial.ipynb), [PyTorch](https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/pytorch_custom_ops_tutorial.ipynb)
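A minimal sketch of generating a profile that includes the new threadpool information; the model path and input shape are placeholders for whatever the model expects.

```python
import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()
so.enable_profiling = True  # emit a JSON trace viewable in chrome://tracing

sess = ort.InferenceSession("model.onnx", so, providers=["CPUExecutionProvider"])
x = np.zeros((1, 3, 224, 224), dtype=np.float32)  # placeholder input
sess.run(None, {sess.get_inputs()[0].name: x})

print(sess.end_profiling())  # path of the generated profile JSON
```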
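And a sketch of loading the onnxruntime-extensions custom ops into a session, assuming a model that uses one of the package's string operators; the model path is a placeholder.

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path  # pip install onnxruntime-extensions

so = ort.SessionOptions()
so.register_custom_ops_library(get_library_path())  # register the custom-op shared library

sess = ort.InferenceSession("model_with_custom_ops.onnx", so,
                            providers=["CPUExecutionProvider"])
```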

Training
* [torch-ort](https://pypi.org/project/torch-ort/) package released as the ONNX Runtime backend in PyTorch
* [onnxruntime-training-gpu](https://pypi.org/project/onnxruntime-training) and [onnxruntime-training-rocm](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_rocm42.html) packages now available for distributed training on NVIDIA and AMD GPUs
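A minimal sketch of switching a PyTorch model onto the ORT backend with torch-ort; the wrapped module here is an arbitrary placeholder.

```python
import torch
from torch_ort import ORTModule  # pip install torch-ort

model = ORTModule(torch.nn.Linear(784, 10))  # forward/backward now run through ORT
loss = model(torch.randn(32, 784)).sum()
loss.backward()
```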

Mobile
* Official package now available
* [Pre-built Android and iOS packages](https://onnxruntime.ai/docs/how-to/mobile/overview.html#pre-built-package) with support for selected operators and data types
* Objective-C API for iOS in preview
* Expanded operators supported by NNAPI (Android) and CoreML (iOS) execution providers
* All operators in the ai.onnx domain now support type reduction
* Create an ORT format model with the `--enable_type_reduction` flag, and perform a minimal build with the `--enable_reduced_operator_type_support` flag
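As a sketch, the conversion step can be driven from Python via the bundled converter script (script module name per the ORT Mobile docs; the model path is a placeholder):

```python
import subprocess
import sys

# Produce an ORT format model, recording only the ops/types the model actually
# uses so that a minimal build can exclude everything else.
subprocess.run(
    [sys.executable, "-m", "onnxruntime.tools.convert_onnx_models_to_ort",
     "--enable_type_reduction", "model.onnx"],
    check=True,
)
```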

ORT Web
* New [ONNX Runtime Javascript API](https://github.com/microsoft/onnxruntime/tree/master/js#onnxruntime-web)
* ONNX Runtime Web package
* Support WebAssembly and WebGL for CPU and GPU
* Support Web Worker based multi-threaded WebAssembly backend
* Supports ORT model format
* Improved WebGL performance

Performance
* Memory footprint reduction through shared pre-packed weights for shared initializers
* Pre-packing refers to weights that are pre-processed at model load time
* Allows pre-packed weights of shared initializers to also be shared between sessions, preserving memory savings from using shared initializers
* Memory footprint reduction through arena shrinkage
* By default, the memory arena doesn't shrink; it holds onto allocated memory indefinitely. This feature exposes a RunOption that scans the arena and potentially returns unused memory to the system after the end of a Run. It is particularly useful when running a dynamic-shape model that occasionally processes an outlier inference request requiring a large amount of memory. If the shrinkage option is invoked as part of such Runs, the memory that was required for that Run is not held indefinitely by the memory arena.
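A minimal sketch of invoking the shrinkage option for the CPU arena; the config key follows onnxruntime_run_options_config_keys.h, and the model path and input are placeholders.

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

ro = ort.RunOptions()
# After this Run ends, scan the CPU arena and return unused memory to the system.
ro.add_run_config_entry("memory.enable_memory_arena_shrinkage", "cpu:0")

x = np.zeros((1, 3, 2048, 2048), dtype=np.float32)  # outlier-sized request
sess.run(None, {sess.get_inputs()[0].name: x}, ro)
```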

* Quantization
* Native support of Quantize-Dequantize (QDQ) format for CPU (see the example after this list)
* Support for Concat, Transpose, GlobalAveragePool, AveragePool, Resize, Squeeze
* Improved performance on high-end ARM devices by leveraging dot-product instructions
* Improved performance for batched quant GEMM with optimized multi-threading logic
* Per-column quantization for MatMul
* Transformers
* GPT-2 and beam search integration ([example](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2-OneStepSearch_OnnxRuntime_CPU.ipynb))
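For reference, a one-line dynamic quantization using the onnxruntime.quantization tooling; static QDQ quantization follows the same pattern with quantize_static plus a calibration data reader. Paths are placeholders.

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Quantize weights to signed int8; activations are quantized dynamically at runtime.
quantize_dynamic("model.onnx", "model.quant.onnx", weight_type=QuantType.QInt8)
```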

APIs
* WinML
* New native WinML API SetIntraOpThreadSpinning for toggling IntraOp thread spin behavior. When enabled, and when there is no current workload, IntraOp threads will continue to spin for some additional time while waiting for additional work. This can result in better performance for the current workload, but may impact the performance of other unrelated workloads. This toggle is enabled by default.
* ORT Inferencing
* The following APIs have been added to this release. Please check the [API documentation](http://www.onnxruntime.ai/docs/reference/api/c-api.html#api-reference) for information.
* KernelInfoGetAttributeArray_float
* KernelInfoGetAttributeArray_int64
* CreateArenaCfgV2
* AddRunConfigEntry
* CreatePrepackedWeightsContainer
* PrepackedWeightsContainer
* CreateSessionWithPrepackedWeightsContainer
* CreateSessionFromArrayWithPrepackedWeightsContainer

Execution Providers
* TensorRT
* Added support for TensorRT EP configuration using session options instead of environment variables (see the sketch after this list).
* Added support for DLA on Jetson Xavier (AGX, NX)
* General bug fixes and quality improvements.
* OpenVINO
* Added support for OpenVINO 2021.3
* Removed support for OpenVINO 2020.4
* Added support for Loading/Saving of Blobs on MyriadX devices to avoid expensive model blob compilation at runtime.
* DirectML
* Supports ARM/ARM64 architectures now in WinML and ONNX Runtime NuGet packages.
* Support for 8-dimensional tensors for: BatchNormalization, Cast, Join, LpNormalization, MeanVarianceNormalization, Padding, Tile, TopK.
* Substantial performance improvements for several operators.
* Resize nearest_mode "floor" and "round_prefer_ceil".
* Fusion activations for: Conv, ConvTranspose, BatchNormalization, MeanVarianceNormalization, Gemm, MatMul.
* Decomposes unsupported QLinearSigmoid operation.
* Removes strided 64-bit emulation in Cast.
* Allows empty shapes on constant CPU inputs.
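A sketch of session-option style TensorRT EP configuration from Python; the option keys shown follow the current TensorRT EP docs and should be treated as indicative, and the model path is a placeholder.

```python
import onnxruntime as ort

trt_options = {
    "trt_max_workspace_size": str(2 << 30),  # 2 GiB workspace
    "trt_fp16_enable": "1",                  # allow FP16 kernels
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("TensorrtExecutionProvider", trt_options),
               "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)
```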


Known issues

* This release has an issue that may result in segmentation faults when deployed on Intel 12th Gen processors with a hybrid architecture of Performance and Efficiency cores (P-cores and E-cores). **This has been fixed in ORT 1.9.**
* The CUDA build of this release has a regression: memory utilization increases significantly compared to previous releases. A fix will be released shortly as part of the 1.8.1 patch. An incomplete list of issues where this was reported: #8287, #8171, #8147.
* The GPU part of the source code is not compatible with:
- Visual Studio 2019 16.10.0 (released May 25, 2021); 16.9.x is fine
- clang 12
* The CPU part of the source code is not compatible with:
- VS 2017 (https://github.com/microsoft/onnxruntime/issues/7936); until this is fixed, please use VS 2019 instead
- GCC 11 (see #7918)
* C# OpenVINO EP is broken (#7951)
* Python and Windows only: if your cuDNN DLLs are not in CUDA's installation directory, you need to manually set the "CUDNN_HOME" variable; just putting them in %PATH% is not enough (#7965)
* The onnxruntime-win-gpu-x64-1.8.0.zip on this page is missing important DLLs; please don't use it.

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

[snnn](https://github.com/snnn), [gwang-msft](https://github.com/gwang-msft), [baijumeswani](https://github.com/baijumeswani), [fs-eire](https://github.com/fs-eire), [edgchen1](https://github.com/edgchen1), [zhanghuanrong](https://github.com/zhanghuanrong), [yufenglee](https://github.com/yufenglee), [thiagocrepaldi](https://github.com/thiagocrepaldi), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [weixingzhang](https://github.com/weixingzhang), [tianleiwu](https://github.com/tianleiwu), [SherlockNoMad](https://github.com/SherlockNoMad), [ashbhandare](https://github.com/ashbhandare), [tracysh](https://github.com/tracysh), [satyajandhyala](https://github.com/satyajandhyala), [liqunfu](https://github.com/liqunfu), [iK1D](https://github.com/iK1D), [RandySheriffH](https://github.com/RandySheriffH), [suffiank](https://github.com/suffiank), [hanbitmyths](https://github.com/hanbitmyths), [wangyems](https://github.com/wangyems), [askhade](https://github.com/askhade), [stevenlix](https://github.com/stevenlix), [chilo-ms](https://github.com/chilo-ms), [smk2007](https://github.com/smk2007), [kit1980](https://github.com/kit1980), [codemzs](https://github.com/codemzs), [raviskolli](https://github.com/raviskolli), [pranav-prakash](https://github.com/pranav-prakash), [chenfucn](https://github.com/chenfucn), [xadupre](https://github.com/xadupre), [gramalingam](https://github.com/gramalingam), [harshithapv](https://github.com/harshithapv), [oliviajain](https://github.com/oliviajain), [xzhu1900](https://github.com/xzhu1900), [ytaous](https://github.com/ytaous), [MaajidKhan](https://github.com/MaajidKhan), [RyanUnderhill](https://github.com/RyanUnderhill), [mrry](https://github.com/mrry), [orilevari](https://github.com/orilevari), [jingyanwangms](https://github.com/jingyanwangms), [sfatimar](https://github.com/sfatimar), [KeDengMS](https://github.com/KeDengMS), [jywu-msft](https://github.com/jywu-msft), [souptc](https://github.com/souptc), [adtsai](https://github.com/adtsai), [tlh20](https://github.com/tlh20), [yuslepukhin](https://github.com/yuslepukhin), [duli2012](https://github.com/duli2012), [pranavsharma](https://github.com/pranavsharma), [faxu](https://github.com/faxu), [georgen117](https://github.com/georgen117), [jeffbloo](https://github.com/jeffbloo), [Tixxx](https://github.com/Tixxx), [wschin](https://github.com/wschin), [YUNQIUGUO](https://github.com/YUNQIUGUO), [tiagoshibata](https://github.com/tiagoshibata), [martinb35](https://github.com/martinb35), [alberto-magni](https://github.com/alberto-magni), [ryanlai2](https://github.com/ryanlai2), [Craigacp](https://github.com/Craigacp), [suryasidd](https://github.com/suryasidd), [fdwr](https://github.com/fdwr), [jcwchen](https://github.com/jcwchen), [neginraoof](https://github.com/neginraoof), [natke](https://github.com/natke), [BowenBao](https://github.com/BowenBao)

1.9.0

Not secure
Announcements
* GCC version < 7 is no longer supported
* CMAKE_SYSTEM_PROCESSOR needs to be set when cross-compiling on Linux because pytorch cpuinfo was introduced as a dependency for ARM big.LITTLE support. Set it to the output of `uname -m` on your target device.

General
* ONNX 1.10 support
* opset 15
* ONNX IR 8 (SparseTensor type and model-local FunctionProtos; the Optional type is not yet fully supported in this release)
* Improved documentation of [C/C++ APIs](https://onnxruntime.ai/docs/api/c/)
* IBM Power support
* WinML - DLL dependency fix supports learning models on Windows 8.1
* Support for sub-building [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) and statically linking into onnxruntime binary for custom builds
* Added the `--use_extensions` option to run models with custom operators implemented in onnxruntime-extensions


APIs
* Registration of a custom allocator for sharing between multiple sessions. (See RegisterAllocator and UnregisterAllocator APIs in onnxruntime_c_api.h)
* SessionOptionsAppendExecutionProvider_TensorRT API is deprecated; use SessionOptionsAppendExecutionProvider_TensorRT_V2
* New APIs: SessionOptionsAppendExecutionProvider_TensorRT_V2, CreateTensorRTProviderOptions, UpdateTensorRTProviderOptions, GetTensorRTProviderOptionsAsString, ReleaseTensorRTProviderOptions, EnableOrtCustomOps, RegisterAllocator, UnregisterAllocator, IsSparseTensor, CreateSparseTensorAsOrtValue, FillSparseTensorCoo, FillSparseTensorCsr, FillSparseTensorBlockSparse, CreateSparseTensorWithValuesAsOrtValue, UseCooIndices, UseCsrIndices, UseBlockSparseIndices, GetSparseTensorFormat, GetSparseTensorValuesTypeAndShape, GetSparseTensorValues, GetSparseTensorIndicesTypeShape, GetSparseTensorIndices

Performance and quantization
* Performance improvement on ARM
* Added an S8S8 (signed int8, signed int8) matmul kernel. This avoids extending uint8 to int16, for better performance on ARM64 without dot-product instructions
* Expanded GEMM udot kernel to 8x8 accumulator
* Added sgemm and qgemm optimized kernels for ARM64EC
* Operator improvements
* Improved performance for quantized operators: DynamicQuantizeLSTM, QLinearAvgPool
* Added new quantized operator QGemm for quantizing Gemm directly
* Fused HardSigmoid and Conv
* Quantization tool - subgraph support
* Transformers tool improvements
* Fused Attention for BART encoder and Megatron GPT-2
* Integrated mixed precision ONNX conversion and parity test for GPT-2
* Updated graph fusion for embed layer normalization for BERT
* Improved symbolic shape inference for operators: Attention, EmbedLayerNormalization, Einsum and Reciprocal

Packages
* Official ORT GPU packages (except Python) now include both CUDA and TensorRT Execution Providers.
* Python packages will be updated in the next release. Please note that EPs should be explicitly registered to ensure the correct provider is used.
* GPU packages are built with CUDA 11.4 and should be compatible with 11.x on systems with the minimum required driver version. See: [CUDA minor version compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/#minor-version-compatibility)
* Pypi
* ORT + DirectML Python packages now available: [onnxruntime-directml](https://pypi.org/project/onnxruntime-directml/)
* GPU package can be used on both CPU-only and GPU machines
* Nuget
* C#: Added support for using netstandard2.0 as a target framework
* Windows symbol (PDB) files are no longer included in the Nuget package, reducing the size of the binary Nuget package by 85%. To download the symbols, please see the release artifacts on GitHub.

Execution Providers
* CUDA EP
* Framework improvements that boost CUDA performance of subgraph heavy models (8642, 8702)
* Support for sequence ops for improved performance for models using sequence type
* Kernel perf improvements for Pad and Upsample (up to 4.5x faster)

* TensorRT EP
* Added support for TensorRT 8.0 (x64 Windows/Linux, ARM Jetson), which includes new TensorRT explicit-quantization features (ONNX Q/DQ support)
* General fixes and quality improvements
* OpenVINO EP
* Added support for OpenVINO 2021.4
* DirectML EP
* Bug fix for Identity with non-float inputs affecting DynamicQuantizeLinear ONNX backend test

ORT Web
* WebAssembly
* SIMD (Single Instruction, Multiple Data) support
* Option to load WebAssembly from worker thread to avoid blocking main UI thread
* wasm file path override
* WebGL
* Simpler workflow for WebGL kernel implementation
* Improved performance with Conv kernel enhancement

ORT Mobile
* Added more [example mobile apps](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/mobile)
* CoreML and NNAPI EP enhancements
* Reduced peak memory usage when initializing a session with an ORT format model provided as bytes
* Enhanced partitioning to improve performance when using NNAPI and CoreML
* Reduce number of NNAPI/CoreML partitions required
* Add ability to force usage of CPU for post-processing in SSD models
* Improves performance by avoiding expensive device copy to/from NPU for cheap post-processing section of the model
* Changed to using xcframework in the iOS package
* Supports usage of arm64 iPhone simulator on Mac with Apple silicon

ORT Training
* Expanded supported input formats to include dictionaries and lists
* Enable user defined autograd functions
* Support for fallback to PyTorch for execution
* Added support for deterministic compute to enable reproducibility with ORTModule
* Added DebugOptions and LogLevels to the ORTModule API to improve debuggability (see the sketch after this list)
* Improvements and additions to kernels/gradients: Concat, Split, MatMul, ReluGrad, PadOp, Tile, BatchNormInternal
* Support for ROCm 4.3.1 on AMD GPU
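A sketch of the DebugOptions hook, with names as exposed by the ortmodule package; treat the exact signature as indicative rather than normative.

```python
import torch
from onnxruntime.training.ortmodule import DebugOptions, LogLevel, ORTModule

# Save the exported ONNX graphs and raise log verbosity while debugging.
opts = DebugOptions(log_level=LogLevel.INFO, save_onnx=True, onnx_prefix="debug_model")
model = ORTModule(torch.nn.Linear(10, 2), opts)
out = model(torch.randn(4, 10))
```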

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[edgchen1](https://github.com/edgchen1), [gwang-msft](https://github.com/gwang-msft), [tianleiwu](https://github.com/tianleiwu), [fs-eire](https://github.com/fs-eire), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [baijumeswani](https://github.com/baijumeswani), [RyanUnderhill](https://github.com/RyanUnderhill), [iK1D](https://github.com/iK1D), [souptc](https://github.com/souptc), [nkreeger](https://github.com/nkreeger), [liqunfu](https://github.com/liqunfu), [pengwa](https://github.com/pengwa), [SherlockNoMad](https://github.com/SherlockNoMad), [wangyems](https://github.com/wangyems), [chilo-ms](https://github.com/chilo-ms), [thiagocrepaldi](https://github.com/thiagocrepaldi), [KeDengMS](https://github.com/KeDengMS), [suffiank](https://github.com/suffiank), [oliviajain](https://github.com/oliviajain), [chenfucn](https://github.com/chenfucn), [satyajandhyala](https://github.com/satyajandhyala), [yuslepukhin](https://github.com/yuslepukhin), [pranavsharma](https://github.com/pranavsharma), [tracysh](https://github.com/tracysh), [yufenglee](https://github.com/yufenglee), [hanbitmyths](https://github.com/hanbitmyths), [ytaous](https://github.com/ytaous), [YUNQIUGUO](https://github.com/YUNQIUGUO), [zhanghuanrong](https://github.com/zhanghuanrong), [stevenlix](https://github.com/stevenlix), [jywu-msft](https://github.com/jywu-msft), [chandru-r](https://github.com/chandru-r), [duli2012](https://github.com/duli2012), [smk2007](https://github.com/smk2007), [wschin](https://github.com/wschin), [MaajidKhan](https://github.com/MaajidKhan), [tiagoshibata](https://github.com/tiagoshibata), [xadupre](https://github.com/xadupre), [RandySheriffH](https://github.com/RandySheriffH), [ashbhandare](https://github.com/ashbhandare), [georgen117](https://github.com/georgen117), [Tixxx](https://github.com/Tixxx), [harshithapv](https://github.com/harshithapv), [Craigacp](https://github.com/Craigacp), [BowenBao](https://github.com/BowenBao), [askhade](https://github.com/askhade), [zhangxiang1993](https://github.com/zhangxiang1993), [gramalingam](https://github.com/gramalingam), [weixingzhang](https://github.com/weixingzhang), [natke](https://github.com/natke), [tlh20](https://github.com/tlh20), [codemzs](https://github.com/codemzs), [ryanlai2](https://github.com/ryanlai2), [raviskolli](https://github.com/raviskolli), [pranav-prakash](https://github.com/pranav-prakash), [faxu](https://github.com/faxu), [adtsai](https://github.com/adtsai), [fdwr](https://github.com/fdwr), [wenbingl](https://github.com/wenbingl), [jcwchen](https://github.com/jcwchen), [neginraoof](https://github.com/neginraoof), [cschreib-ibex](https://github.com/cschreib-ibex)

1.8.2

This is a minor patch release on [1.8.1](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1) with the following changes:

Inference
* Fix a crash issue when optimizing `Conv->Add->Relu` for CUDA EP
* ORT Mobile updates
* Change [Pre-built iOS package](https://onnxruntime.ai/docs/how-to/mobile/overview.html#pre-built-package) to static framework to fix App Store submission issue
* Support for metadata in ORT format models
* Additional operators
* Bug fixes

Known issues
* cuDNN 8.0.5 causes memory leaks on T4 GPUs, as indicated by this [issue](https://github.com/microsoft/onnxruntime/issues/9643); upgrading to a later version solves the problem.
