Onnxruntime

Latest version: v1.20.1


1.16.0

General
* Support for serialization of models >=2GB

APIs
* New session option `session.disable_cpu_ep_fallback` to disable the default CPU EP fallback (see the Python sketch after this list)
* Java
* Support for fp16 and bf16 tensors as inputs and outputs, along with utilities to convert between these and fp32 data. On JDK 20 and newer, the fp16 conversion methods use the JDK's Float.float16ToFloat and Float.floatToFloat16 methods, which can be hardware accelerated and vectorized on some platforms.
* Support for external initializers so that large models can be instantiated without filesystem access
* C#
* Expose the OrtValue API as the new preferred API to run inference in C#. This reduces garbage collection pressure and exposes direct native memory access via Slice-like interfaces.
* Make Float16 and BFloat16 full-featured fp16 interfaces that support conversion and expose floating-point properties (e.g. IsNaN, IsInfinity, etc.)
* C++
* Make Float16_t and BFloat16_t full-featured fp16 interfaces that support conversion and expose floating-point properties (e.g. IsNaN, IsInfinity, etc.)
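
As a minimal illustration of the new session option above (not taken from the release notes), the Python sketch below sets `session.disable_cpu_ep_fallback` through `SessionOptions.add_session_config_entry`; the model path and provider list are placeholders:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# With this config entry set to "1", session creation fails if a node cannot be
# assigned to an explicitly requested EP, instead of silently falling back to CPU.
so.add_session_config_entry("session.disable_cpu_ep_fallback", "1")

sess = ort.InferenceSession(
    "model.onnx",                        # placeholder model path
    sess_options=so,
    providers=["CUDAExecutionProvider"],
)
```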


Performance
* Improve LLM quantization accuracy with SmoothQuant
* Support 4-bit quantization on CPU
* Optimize BeamScore to improve BeamSearch performance
* Add FlashAttention v2 support for Attention, MultiHeadAttention and PackedMultiHeadAttention ops

Execution Providers
* CUDA EP
* Initial fp8 support (QDQ, Cast, MatMul)
* Relax CUDA Graph constraints so that more models can use it (see the provider-option sketch after this Execution Providers list)
* Allow CUDA allocator to be registered with ONNX Runtime externally
* Fixed a build issue with CUDA 12.2 (16713)
* TensorRT EP
* CUDA Graph support
* Support user-provided CUDA compute stream
* Misc bug fixes and improvements
* OpenVINO EP
* Support OpenVINO 2023.1
* QNN EP
* Enable context binary cache to reduce initialization time
* Support QNN 2.12
* Support for Resize with asymmetric transformation mode on the HTP backend
* Ops support: Equal, Less, LessOrEqual, Greater, GreaterOrEqual, LayerNorm, Asin, Sign, DepthToSpace, SpaceToDepth
* Support 1D Conv/ConvTranspose
* Misc bug fixes and improvements
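
For the relaxed CUDA Graph constraints mentioned above, here is a hedged Python sketch that turns the feature on via the CUDA EP's `enable_cuda_graph` provider option; the model path is a placeholder, and graph replay additionally requires binding inputs/outputs to fixed device buffers via IOBinding (not shown):

```python
import onnxruntime as ort

# Enable CUDA Graph capture/replay for this session; shapes and bound buffers
# must remain fixed across runs for the captured graph to be replayed.
providers = [("CUDAExecutionProvider", {"enable_cuda_graph": "1"})]
sess = ort.InferenceSession("model.onnx", providers=providers)
```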

Mobile
* Initial support for [Azure EP](https://onnxruntime.ai/docs/execution-providers/Azure-ExecutionProvider.html)
* Dynamic shape support for CoreML
* Improve React Native performance with JSI
* Mobile support for CLIPImageProcessor pre-processing and CLIP scenario
* Swift Package Manager support for ONNX Runtime inference and ONNX Runtime extensions via [onnxruntime-swift-package-manager](https://github.com/microsoft/onnxruntime-swift-package-manager)

Web
* webgpu ops coverage improvements (SAM, T5, Whisper)
* webnn ops coverage improvements (SAM, Stable Diffusion)
* Stability/usability improvements for webgpu

Large model training
* ORTModule + OpenAI Triton Integration now available. [See details here](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#6-use-openai-triton-to-compute-onnx-sub-graph)
* [Label Sparsity compute optimization](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#ortmodule_enable_compute_optimizer) support complete and enabled by default starting release 1.16
* **New experimental** embedding [sparsity related optimizations](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#ortmodule_enable_embedding_sparse_optimizer) available (disabled by default).
* Improves training performance of Roberta in Transformers by 20-30%
* Other compute optimizations, such as upstream support for Gather/Slice/Reshape, are enabled.
* Optimizations for [LLaMAv2 (~10% acceleration)](https://github.com/huggingface/optimum/tree/main/examples/onnxruntime/training/text-classification#text-classification) and OpenAI Whisper
* Improvements to the logging and metrics system (initialization overhead, memory usage, statistics convergence tool, etc.).
* PythonOp enhancement: bool and tuple[bool] constants, materialize grads, empty inputs, save in context, customized shape inference, use fully qualified names for export.
* SCELossInternal/SCELossGradInternal CUDA kernels can now handle more than std::numeric_limits<int32_t>::max() elements.
* Improvements to LayerNorm fusion
* [Model cache](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#ortmodule_cache_dir) for the exported ONNX model is introduced to avoid repeatedly exporting a model that has not changed across runs (see the sketch below).
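
A minimal ORTModule sketch (not from the release notes) showing the export cache from the last bullet; it assumes the environment variable named in the linked guideline, `ORTMODULE_CACHE_DIR`, and uses a toy model:

```python
import os

# Point the ORTModule export cache at a writable directory so an unchanged model
# is not re-exported to ONNX on every process start (assumed variable name per
# the linked guideline); set it before ORTModule is imported.
os.environ["ORTMODULE_CACHE_DIR"] = "/tmp/ortmodule_cache"

import torch
from onnxruntime.training.ortmodule import ORTModule

model = ORTModule(torch.nn.Linear(128, 10))   # wrap a toy PyTorch module
loss = model(torch.randn(4, 128)).sum()       # forward runs through ORT
loss.backward()                               # backward runs through ORT as well
```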

On-Device Training
* iOS support available starting this release
* Minimal build now available for On-Device Training. Basic binary size ~1.5 MB
* ORT-Extensions custom op support enabled through onnxblock for on-device training scenarios

ORT Extensions
This ORT release is accompanied by updates to [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions/). Features include:
* New Python API gen_processing_models to export ONNX data-processing models from Hugging Face tokenizers such as LLaMA, CLIP, XLM-Roberta, Falcon, BERT, etc. (see the sketch after this list)
* New TrieTokenizer operator for RWKV-like LLM models, and other tokenizer operator enhancements.
* New operators for Azure EP compatibility: AzureAudioToText, AzureTextToText, AzureTritonInvoker for Python and NuGet packages.
* Processing operators have been migrated to the new [Lite Custom Op API](https://github.com/microsoft/onnxruntime/blob/gh-pages/docs/reference/operators/add-custom-op.md#define-and-register-a-custom-operator)
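
A hedged sketch of gen_processing_models, exporting a Hugging Face tokenizer's pre-processing to ONNX; the exact keyword arguments and return shape may differ by onnxruntime-extensions version, and the model id and file name are placeholders:

```python
import onnx
from transformers import AutoTokenizer
from onnxruntime_extensions import gen_processing_models

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model id

# Ask for a pre-processing (tokenization) model only; depending on the extensions
# version the call may return a single model or a (pre, post) pair.
result = gen_processing_models(tokenizer, pre_kwargs={})
pre_model = result[0] if isinstance(result, (list, tuple)) else result
onnx.save(pre_model, "bert_tokenizer.onnx")
```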

---
Known Issues
* The ORT CPU Python package requires the execution provider to be explicitly provided when creating an InferenceSession (see the sketch below). See 17631. A fix is in progress and will be released in a patch.
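
Until the patch lands, a workaround is to name the provider explicitly when creating the session (model path is a placeholder):

```python
import onnxruntime as ort

# Passing providers explicitly avoids the default-provider resolution issue above.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
```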
---
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[fs-eire](https://github.com/fs-eire), [edgchen1](https://github.com/edgchen1), [snnn](https://github.com/snnn), [pengwa](https://github.com/pengwa), [mszhanyi](https://github.com/mszhanyi), [PeixuanZuo](https://github.com/PeixuanZuo), [tianleiwu](https://github.com/tianleiwu), [adrianlizarraga](https://github.com/adrianlizarraga), [baijumeswani](https://github.com/baijumeswani), [cloudhan](https://github.com/cloudhan), [satyajandhyala](https://github.com/satyajandhyala), [yuslepukhin](https://github.com/yuslepukhin), [RandyShuai](https://github.com/RandyShuai), [RandySheriffH](https://github.com/RandySheriffH), [skottmckay](https://github.com/skottmckay), [Honry](https://github.com/Honry), [dependabot[bot]](https://github.com/dependabot[bot]), [HectorSVC](https://github.com/HectorSVC), [jchen351](https://github.com/jchen351), [chilo-ms](https://github.com/chilo-ms), [YUNQIUGUO](https://github.com/YUNQIUGUO), [justinchuby](https://github.com/justinchuby), [PatriceVignola](https://github.com/PatriceVignola), [guschmue](https://github.com/guschmue), [yf711](https://github.com/yf711), [Craigacp](https://github.com/Craigacp), [smk2007](https://github.com/smk2007), [RyanUnderhill](https://github.com/RyanUnderhill), [jslhcl](https://github.com/jslhcl), [wschin](https://github.com/wschin), [kunal-vaishnavi](https://github.com/kunal-vaishnavi), [mindest](https://github.com/mindest), [xadupre](https://github.com/xadupre), [fdwr](https://github.com/fdwr), [hariharans29](https://github.com/hariharans29), [AdamLouly](https://github.com/AdamLouly), [wejoncy](https://github.com/wejoncy), [chenfucn](https://github.com/chenfucn), [pranavsharma](https://github.com/pranavsharma), [yufenglee](https://github.com/yufenglee), [zhijxu-MS](https://github.com/zhijxu-MS), [jeffdaily](https://github.com/jeffdaily), [natke](https://github.com/natke), [jeffbloo](https://github.com/jeffbloo), [liqunfu](https://github.com/liqunfu), [wangyems](https://github.com/wangyems), [er3x3](https://github.com/er3x3), [nums11](https://github.com/nums11), [yihonglyu](https://github.com/yihonglyu), [sumitsays](https://github.com/sumitsays), [zhanghuanrong](https://github.com/zhanghuanrong), [askhade](https://github.com/askhade), [wenbingl](https://github.com/wenbingl), [jingyanwangms](https://github.com/jingyanwangms), [ashari4](https://github.com/ashari4), [gramalingam](https://github.com/gramalingam), [georgen117](https://github.com/georgen117), [sfatimar](https://github.com/sfatimar), [BowenBao](https://github.com/BowenBao), [hanbitmyths](https://github.com/hanbitmyths), [stevenlix](https://github.com/stevenlix), [jywu-msft](https://github.com/jywu-msft)

1.15.1

This release fixed the following issues:

1. A coding problem in test/shared_lib/test_inference.cc: it should use ASSERT_NEAR instead of ASSERT_EQ to compare float values. Without this change, some DNNL/OpenVINO tests would fail on some AMD CPUs.
2. A misaligned-address error in the cublasGemmBatchedHelper function. The error only occurs when the CUDA version is 11.8 and the GPU's CUDA compute capability is >= 8.0 (in other words, with TensorFloat-32 support). (15981)
3. A build issue: building with onnxruntime_ENABLE_MEMORY_PROFILE was broken in the 1.15.0 release. (16124)
4. The native onnxruntime library not loading in Azure App Service. This is because 1.15.0 introduced a Windows API call to SetThreadDescription. Though the API is available in all Windows 10 versions, some sandbox environments block it. (15375)
5. An alignment problem for the XNNPACK EP on Intel/AMD CPUs on PC platforms.
6. Some training header files were missing in the 1.15.0 training NuGet package.
7. Some fields in the OrtCUDAProviderOptionsV2 struct were not initialized.
8. The *.dylib files in the ONNX Runtime NuGet package were not signed. (16168)

Known issue

1. Segfault when loading a model with local functions; loading works fine if the model is inlined by ONNX. (16170)
2. Cross-building for iOS requires manually downloading protoc. (16238)

1.15.0

Announcements

Starting from the next release (ONNX Runtime 1.16.0), at the operating system level we will drop support for
- iOS 11 and below. iOS 12 will be the minimum supported version.
- CentOS 7, Ubuntu 18.04, and any Linux distro without glibc version >=2.28.

At the compiler level we will drop support for
- GCC version <= 9
- Visual Studio 2019

Also, we will remove the onnxruntime_DISABLE_ABSEIL build option, since we will upgrade protobuf and the new protobuf version will depend on Abseil.

General
- [Added support for ONNX Optional type in C API](https://github.com/microsoft/onnxruntime/pull/15314)
- [Added collectives to support multi-GPU inferencing](https://github.com/microsoft/onnxruntime/pull/14399)
- Updated macOS build machines to macOS 12, which comes with Xcode 14.2; Xcode 12.4 is no longer used.
- Added Python 3.11 support (deprecated 3.7; now supporting 3.8-3.11) in packages for onnxruntime CPU, onnxruntime-gpu, onnxruntime-directml, and onnxruntime-training.
- Updated to CUDA 11.8. ONNX Runtime source code is still compatible with CUDA 11.4 and 12.x.
- Dropped the support for Windows 8.1 and below
- Eager mode code and the onnxruntime_ENABLE_EAGER_MODE cmake option have been removed.
- Upgraded Mimalloc version from 2.0.3 to 2.1.1
- Upgraded protobuf version from 3.18.3 to 21.12
- New dependency: cutlass, which is only used in CUDA/TensorRT packages.
- Upgraded DNNL from 2.7.1 to 3.0

Build System

- On POSIX systems, building the code as the "root" user is disallowed by default. If needed, append "--allow_running_as_root" to your build command to bypass the check.
- Added support for building the source natively on Windows ARM64 with Visual Studio 2022.
- Added a Gradle wrapper and updated the Gradle version from 6.8.3 to 8.0.1. (Gradle is the tool used to build the ORT Java package.)
- When cross-compiling, the build scripts now try to download a prebuilt protoc from GitHub instead of building it from source, because protobuf now has many dependencies and it is not easy to set up a build environment for it.

Performance

- [Improved string marshalling and reduce GC pressure](https://github.com/microsoft/onnxruntime/pull/15545)
- [Added a build option to allow using a lock-free queue in threadpool for improved CPU utilization](https://github.com/microsoft/onnxruntime/pull/14834)
- [Fix CPU memory leak due to external weights](https://github.com/microsoft/onnxruntime/pull/15040)
- Added fused decoder multi-head attention kernel to improve GPT and decoder models (like T5, Whisper)
- Added packing mode to improve encoder models with inputs of large padding ratio
- Improved generation algorithms (BeamSearch, TopSampling, GreedySearch)
- Improved performance for Stable Diffusion, ViT, GPT, and Whisper models



Execution Providers

Two new execution providers: JS EP and QNN EP.

TensorRT EP

- Official support for TensorRT 8.6
- Explicit shape profile overrides
- Support for TensorRT plugins via ORT custom op
- Improve support for TensorRT options (heuristics, sparsity, optimization level, auxiliary stream, tactic source selection etc.)
- Support for TensorRT timing cache (see the provider-option sketch after this list)
- Improvements to test coverage, specifically for opset 16-17 models and package pipeline unit tests.
- Other misc bugfixes and improvements.
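
A hedged Python sketch of enabling the TensorRT engine and timing caches through provider options; the option names follow the TensorRT EP documentation, and the model and cache paths are placeholders:

```python
import onnxruntime as ort

trt_options = {
    "trt_engine_cache_enable": "1",       # reuse serialized engines across runs
    "trt_engine_cache_path": "./trt_cache",
    "trt_timing_cache_enable": "1",       # reuse kernel timing data when building engines
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("TensorrtExecutionProvider", trt_options),
               ("CUDAExecutionProvider", {})],   # CUDA EP as fallback
)
```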

OpenVINO EP

- Support for OpenVINO 2023.0
- Dynamic shapes support for iGPU
- Changes to OpenVINO backend to improve first inference latency
- Deprecation of HDDL-VADM and Myriad VPU support
- Misc bug fixes.

QNN EP
- [Initial Public preview release](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.QNN)

DirectML EP:
- Updated to [DirectML 1.12](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-112)
- Opset 16-17 support

AzureEP
- Added support for the OpenAI Whisper model
- Available in a NuGet package in addition to Python

Mobile
New packages
- Swift Package Manager for onnxruntime
- Nuget package for onnxruntime-extensions (supports Android/iOS for MAUI/Xamarin)
- React Native package for onnxruntime can optionally include onnxruntime-extensions

Pre/Post processing
- Added support for built-in pre and post processing for NLP scenarios: classification, question-answering, text-prediction
- Added support for built-in pre and post processing for Speech Recognition (Whisper)
- Added support for built-in post processing for Object Detection (YOLO). Non-max suppression, draw bounding boxes
- Additional CoreML and NNAPI kernels to support customer scenarios
  - NNAPI: BatchNormalization, LRN
  - CoreML: Div, Flatten, LeakyRelu, LRN, Mul, Pad, Pow, Sub

Web

- [preview] WebGPU support
- Support building the source code with "MinGW make" on Windows.

ORT Training
On-device training:
- Official package for On-Device Training now available. On-device training extends ORT Inference solutions to enable training on edge devices.
- APIs and language bindings available for C, C++, Python, C#, and Java.
- Packages available for Desktop and Android.
- For custom builds, refer to the [build instructions](https://onnxruntime.ai/docs/build/training.html#build-for-on-device-training).

Others
- Added [graph optimizations]( https://github.com/microsoft/onnxruntime/blob/rel-1.15.0/docs/ORTModule_Training_Guidelines.md#ortmodule_enable_compute_optimizer) which leverage the sparsity in the label data to improve performance. With these optimizations we see performance gains ranging from 4% to 15% for popular HF models over baseline ORT.
- Vision transformer models like ViT, BEIT, and SwinV2 see up to 44% speedup with ORT Training + DeepSpeed over PyTorch eager mode on AzureML.
- Added optimizations for SOTA models like Dolly and Whisper. ORT Training + DS now gives ~17% speedup for Whisper and ~4% speedup for Dolly over PyTorch eager mode. Dolly optimizations on the main branch show a ~40% speedup over eager mode.

Known Issues
- The onnxruntime-training 1.15.0 packages published to pypi.org were actually built in Debug mode instead of Release mode. You can get the correct packages from https://download.onnxruntime.ai/. We will fix the issue in the next patch release.
- The XNNPACK EP does not work on x86 CPUs without AVX-512 instructions, because the wrong alignment was used when allocating buffers for XNNPACK.
- The CUDA EP source code has a build error when the CUDA version is <11.6. See 16000.
- The onnxruntime-training builds are missing the training header files.

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [fs-eire](https://github.com/fs-eire), [edgchen1](https://github.com/edgchen1), [wejoncy](https://github.com/wejoncy), [mszhanyi](https://github.com/mszhanyi), [PeixuanZuo](https://github.com/PeixuanZuo), [pengwa](https://github.com/pengwa), [jchen351](https://github.com/jchen351), [cloudhan](https://github.com/cloudhan), [tianleiwu](https://github.com/tianleiwu), [PatriceVignola](https://github.com/PatriceVignola), [wangyems](https://github.com/wangyems), [adrianlizarraga](https://github.com/adrianlizarraga), [chenfucn](https://github.com/chenfucn), [HectorSVC](https://github.com/HectorSVC), [baijumeswani](https://github.com/baijumeswani), [justinchuby](https://github.com/justinchuby), [skottmckay](https://github.com/skottmckay), [yuslepukhin](https://github.com/yuslepukhin), [RandyShuai](https://github.com/RandyShuai), [RandySheriffH](https://github.com/RandySheriffH), [natke](https://github.com/natke), [YUNQIUGUO](https://github.com/YUNQIUGUO), [smk2007](https://github.com/smk2007), [jslhcl](https://github.com/jslhcl), [chilo-ms](https://github.com/chilo-ms), [yufenglee](https://github.com/yufenglee), [RyanUnderhill](https://github.com/RyanUnderhill), [hariharans29](https://github.com/hariharans29), [zhanghuanrong](https://github.com/zhanghuanrong), [askhade](https://github.com/askhade), [wschin](https://github.com/wschin), [jywu-msft](https://github.com/jywu-msft), [mindest](https://github.com/mindest), [zhijxu-MS](https://github.com/zhijxu-MS), [dependabot[bot]](https://github.com/dependabot[bot]), [xadupre](https://github.com/xadupre), [liqunfu](https://github.com/liqunfu), [nums11](https://github.com/nums11), [gramalingam](https://github.com/gramalingam), [Craigacp](https://github.com/Craigacp), [fdwr](https://github.com/fdwr), [shalvamist](https://github.com/shalvamist), [jstoecker](https://github.com/jstoecker), [yihonglyu](https://github.com/yihonglyu), [sumitsays](https://github.com/sumitsays), [stevenlix](https://github.com/stevenlix), [iK1D](https://github.com/iK1D), [pranavsharma](https://github.com/pranavsharma), [georgen117](https://github.com/georgen117), [sfatimar](https://github.com/sfatimar), [MaajidKhan](https://github.com/MaajidKhan), [satyajandhyala](https://github.com/satyajandhyala), [faxu](https://github.com/faxu), [jcwchen](https://github.com/jcwchen), [hanbitmyths](https://github.com/hanbitmyths), [jeffbloo](https://github.com/jeffbloo), [souptc](https://github.com/souptc), [ytaous](https://github.com/ytaous) [kunal-vaishnavi](https://github.com/kunal-vaishnavi)

1.14.1

This patch addresses packaging issues and bug fixes on top of v1.14.0:
* macOS Python build for x86 arch (issue: 14663)
* DirectML EP fixes: sequence ops (14442), package naming to remove -dev suffix
* CUDA12 build compatibility (14659)
* Performance regression fixes: IOBinding input (14719), Transformer models (14732, 14517, 14699)
* ORT Training kernel fix (14727)

Only select packages were published for this patch release; others can be found in the attachments below:
* Pypi: [onnxruntime](https://pypi.org/project/onnxruntime), [onnxruntime-gpu](https://pypi.org/project/onnxruntime-gpu), [onnxruntime-directml](https://pypi.org/project/onnxruntime-directml), [onnxruntime-training](https://pypi.org/project/onnxruntime-training/)
* Nuget: [Microsoft.ML.OnnxRuntime](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime), [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu), [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml), [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning)

1.14.0

Announcements
* Building ORT from source will require cmake version >=3.24 instead of >=3.18.

General
* [ONNX 1.13](https://github.com/onnx/onnx/releases/tag/v1.13.0) support (opset 18)
* Threading
* ORT Threadpool is now NUMA aware [(details)](https://onnxruntime.ai/docs/performance/tune-performance.html#numa-support-and-performance-tuning)
* New API to set thread affinity ([details](https://onnxruntime.ai/docs/performance/tune-performance.html#set-intra-op-thread-affinity)); see the Python sketch after this list
* New custom operator APIs
* Enables a custom operator to wrap an entire model that is meant to be run with an external API or runtime.
* [Details](https://onnxruntime.ai/docs/reference/operators/add-custom-op.html#define-and-register-a-custom-operator) and [example](https://github.com/microsoft/onnxruntime/tree/main/onnxruntime/test/testdata/custom_op_openvino_wrapper_library)
* Multi-stream Execution Provider refactoring
* Improves GPU utilization by putting parallel inference requests on different GPU streams. Updated for CUDA, TensorRT, and ROCm execution providers
* Improves memory efficiency by enabling GPU memory reuse across different streams
* Enables Execution Provider developers to customize their stream implementations by providing a "Stream" interface in the ExecutionProvider API
* *[Preview]* [Rust API](https://github.com/microsoft/onnxruntime/tree/main/rust) for ORT - not part of release branch but available to build in main.
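
For the thread-affinity API called out under Threading above, a hedged Python sketch using the session config entry described in the linked documentation (`session.intra_op_thread_affinities`); the affinity string format follows that documentation and the processor ids are placeholders:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.intra_op_num_threads = 3
# One semicolon-separated group per intra-op thread beyond the main thread:
# here thread 1 is pinned to logical processor 1 and thread 2 to processor 2.
so.add_session_config_entry("session.intra_op_thread_affinities", "1;2")

sess = ort.InferenceSession("model.onnx", sess_options=so)
```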

Performance
* Support of quantization with AMX on Sapphire Rapids processors
* CUDA EP performance improvements:
* Improve performance of transformer models and decoding methods: beam search, greedy search, and top-p sampling.
* Stable Diffusion model optimizations
* Changed the cudnn_conv_use_max_workspace default value to 1 (see the sketch after this list)
* Performance improvements to GRU and Slice operators
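
A small Python sketch of the cudnn_conv_use_max_workspace option mentioned above; it now defaults to 1, but it can still be set explicitly (or reverted to "0") through CUDA EP provider options, with the model path a placeholder:

```python
import onnxruntime as ort

providers = [("CUDAExecutionProvider", {"cudnn_conv_use_max_workspace": "1"})]
sess = ort.InferenceSession("model.onnx", providers=providers)
```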

Execution Providers
* TensorRT EP
* Adds support for TensorRT 8.5 GA versions
* Bug fixes
* OpenVINO EP
* Adds support for OpenVINO 2022.3
* DirectML EP:
* Updated to DML [1.10.1](https://www.nuget.org/packages/Microsoft.AI.DirectML)
* Additional operators: [NonZero](https://github.com/microsoft/onnxruntime/pull/13768), [Shape](https://github.com/microsoft/onnxruntime/pull/13442), [Size](https://github.com/microsoft/onnxruntime/pull/13442), [Attention](https://github.com/microsoft/onnxruntime/pull/13371), [EmbedLayerNorm](https://github.com/microsoft/onnxruntime/pull/13868), [SkipLayerNorm](https://github.com/microsoft/onnxruntime/pull/13849), [BiasGelu](https://github.com/microsoft/onnxruntime/pull/13795)
* Additional data types: [Abs](https://github.com/microsoft/onnxruntime/pull/13470), [Sign](https://github.com/microsoft/onnxruntime/pull/13470), [Where](https://github.com/microsoft/onnxruntime/pull/13443)
* Enable SetOptimizedFilePath [export/reload](https://github.com/microsoft/onnxruntime/pull/13913)
* Bug fixes/extensions: [allow squeeze-13 axes](https://github.com/microsoft/onnxruntime/pull/13635), [EinSum with MatMul NHCW](https://github.com/microsoft/onnxruntime/pull/13440)
* [ROCm EP](https://onnxruntime.ai/docs/execution-providers/ROCm-ExecutionProvider.html): 5.4 support and GA ready
* *[Preview]* [Azure EP](https://onnxruntime.ai/docs/execution-providers/Azure-ExecutionProvider.html) - supports AzureML hosted models using Triton for hybrid inferencing on-device and on-cloud

Mobile
* Pre/Post processing
* Support updating mobilenet and super resolution models to move the pre and post processing into the model, including usage of custom ops for conversion to/from jpg/png
* [onnxruntime-extensions python package](https://pypi.org/project/onnxruntime-extensions/) includes the model update script to add pre/post processing to the model
* See [example](https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/superresolution_e2e.py) model update usage
* *[Coming soon]* onnxruntime-extensions packages for Android and iOS with DecodeImage and EncodeImage custom ops
* Updated the onnxruntime inference examples to demonstrate end-to-end usage with onnxruntime-extensions package
* [SuperResolution model](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/mobile/examples/super_resolution)
* XNNPACK
* Added support for additional commonly used operators
* Add iOS build support
* XNNPACK EP is now included in the onnxruntime-c iOS package
* Added support for using the ORT allocator in XNNPACK kernels to minimize memory usage

Web
* [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) included in default ort-web build (NLP centric)
* XNNPACK Gemm
* Improved exception handling
* New [utility functions](https://onnxruntime.ai/docs/api/js/index.html) (experimental) to help with exchanging data between images and tensors.

Training
* Performance optimizations and bug fixes for Hugging Face models (i.e. Xlnet and Bloom)
* Stable diffusion optimizations for training, including support for Resize and InstanceNorm gradients and addition of ORT-enabled examples to the [diffusers library](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/onnxruntime)
* FP16 optimizer exposed in torch-ort ([details](https://github.com/microsoft/onnxruntime/blob/main/docs/ORTModule_Training_Guidelines.md#4-use-fp16_optimizer-to-complement-deepspeedapex))
* Bug fixes for Hugging Face models

Known Issues
* The [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) package name includes -dev-* suffix. This is functionally equivalent to the release branch build, and a patch is in progress.
---
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [tianleiwu](https://github.com/tianleiwu), [yufenglee](https://github.com/yufenglee), [guoyu-wang](https://github.com/guoyu-wang), [yuslepukhin](https://github.com/yuslepukhin), [fs-eire](https://github.com/fs-eire), [pranavsharma](https://github.com/pranavsharma), [iK1D](https://github.com/iK1D), [baijumeswani](https://github.com/baijumeswani), [tracysh](https://github.com/tracysh), [thiagocrepaldi](https://github.com/thiagocrepaldi), [askhade](https://github.com/askhade), [RyanUnderhill](https://github.com/RyanUnderhill), [wangyems](https://github.com/wangyems), [fdwr](https://github.com/fdwr), [RandySheriffH](https://github.com/RandySheriffH), [jywu-msft](https://github.com/jywu-msft), [zhanghuanrong](https://github.com/zhanghuanrong), [smk2007](https://github.com/smk2007), [pengwa](https://github.com/pengwa), [liqunfu](https://github.com/liqunfu), [shahasad](https://github.com/shahasad), [mszhanyi](https://github.com/mszhanyi), [SherlockNoMad](https://github.com/SherlockNoMad), [xadupre](https://github.com/xadupre), [jignparm](https://github.com/jignparm), [HectorSVC](https://github.com/HectorSVC), [ytaous](https://github.com/ytaous), [weixingzhang](https://github.com/weixingzhang), [stevenlix](https://github.com/stevenlix), [tiagoshibata](https://github.com/tiagoshibata), [faxu](https://github.com/faxu), [wschin](https://github.com/wschin), [souptc](https://github.com/souptc), [ashbhandare](https://github.com/ashbhandare), [RandyShuai](https://github.com/RandyShuai), [chilo-ms](https://github.com/chilo-ms), [PeixuanZuo](https://github.com/PeixuanZuo), [cloudhan](https://github.com/cloudhan), [dependabot[bot]](https://github.com/dependabot[bot]), [jeffbloo](https://github.com/jeffbloo), [chenfucn](https://github.com/chenfucn), [linkerzhang](https://github.com/linkerzhang), [duli2012](https://github.com/duli2012), [codemzs](https://github.com/codemzs), [oliviajain](https://github.com/oliviajain), [natke](https://github.com/natke), [YUNQIUGUO](https://github.com/YUNQIUGUO), [Craigacp](https://github.com/Craigacp), [sumitsays](https://github.com/sumitsays), [orilevari](https://github.com/orilevari), [BowenBao](https://github.com/BowenBao), [yangchen-MS](https://github.com/yangchen-MS), [hanbitmyths](https://github.com/hanbitmyths), [satyajandhyala](https://github.com/satyajandhyala), [MaajidKhan](https://github.com/MaajidKhan), [smkarlap](https://github.com/smkarlap), [sfatimar](https://github.com/sfatimar), [jchen351](https://github.com/jchen351), [georgen117](https://github.com/georgen117), [wejoncy](https://github.com/wejoncy), [PatriceVignola](https://github.com/PatriceVignola), [adrianlizarraga](https://github.com/adrianlizarraga), [justinchuby](https://github.com/justinchuby), [zhangxiang1993](https://github.com/zhangxiang1993), [gineshidalgo99](https://github.com/gineshidalgo99), [tlh20](https://github.com/tlh20), [xzhu1900](https://github.com/xzhu1900), [jeffdaily](https://github.com/jeffdaily), [suryasidd](https://github.com/suryasidd), [yihonglyu](https://github.com/yihonglyu), [liuziyue](https://github.com/liuziyue), [chentaMS](https://github.com/chentaMS), [jcwchen](https://github.com/jcwchen), [ybrnathan](https://github.com/ybrnathan), [ajindal1](https://github.com/ajindal1), [zhijxu-MS](https://github.com/zhijxu-MS), [gramalingam](https://github.com/gramalingam), [WilBrady](https://github.com/WilBrady), 
[garymm](https://github.com/garymm), [kkaranasos](https://github.com/kkaranasos), [ashari4](https://github.com/ashari4), [martinb35](https://github.com/martinb35), [AdamLouly](https://github.com/AdamLouly), [zhangyaobit](https://github.com/zhangyaobit), [vvchernov](https://github.com/vvchernov), [jingyanwangms](https://github.com/jingyanwangms), [wenbingl](https://github.com/wenbingl), [daquexian](https://github.com/daquexian), [sreekanth-yalachigere](https://github.com/sreekanth-yalachigere), [NonStatic2014](https://github.com/NonStatic2014), [mayavijx](https://github.com/mayavijx), [mindest](https://github.com/mindest), [jstoecker](https://github.com/jstoecker), [manashgoswami](https://github.com/manashgoswami), [Andrews548](https://github.com/Andrews548), [baowenlei](https://github.com/baowenlei), [kunal-vaishnavi](https://github.com/kunal-vaishnavi)

1.13.1

Announcements
* Security issues addressed by this release
1. A protobuf security issue (CVE-2022-1941) that impacts users who load ONNX models from untrusted sources, for example, a deep learning inference service that allows users to upload their models and then runs inference in a shared environment.
2. An ONNX security vulnerability that allows reading of tensor_data outside the model directory, which allows attackers to read or write arbitrary files on an affected system that loads ONNX models from untrusted sources. (12915)
* Deprecations
* CUDA 10.x support at source code level
* Windows 8.x support in Nuget/C API prebuilt binaries. Support for Windows 7+ Desktop versions (including Windows servers) will be retained by building ONNX Runtime from source.
* NUPHAR EP code is removed
* Dependency versioning updates
* A C++17 compiler is now required to build ORT from source. On Linux, GCC version >=7.0 is required.
* Minimal numpy version bumped to 1.21.6 (from 1.21.0) for ONNX Runtime Python packages
* Official ONNX Runtime GPU packages now require CUDA version >=11.6 instead of 11.4.

General
* Expose all arena configs in Python API in an extensible way
* Fix ARM64 NuGet packaging
* Fix EP allocator setup issue affecting TVM EP

Performance
* Transformers CUDA improvements
* Quantization on GPU for BERT - notebook, documentation on QAT, transformer optimization toolchain and quantized kernels.
* Add fused attention CUDA kernels for BERT.
* Fuse `Add` (bias) and `Transpose` of Q/K/V into one kernel for Attention and LongformerAttention.
* Reduce GEMM computation in LongformerAttention with a new weight format.
* General quantization (tool and kernel)
* [Quantization debugging tool](https://onnxruntime.ai/docs/performance/quantization.html#quantization-debugging) - identify sensitive node/layer from accuracy drop discrepancies
* New quantize API based on QuantConfig (a minimal quantization sketch follows this list)
* New quantized operators: SoftMax, Split, Where
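
As a minimal, hedged example of the quantization tooling referenced above, the sketch below uses the long-standing quantize_dynamic helper (the new QuantConfig-driven quantize entry point covers the same flow plus static QDQ quantization); file paths are placeholders:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Weight-only dynamic INT8 quantization of an ONNX model.
quantize_dynamic("model.onnx", "model.int8.onnx", weight_type=QuantType.QInt8)
```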

Execution Providers
* CUDA EP
* Official ONNX Runtime GPU packages are now built with CUDA version 11.6 instead of 11.4, but should still be backwards compatible with 11.4
* TensorRT EP
* Build option to link against pre-built onnx-tensorrt parser; this enables potential "no-code" TensorRT minor version upgrades and can be used to build against TensorRT 8.5 EA
* Improved nested control flow support
* Improve HashId generation used for uniquely identifying TRT engines. Addresses issues such as [TRT Engine Cache Regeneration Issue](https://github.com/triton-inference-server/onnxruntime_backend/issues/145)
* TensorRT uint8 support
* OpenVINO EP
* OpenVINO version upgraded to 2022.2.0
* Support for INT8 QDQ models from [NNCF](https://github.com/openvinotoolkit/nncf/tree/develop/examples/experimental/onnx/)
* Support for Intel 13th Gen Core processors (Raptor Lake)
* Preview support for Intel discrete graphics cards [Intel Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html) and [Intel Arc GPU](https://www.intel.com/content/www/us/en/products/details/discrete-gpus/arc.html)
* Increased test coverage for GPU Plugin
* SNPE EP
* Add support for [Windows Dev Kit 2023](https://onnxruntime.ai/winarm.html)
* [Nuget Package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Snpe) is now available
* DirectML EP
* Update to [DML 1.9.1](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.9.1)
* [New ops](https://github.com/microsoft/onnxruntime/blob/main/docs/OperatorKernels.md#dmlexecutionprovider): [LayerNormalization](https://github.com/microsoft/onnxruntime/pull/12809), [Gelu](https://github.com/microsoft/onnxruntime/pull/12898/), MatMulScale, [DFT](https://github.com/microsoft/onnxruntime/pull/12710), [FusedMatMul](https://github.com/microsoft/onnxruntime/pull/12898/) (contrib)
* Bug fixes: DML EP Fix InstanceNormalization with 3D tensors (12693), DML EP squeeze all axes when empty (12649), DirectML GEMM broken in opset 11 and 13 when optional tensor C not provided (12568)
* **[new]** CANN EP - Initial integration of CANN EP contributed by Huawei to support Ascend 310 (11477)

Mobile
* EP infrastructure
* Implemented support for additional EPs that use static kernels
* Required for EPs like XNNPACK to be supported in minimal build
* Removes need for kernel hashes to reduce maintenance overhead for developers
* NOTE: ORT format models will need to be regenerated as the format change is NOT backwards compatible. We're replacing hashes for the CPU EP kernels with operator constraint information for operators used by the model so that we can match any static kernels available at runtime.
* XNNPack
* Added more kernels including QDQ format model support
* AveragePool, Softmax
* QLinearConv, QLinearAveragePool, QLinearSoftmax
* Added support for XNNPACK using threadpool
* See [documentation](https://onnxruntime.ai/docs/execution-providers/Xnnpack-ExecutionProvider.html) for recommendations on how to configure the XNNPACK threadpool
* ORT format model peak memory usage
* Added ability to use ORT format model directly for initializers to reduce peak memory usage
* Enabled via SessionOptions config
* https://onnxruntime.ai/docs/reference/ort-format-models.html#load-ort-format-model-from-an-in-memory-byte-array
* Set "session.use_ort_model_bytes_directly" and "session.use_ort_model_bytes_for_initializers" to "1"

Web
* Support for 4GB memory in webassembly
* Upgraded emscripten to 3.1.19
* Build from source support for [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) and [sentencepiece](https://github.com/microsoft/onnxruntime-extensions/blob/main/docs/custom_ops.md#sentencepiecetokenizer)
* Initial support for XNNPACK for optimizations for Wasm

Training
* Training packages updated to CUDA version 11.6 and removed CUDA 10.2 and 11.3
* Performance improvements via op fusions like BiasSoftmax and Dropout fusion, Gather to Split fusion etc targeting SOTA models
* Added ATen support for GroupNorm, InstanceNormalization, and nearest-mode Upsample
* Bug fixes for SimplifiedLayerNorm and a segfault in alltoall
---
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [baijumeswani](https://github.com/baijumeswani), [edgchen1](https://github.com/edgchen1), [iK1D](https://github.com/iK1D), [skottmckay](https://github.com/skottmckay), [cloudhan](https://github.com/cloudhan), [tianleiwu](https://github.com/tianleiwu), [fs-eire](https://github.com/fs-eire), [mszhanyi](https://github.com/mszhanyi), [WilBrady](https://github.com/WilBrady), [hariharans29](https://github.com/hariharans29), [chenfucn](https://github.com/chenfucn), [fdwr](https://github.com/fdwr), [yuslepukhin](https://github.com/yuslepukhin), [wejoncy](https://github.com/wejoncy), [PeixuanZuo](https://github.com/PeixuanZuo), [pengwa](https://github.com/pengwa), [yufenglee](https://github.com/yufenglee), [jchen351](https://github.com/jchen351), [justinchuby](https://github.com/justinchuby), [dependabot[bot]](https://github.com/dependabot[bot]), [RandySheriffH](https://github.com/RandySheriffH), [sumitsays](https://github.com/sumitsays), [wschin](https://github.com/wschin), [wangyems](https://github.com/wangyems), [YUNQIUGUO](https://github.com/YUNQIUGUO), [ytaous](https://github.com/ytaous), [pranavsharma](https://github.com/pranavsharma), [vvchernov](https://github.com/vvchernov), [natke](https://github.com/natke), [Craigacp](https://github.com/Craigacp), [RandyShuai](https://github.com/RandyShuai), [smk2007](https://github.com/smk2007), [zhangyaobit](https://github.com/zhangyaobit), [jcwchen](https://github.com/jcwchen), [yihonglyu](https://github.com/yihonglyu), [georgen117](https://github.com/georgen117), [chilo-ms](https://github.com/chilo-ms), [ashbhandare](https://github.com/ashbhandare), [faxu](https://github.com/faxu), [jstoecker](https://github.com/jstoecker), [gramalingam](https://github.com/gramalingam), [garymm](https://github.com/garymm), [jeffbloo](https://github.com/jeffbloo), [xadupre](https://github.com/xadupre), [jywu-msft](https://github.com/jywu-msft), [askhade](https://github.com/askhade), [RyanUnderhill](https://github.com/RyanUnderhill), [thiagocrepaldi](https://github.com/thiagocrepaldi), [mindest](https://github.com/mindest), [jingyanwangms](https://github.com/jingyanwangms), [wenbingl](https://github.com/wenbingl), [ashari4](https://github.com/ashari4), [sfatimar](https://github.com/sfatimar), [MaajidKhan](https://github.com/MaajidKhan), [souptc](https://github.com/souptc), [HectorSVC](https://github.com/HectorSVC), [weixingzhang](https://github.com/weixingzhang), [zhanghuanrong](https://github.com/zhanghuanrong)
