onnxruntime

Latest version: v1.19.0

1.18.1

What's new?

**Announcements:**
- ONNX Runtime Python packages now have numpy dependency >=1.21.6, <2.0. Support for numpy 2.0 will be added in a future release (see the version-check sketch after this list).
- CUDA 12.x ONNX Runtime GPU packages are now built against cuDNN 9.x (1.18.0 packages previously depended on cuDNN 8.x). CUDA 11.x ONNX Runtime GPU packages continue to depend on cuDNN 8.x.
- Windows packages require installation of Microsoft Visual C++ Redistributable Runtime 14.38 or newer.
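
A minimal way to confirm an environment matches the 1.18.1 constraint is to check the installed numpy at startup. This is a sketch, assuming the `packaging` library is available; the version range mirrors the announcement above.

```python
import numpy
from packaging.version import Version

# onnxruntime 1.18.1 wheels declare numpy>=1.21.6,<2.0; fail fast if the
# environment has drifted outside that range.
v = Version(numpy.__version__)
if not (Version("1.21.6") <= v < Version("2.0")):
    raise RuntimeError(f"numpy {v} is outside the range supported by onnxruntime 1.18.1")
```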

**TensorRT EP:**
- TensorRT Weightless API integration.
- Support for TensorRT hardware compatible engines.
- Support for INT64 types in TensorRT constant layer calibration.
- Now using the latest commit of the onnx-tensorrt parser, which includes fixes for several issues.
- Additional TensorRT support and performance improvements.

**Packages:**
- Publish CUDA 12 Java packages to Azure DevOps feed.
- Various packaging pipeline fixes.

This patch release also features various other bug fixes, including a CUDA 12.5 build error fix.

**Big thank you to yf711 for driving this release as the release manager and to all our contributors!**

yf711 jchen351 mszhanyi snnn wangyems jywu-msft skottmckay chilo-ms moraxu kevinch-nv pengwa wejoncy pranavsharma Craigacp jslhcl adrianlizarraga inisis jeffbloo mo-ja kunal-vaishnavi sumitsays neNasko1 yufenglee dhruvbird wangshuai09 xiaoyu-work axinging yuslepukhin YUNQIUGUO shubhambhokare1 fs-eire afantino951 tboby HectorSVC baijumeswani

1.18.0

Announcements
* **Windows ARM32 support has been dropped at the source code level**.
* **Python version >=3.8 is now required for build.bat/build.sh** (previously >=3.7). *Note: If you have Python version <3.8, you can bypass the tools and use CMake directly.*
* **The [onnxruntime-mobile](https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-mobile) Android package and onnxruntime-mobile-c/onnxruntime-mobile-objc iOS cocoapods are being deprecated**. Please use the [onnxruntime-android](https://mvnrepository.com/artifact/com.microsoft.onnxruntime/onnxruntime-android) Android package, and onnxruntime-c/onnxruntime-objc cocoapods, which support ONNX and ORT format models and all operators and data types. *Note: If you require a smaller binary size, a custom build is required. See details on creating a custom Android or iOS package on [Custom build | onnxruntime](https://onnxruntime.ai/docs/build/custom.html#custom-build-packages).*

Build System & Packages
* CoreML execution provider now depends on coremltools.
* Flatbuffers has been upgraded from 1.12.0 → 23.5.26.
* ONNX has been upgraded from 1.15 → 1.16.
* EMSDK has been upgraded from 3.1.51 → 3.1.57.
* Intel neural_speed library has been upgraded from v0.1.1 → v0.3 with several important bug fixes.
* There is a new onnxruntime_CUDA_MINIMAL CMake option for building ONNX Runtime CUDA execution provider without any operations apart from memcpy ops.
* Added Mac Catalyst build support.
* Added initial support for RISC-V and three new build options for it: `--rv64`, `--riscv_toolchain_root`, and `--riscv_qemu_path`.
* Now you can build TensorRT EP with protobuf-lite instead of the full version of protobuf.
* Some security-related compile/link flags have been moved from the default setting → new build option: `--use_binskim_compliant_compile_flags`. *Note: All our release binaries are built with this flag, but when building ONNX Runtime from source, this flag is default OFF.*
* Windows ARM64 build now depends on PyTorch CPUINFO library.
* Windows OneCore build now uses “Reverse forwarding” apisets instead of “Direct forwarding”, so onnxruntime.dll in our Nuget packages will depend on kernel32.dll. *Note: Windows systems without kernel32.dll need to have reverse forwarders (see [API set loader operation - Win32 apps | Microsoft Learn](https://learn.microsoft.com/en-us/windows/win32/apiindex/api-set-loader-operation) for more information).*

Core
* Added ONNX 1.16 support.
* Added additional optimizations related to Dynamo-exported models.
* Improved testing infrastructure for EPs developed as shared libraries.
* Exposed Reserve() in OrtAllocator to allow custom allocators to work when session.use_device_allocator_for_initializers is specified.
* Reduced lock contention due to memory allocations.
* Improved session creation time (graph and graph transformer optimizations).
* Added new SessionOptions config entry to disable specific transformers and rules (see the sketch after this list).
* [C API] Exposed SessionOptions.DisablePerSessionThreads to allow sharing of threadpool between sessions.
* [Java API] Added CUDA 12 Java support.
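
Below is a minimal sketch of driving these session-level knobs from Python. The `session.use_device_allocator_for_initializers` key is documented; the transformer-disabling key and the optimizer name are assumptions, so check `session_options_config_keys.h` for the authoritative spellings.

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Route initializer allocations through the device allocator; this pairs with
# the newly exposed Reserve() in OrtAllocator.
so.add_session_config_entry("session.use_device_allocator_for_initializers", "1")
# Disable specific graph transformers by name (assumed key and example name).
so.add_session_config_entry("optimization.disable_specified_optimizers", "ConstantFolding")

sess = ort.InferenceSession("model.onnx", sess_options=so)
```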

Performance
* Improved 4bit quant support:
  * Added HQQ quantization support to improve accuracy.
  * Implemented general GEMM kernel and improved GEMV kernel performance on GPU.
  * Improved GEMM kernel quality and performance on x64.
  * Implemented general GEMM kernel and improved GEMV performance on ARM64.
* Improved MultiheadAttention performance on CPU.

Execution Providers
* TensorRT
  * Added support for TensorRT 10.
  * Finalized support for DDS ops.
  * Added Python support for user-provided CUDA streams.
  * Fixed various bugs.

* CUDA
  * Added support for multiple CUDA graphs.
  * Added a provider option to disable TF32.
  * Added Python support for user-provided CUDA streams (see the sketch after this list).
  * Extended MoE to support Tensor Parallelism and int4 quantization.
  * Fixed bugs in the BatchNorm and TopK kernels.
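
Both stream-related items above can be exercised from Python by handing the EP an existing stream handle. A sketch, assuming the provider option spellings `use_tf32` and `user_compute_stream` (check the CUDA EP docs) and borrowing PyTorch's current stream:

```python
import torch
import onnxruntime as ort

# Borrow the CUDA stream PyTorch is already using; any caller-owned stream
# handle works the same way.
stream_ptr = torch.cuda.current_stream().cuda_stream

cuda_opts = {
    "use_tf32": "0",                         # new provider option: disable TF32
    "user_compute_stream": str(stream_ptr),  # new: run on a user-provided stream
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("CUDAExecutionProvider", cuda_opts)],
)
```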

* QNN
  * Added support for up to QNN SDK 2.22.
  * Upgraded support from A16W8 → mixed 8/16-bit precision configurability per layer.
  * Added fp16 execution support via the enable_htp_fp16 option (see the sketch after this list).
  * Added multiple partition support for QNN context binary.
  * Expanded operator support and fixed various bugs.
  * Added support for per-channel quantized weights for Conv.
  * Added integration with Qualcomm’s AIHub.
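
A sketch of enabling HTP fp16 execution from Python. The option spelling `enable_htp_fp16_precision` and the backend path are assumptions; the QNN EP documentation has the authoritative names.

```python
import onnxruntime as ort

qnn_opts = {
    "backend_path": "QnnHtp.dll",       # HTP backend library on Windows (assumed path)
    "enable_htp_fp16_precision": "1",   # assumed spelling of the new fp16 option
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("QNNExecutionProvider", qnn_opts)],
)
```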

* OpenVINO
  * Added support for up to OpenVINO 2024.1.
  * Added support for importing a pre-compiled blob as an EPContext blob.
  * Separated device and precision as inputs by removing support for device_id in provider options and adding precision as a separate CLI option (see the sketch after this list).
  * Deprecated the CPU_FP32 and GPU_FP32 terminology in favor of plain CPU and GPU.
  * `AUTO:GPU,CPU` will only create a GPU blob, not a CPU blob.
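
A sketch of the new device/precision split from Python, assuming `device_type` and `precision` are the provider-option names (verify against the OpenVINO EP docs):

```python
import onnxruntime as ort

ov_opts = {
    "device_type": "GPU",  # plain CPU/GPU replaces the CPU_FP32/GPU_FP32 terminology
    "precision": "FP16",   # precision is now passed separately (assumed key name)
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("OpenVINOExecutionProvider", ov_opts)],
)
```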

* DirectML
  * Additional ONNX operator support: Resize-18 and Resize-19, Col2Im-18, IsNaN-20, IsInf-20, and ReduceMax-20.
  * Additional contrib op support: SimplifiedLayerNormalization, SkipSimplifiedLayerNormalization, QLinearAveragePool, MatMulIntegerToFloat, GroupQueryAttention, DynamicQuantizeMatMul, and QAttention.

Mobile
* Improved performance of ARM64 4-bit quantization.
* Added support for building with QNN on Android.
* Added MacCatalyst support.
* Added visionOS support.
* Added initial support for creating ML Program format CoreML models.
* Added support for 1D Conv and ConvTranspose to XNNPACK EP.

Web
* Added WebNN EP preview.
* Improved WebGPU performance (MHA, ROE).
* Added more WebGPU and WebNN examples.
* Increased generative model support.
* Optimized buffer management to reduce memory footprint.

Training
* Large Model Training
  * Added optimizations for Dynamo-exported models.
  * Added Mixtral integration using ORT backend.
* On-Device Training
  * Added support for models >2GB to enable SLM training on edge devices.

GenAI
* Added additional model support: Phi-3, Gemma, Llama-3.
* Added DML EP support.
* Improved tokenizer quality.
* Improved sampling method and ORT model performance.

Extensions
* Created Java packaging pipeline and published to Maven repository.
* Added support for conversion of Huggingface FastTokenizer into ONNX custom operator.
* Unified the SentencePiece tokenizer with other Byte Pair Encoding (BPE) based tokenizers.
* Fixed Whisper large model pre-processing bug.
* Enabled eager execution for custom operator and refactored the header file structure.

Contributors
Yi Zhang, Yulong Wang, Adrian Lizarraga, Changming Sun, Scott McKay, Tianlei Wu, Peng Wang, Hector Li, Edward Chen, Dmitri Smirnov, Patrice Vignola, Guenther Schmuelling, Ye Wang, Chi Lo, Wanming Lin, Xu Xing, Baiju Meswani, Peixuan Zuo, Vincent Wang, Markus Tavenrath, Lei Cao, Kunal Vaishnavi, Rachel Guo, Satya Kumar Jandhyala, Sheil Kumar, Yifan Li, Jiajia Qin, Maximilian Müller, Xavier Dupré, Yi-Hong Lyu, Yufeng Li, Alejandro Cid Delgado, Adam Louly, Prathik Rao, wejoncy, Zesong Wang, Adam Pocock, George Wu, Jian Chen, Justin Chu, Xiaoyu, guyang3532, Jingyan Wang, raoanag, Satya Jandhyala, Hariharan Seshadri, Jiajie Hu, Sumit Agarwal, Peter Mcaughan, Zhijiang Xu, Abhishek Jindal, Jake Mathern, Jeff Bloomfield, Jeff Daily, Linnea May, Phoebe Chen, Preetha Veeramalai, Shubham Bhokare, Wei-Sheng Chin, Yang Gu, Yueqing Zhang, Guangyun Han, inisis, ironman, Ivan Berg, Liqun Fu, Yu Luo, Rui Ren, Sahar Fatima, snadampal, wangshuai09, Zhenze Wang, Andrew Fantino, Andrew Grigorev, Ashwini Khade, Atanas Dimitrov, AtomicVar, Belem Zhang, Bowen Bao, Chen Fu, Dhruv Matani, Fangrui Song, Francesco, Frank Dong, Hans Chen, He Li, Heflin Stephen Raj, Jambay Kinley, Masayoshi Tsutsui, Matttttt, Nanashi, Phoebe Chen, Pranav Sharma, Segev Finer, Sophie Schoenmeyer, TP Boudreau, Ted Themistokleous, Thomas Boby, Xiang Zhang, Yongxin Wang, Zhang Lei, aamajumder, danyue, Duansheng Liu, enximi, fxmarty, kailums, maggie1059, mindest, mo-ja, moyo1997
**Big thank you to everyone who contributed to this release!**

1.17.3

What's new?

**General:**
- Update copying API header files to make Linux logic consistent with Windows ([19736](https://github.com/microsoft/onnxruntime/pull/19736)) - mszhanyi
- Pin ONNX version to fix DML and Python packaging pipeline exceptions ([20073](https://github.com/microsoft/onnxruntime/pull/20073)) - mszhanyi

**Build System & Packages:**
- Fix minimal build with training APIs enabled bug affecting Apple framework ([19858](https://github.com/microsoft/onnxruntime/pull/19858)) - edgchen1

**Core:**
- Fix SplitToSequence op with string tensor bug ([19942](https://github.com/microsoft/onnxruntime/pull/19942)) - Craigacp

**CUDA EP:**
- Fix onnxruntime_test_all build break with CUDA ([19673](https://github.com/microsoft/onnxruntime/pull/19673)) - gedoensmax
- Fix broken pooling CUDA NHWC ops and ensure NCHW / NHWC parity ([19889](https://github.com/microsoft/onnxruntime/pull/19889)) - mtavenrath

**TensorRT EP:**
- Fix TensorRT build break caused by image update ([19880](https://github.com/microsoft/onnxruntime/pull/19880)) - jywu-msft
- Fix TensorRT custom op list concurrency bug ([20093](https://github.com/microsoft/onnxruntime/pull/20093)) - chilo-ms

**Web:**
- Add hardSigmoid op support and hardSigmoid activation for fusedConv ([19215](https://github.com/microsoft/onnxruntime/pull/19215), [#19233](https://github.com/microsoft/onnxruntime/pull/19233)) - qjia7
- Add support for WebNN async API with Asyncify ([19145](https://github.com/microsoft/onnxruntime/pull/19145)) - Honry
- Add uniform support for conv, conv transpose, conv grouped, and fp16 ([18753](https://github.com/microsoft/onnxruntime/pull/18753), [#19098](https://github.com/microsoft/onnxruntime/pull/19098)) - axinging
- Add capture and replay support for JS EP ([18989](https://github.com/microsoft/onnxruntime/pull/18989)) - fs-eire
- Add LeakyRelu activation for fusedConv ([19369](https://github.com/microsoft/onnxruntime/pull/19369)) - qjia7
- Add FastGelu custom op support ([19392](https://github.com/microsoft/onnxruntime/pull/19392)) - fs-eire
- Allow uint8 tensors for WebGPU ([19545](https://github.com/microsoft/onnxruntime/pull/19545)) - satyajandhyala
- Add and optimize MatMulNBits ([19852](https://github.com/microsoft/onnxruntime/pull/19852)) - satyajandhyala
- Enable ort-web with any Float16Array polyfill ([19305](https://github.com/microsoft/onnxruntime/pull/19305)) - fs-eire
- Allow multiple EPs to be specified in backend resolve logic ([19735](https://github.com/microsoft/onnxruntime/pull/19735)) - fs-eire
- Various bug fixes: ([19258](https://github.com/microsoft/onnxruntime/pull/19258)) - gyagp, ([#19201](https://github.com/microsoft/onnxruntime/pull/19201), [#19554](https://github.com/microsoft/onnxruntime/pull/19554)) - hujiajie, ([#19262](https://github.com/microsoft/onnxruntime/pull/19262), [#19981](https://github.com/microsoft/onnxruntime/pull/19981)) - guschmue, ([#19581](https://github.com/microsoft/onnxruntime/pull/19581), [#19596](https://github.com/microsoft/onnxruntime/pull/19596), [#19387](https://github.com/microsoft/onnxruntime/pull/19387)) - axinging, ([#19613](https://github.com/microsoft/onnxruntime/pull/19613)) - satyajandhyala
- Various improvements for performance and usability: ([19202](https://github.com/microsoft/onnxruntime/pull/19202)) - qjia7, ([#18900](https://github.com/microsoft/onnxruntime/pull/18900), [#19281](https://github.com/microsoft/onnxruntime/pull/19281), [#18883](https://github.com/microsoft/onnxruntime/pull/18883)) - axinging, ([#18788](https://github.com/microsoft/onnxruntime/pull/18788), [#19737](https://github.com/microsoft/onnxruntime/pull/19737)) - satyajandhyala, ([#19610](https://github.com/microsoft/onnxruntime/pull/19610)) - segevfiner, ([#19614](https://github.com/microsoft/onnxruntime/pull/19614), [#19702](https://github.com/microsoft/onnxruntime/pull/19702), [#19677](https://github.com/microsoft/onnxruntime/pull/19677), [#19857](https://github.com/microsoft/onnxruntime/pull/19857), [#19940](https://github.com/microsoft/onnxruntime/pull/19940)) - fs-eire, ([#19791](https://github.com/microsoft/onnxruntime/pull/19791)) - gyagp, ([#19868](https://github.com/microsoft/onnxruntime/pull/19868)) - guschmue, ([#19433](https://github.com/microsoft/onnxruntime/pull/19433)) - martholomew, ([#19932](https://github.com/microsoft/onnxruntime/pull/19932)) - ibelem

**Windows:**
- Fix Windows memory mapping bug affecting some larger models ([19623](https://github.com/microsoft/onnxruntime/pull/19623)) - yufenglee

**Kernel Optimizations:**
- Fix GQA and Rotary Embedding bugs affecting some models ([19801](https://github.com/microsoft/onnxruntime/pull/19801), [#19874](https://github.com/microsoft/onnxruntime/pull/19874)) - aciddelgado
- Update replacement of MultiHeadAttention (MHA) and GroupQueryAttention (GQA) ([19882](https://github.com/microsoft/onnxruntime/pull/19882)) - kunal-vaishnavi
- Add support for packed QKV input and Rotary Embedding with sm<80 using Memory Efficient Attention kernel ([20012](https://github.com/microsoft/onnxruntime/pull/20012)) - aciddelgado

**Models:**
- Add support for benchmarking LLaMA model end-to-end performance ([19985](https://github.com/microsoft/onnxruntime/pull/19985), [#20033](https://github.com/microsoft/onnxruntime/pull/20033), [#20149](https://github.com/microsoft/onnxruntime/pull/20149)) - kunal-vaishnavi
- Add example to demonstrate export of OpenAI Whisper implementation with batched prompts ([19854](https://github.com/microsoft/onnxruntime/pull/19854)) - shubhambhokare1

This patch release also includes additional fixes by spampana95 and enximi. **Big thank you to all our contributors!**

1.17.1

This patch release includes the following updates:

General

- Update thread affinity on server so it is only set with auto affinity ([19318](https://github.com/microsoft/onnxruntime/pull/19318)) - ivberg

Build System and Packages

- Fix bug that was breaking arm64 build by disabling __cpuid check on arm64 builds since intrinsic is not available ([19574](https://github.com/microsoft/onnxruntime/pull/19574)) - smk2007

Core

- Add capturestate / rundown ETW support logging for session and provider options ([19397](https://github.com/microsoft/onnxruntime/pull/19397)) - ivberg
- Restrict L2 cache core check on Intel devices ([19483](https://github.com/microsoft/onnxruntime/pull/19483)) - smk2007

Performance

- Optimize KahnsTopologicalSort and PriorityNodeCompare to fix performance degradation in session creation time that was affecting many models ([19475](https://github.com/microsoft/onnxruntime/pull/19475)) - smk2007

EPs

- Enable DirectML on Windows and CUDA on Linux for Node.js binding ([19274](https://github.com/microsoft/onnxruntime/pull/19274)) - jchen351

QNN

- Fix split index bugs uncovered by QNN SDK 2.19 release ([19381](https://github.com/microsoft/onnxruntime/pull/19381)) - adrianlizarraga
- Add job that builds x64 Python wheels for QNN EP so cached QNN models can be created on Windows x64 ([19499](https://github.com/microsoft/onnxruntime/pull/19499)) - adrianlizarraga

OpenVINO

- Fix bugs for API backwards compatibility ([19482](https://github.com/microsoft/onnxruntime/pull/19482)) - preetha-intel

DirectML

- Fix bug in external data packing that was causing crash ([19415](https://github.com/microsoft/onnxruntime/pull/19415)) - PatriceVignola
- Fix bug in allocation planner by disabling streams for DML EP ([19481](https://github.com/microsoft/onnxruntime/pull/19481)) - PatriceVignola

Web

- Fix bug with types export in package.json ([19458](https://github.com/microsoft/onnxruntime/pull/19458)) - fs-eire

Training

- Reduce onnxruntime-training package size so it can be published on PyPI ([19486](https://github.com/microsoft/onnxruntime/pull/19486)) - baijumeswani
- Update default std flag used during torch extensions compilation ([19516](https://github.com/microsoft/onnxruntime/pull/19516)) - baijumeswani
- Add ATen fallback support for bicubic interpolation algorithm ([19380](https://github.com/microsoft/onnxruntime/pull/19380)) - prathikr

Quantization

- Update Q/DQ quantization to ensure Microsoft opset ([19335](https://github.com/microsoft/onnxruntime/pull/19335)) - adrianlizarraga
- Add contrib Q/DQ ops to symbolic shape inference tool ([19340](https://github.com/microsoft/onnxruntime/pull/19340)) - adrianlizarraga
- Fix subgraph quantization regression ([19421](https://github.com/microsoft/onnxruntime/pull/19421)) - fxmarty
- Add DefaultTensorType option to specify the default tensor type to quantize ([19455](https://github.com/microsoft/onnxruntime/pull/19455)) - yufenglee (see the sketch after this list)
- Fix bug with command line argparse to process --symmetric [True|False] correctly ([19577](https://github.com/microsoft/onnxruntime/pull/19577)) - satyajandhyala
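
A minimal sketch of the DefaultTensorType option. Passing it through `extra_options` follows the quantization tool's conventions but is our assumption; the data reader below is a hypothetical stand-in for a real calibration set.

```python
import onnx
from onnxruntime.quantization import CalibrationDataReader, quantize_static

class MyReader(CalibrationDataReader):  # hypothetical reader over your calibration data
    def __init__(self, batches):
        self._it = iter(batches)

    def get_next(self):
        return next(self._it, None)  # None signals the end of the data

quantize_static(
    "model_fp32.onnx",
    "model_int8.onnx",
    calibration_data_reader=MyReader([{"input": ...}]),  # feed real input dicts here
    extra_options={"DefaultTensorType": onnx.TensorProto.FLOAT},
)
```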

Whisper Model

- Fix bug in BeamSearch implementation of Whisper model that was causing a crash in some scenarios ([19345](https://github.com/microsoft/onnxruntime/pull/19345)) - petermcaughan
- Fix bug in Whisper model timestamps and temperature ([19509](https://github.com/microsoft/onnxruntime/pull/19509)) - kunal-vaishnavi

1.17.0

Announcements
In the next release, support for Windows ARM32 will be dropped entirely.

General
- Added support for new ONNX 1.15 opsets: [IsInf-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#IsInf-20), [IsNaN-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#IsNaN-20), [DFT-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#DFT-20), [ReduceMax-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ReduceMax-20), [ReduceMin-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#reducemin-20), [AffineGrid-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#AffineGrid-20), [GridSample](https://github.com/onnx/onnx/blob/main/docs/Operators.md#GridSample), [ConstantOfShape-20](https://github.com/onnx/onnx/blob/main/docs/Changelog.md#ConstantOfShape-20), [RegexFullMatch](https://github.com/onnx/onnx/blob/main/docs/Operators.md#RegexFullMatch), [StringConcat](https://github.com/onnx/onnx/blob/main/docs/Operators.md#StringConcat), [StringSplit](https://github.com/onnx/onnx/blob/main/docs/Operators.md#StringSplit), and [ai.onnx.ml.LabelEncoder-4](https://github.com/onnx/onnx/blob/main/docs/Changelog-ml.md#ai.onnx.ml.LabelEncoder-4).
- Updated C/C++ libraries: abseil, date, nsync, googletest, wil, mp11, cpuinfo, safeint, and onnx.
- Added vector optimization code for the LoongArch architecture.

Build System and Packages
- Dropped CentOS 7 support. All Linux binaries now require glibc version >=2.28, but users can still build the source code for a lower glibc version.
- Added CUDA 12 packages for Python and Nuget.
- Added Python 3.12 packages for ONNX Runtime Inference. ONNX Runtime Training Python 3.12 packages cannot be provided at this time since training packages depend on PyTorch, which does not support Python 3.12 yet.
- Linux binaries (except those in AMD GPU packages) are built in a more secure way that is compliant with BinSkim's default policy (e.g., the binaries no longer have an executable stack).
- Added support for Windows ARM64X for users who build ONNX Runtime from source. No prebuilt package provided yet.
- Removed Windows ARM32 binaries from official packages. Users who still need these binaries can build them from source.
- Added AMD GPU package with ROCm and MIGraphX (Python + Linux only).
- Split ONNX Runtime GPU Nuget package into two packages.
- When building the source code for Linux ARM64 or Android, the C/C++ compiler must support BFloat16. Support for Android NDK 24.x has been removed. Please use NDK 25.x or 26.x instead.
- Link time code generation (LTCG or LTO) is now disabled by default when building from source. To re-enable it, users can add "--enable_lto" to the build command. All prebuilt binaries are still built with LTO.

Core
- Optimized graph inlining.
- Allow custom op to invoke internal thread-pool for parallelism.
- Added support for supplying a custom logger at the session level.
- Added new logging and tracing of session and execution provider options.
- Added new [dynamic ETW provider](https://onnxruntime.ai/docs/performance/tune-performance/logging_tracing.html#Tracing---Windows) that can trace/diagnose ONNX internals while maintaining great performance.

Performance
- Added 4bit quant support on NVIDIA GPU and ARM64.

EPs
TensorRT EP
- Added support for direct load of precompiled TensorRT engines and customizable engine prefix (see the sketch after this list).
- Added Python support for TensorRT plugins via ORT custom ops.
- Fixed concurrent Session::Run bugs.
- Updated calls to deprecated TensorRT APIs (e.g., enqueue_v2 → enqueue_v3).
- Fixed various memory leak bugs.
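
A sketch of engine caching with the customizable prefix; `trt_engine_cache_prefix` is our reading of the new option name, so verify against the TensorRT EP docs.

```python
import onnxruntime as ort

trt_opts = {
    "trt_engine_cache_enable": "1",
    "trt_engine_cache_path": "./trt_cache",
    "trt_engine_cache_prefix": "my_model",  # assumed spelling of the new prefix option
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("TensorrtExecutionProvider", trt_opts)],
)
```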

QNN EP
- Added support for QNN SDK 2.18.
- Added context binary caching and model initialization optimizations.
- Added mixed precision (8/16 bit) quantization support.
- Added device-level session options (soc_model, htp_arch, device_id), extreme_power_saver for htp_performance_mode, and vtcm_mb settings.
- Fixed multi-threaded inference bug.
- Fixed various other bugs and added performance improvements.
- QNN [profiling](https://onnxruntime.ai/docs/execution-providers/QNN-ExecutionProvider.html#configuration-options) of the NPU can be enabled [dynamically with ETW](https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html#Qualcomm-QNN-EP) or [write out to CSV](https://onnxruntime.ai/docs/performance/tune-performance/profiling-tools.html#Cross-Platform-CSV-Tracing).

OpenVINO EP
- Added support for OpenVINO 2023.2.
- Added AppendExecutionProvider_OpenVINO_V2 API for supporting new OpenVINO EP options.

DirectML EP
- Updated to [DirectML 1.13.1](https://github.com/microsoft/DirectML/blob/master/Releases.md).
- Updated operators LpPool-18 and AveragePool-19 with dilations.
- Improved Python I/O binding support.
- Added RotaryEmbedding.
- Added support for fusing subgraphs into DirectML execution plans.
- Added new Python API to choose a specific GPU on multi-GPU devices with the DirectML EP (see the sketch below).
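
A sketch of selecting a specific adapter on a multi-GPU machine; passing `device_id` as a DirectML provider option is our assumption of the new API's shape.

```python
import onnxruntime as ort

# device_id selects the adapter; 0 is typically the system default (assumed option name).
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("DmlExecutionProvider", {"device_id": 1})],
)
```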

Mobile
- Added initial support for 4bit quantization on ARM64.
- Extended CoreML/NNAPI operator coverage.
- Added support for YOLOv8 pose detection pre/post processing.
- Added support for macOS in CocoaPods package.

Web
- Added support for external data format.
- Added support for I/O bindings.
- Added support for training.
- Added WebGPU optimizations.
- Transitioned WebGPU out of experimental.
- Added FP16 support for WebGPU.

Training

Large Model Training
- Enabled support for QLoRA (with support for BFloat16).
- Added symbolic shape support for Triton codegen (see [PR](https://github.com/microsoft/onnxruntime/pull/18317)).
- Made improvements to recompute optimizer with easy ON/OFF to allow layer-wise recompute (see [PR](https://github.com/microsoft/onnxruntime/pull/18566)).
- Enabled memory-efficient gradient management. For Mistral, we see ~10GB drop in memory consumption when this feature is ON (see [PR](https://github.com/microsoft/onnxruntime/pull/18924)).
- Enabled embedding sparsity optimizations.
- Added support for Aten efficient attention and Triton Flash Attention (see [PR](https://github.com/microsoft/onnxruntime/pull/17959)).
- Packages now available for CUDA 11.8 and 12.1.

On Device Training
- On-device training now supports training on the web. This release focuses on federated learning and developer exploration scenarios, with more features coming in future releases.

Extensions
- Modified gen_processing_model tokenizer model to output int64, unifying output datatype of all tokenizers.
- Implemented support for post-processing of YOLO v8 within the Python extensions package.
- Introduced 'fairseq' flag to enhance compatibility with certain Hugging Face tokenizers.
- Incorporated 'added_token' attribute into the BPE tokenizer to improve CodeGen tokenizer functionality.
- Enhanced the SentencePiece tokenizer by integrating token indices into the output.
- Added support for the custom operator implemented with CUDA kernels, including two example operators.
- Added more tests on the Hugging Face tokenizer and fixed identified bugs.

Known Issues
- The onnxruntime-training package is not yet available on PyPI but can be accessed in ADO as follows:

python -m pip install cerberus flatbuffers h5py "numpy>=1.16.6" onnx packaging protobuf sympy "setuptools>=41.4.0"
pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training
pip install torch-ort
python -m torch_ort.configure

Installation instructions can also be accessed [here](https://onnxruntime.ai/getting-started).
- For models with the int4 kernel only:
  - A crash may occur when int4 is applied on Intel CPUs with hybrid cores if the E-cores are disabled in BIOS. A fix is in progress.
  - The "neural-speed" library used by the int4 kernels has a bug that could lead to out-of-bounds memory reads/writes.
  - A performance regression in the int4 kernel on x64 makes the op following MatMulNBits much slower. A fix is in progress.
- A bug in the BeamSearch implementation of T5, GPT, and Whisper may break these models under heavy inference load when using BeamSearch on CUDA. See [19345](https://github.com/microsoft/onnxruntime/pull/19345). A fix is in progress.
- Full support of ONNX 1.15 opsets is still in progress. A list of new ONNX 1.15 opset support that has been included in this release can be found above in the 'General' section.
- Some Cast nodes will not be removed (see https://github.com/microsoft/onnxruntime/pull/17953): a Cast node from higher precision to lower precision (e.g., fp32 to fp16) will be kept. If model results differ between ORT 1.16 and 1.17, check whether a Cast node was removed in 1.16 but kept in 1.17.
- When running ONNX Runtime's Python 3.12 package on Windows 11, you may see a warning like: “Unsupported Windows version (11). ONNX Runtime supports Windows 10 and above, only.” You may safely ignore it.

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
Changming Sun, Yulong Wang, Tianlei Wu, Yi Zhang, Jian Chen, Jiajia Qin, Adrian Lizarraga, Scott McKay, Wanming Lin, pengwa, Hector Li, Chi Lo, Dmitri Smirnov, Edward Chen, Xu Xing, satyajandhyala, Rachel Guo, PeixuanZuo, RandySheriffH, Xavier Dupré, Patrice Vignola, Baiju Meswani, Guenther Schmuelling, Jeff Bloomfield, Vincent Wang, cloudhan, zesongw, Arthur Islamov, Wei-Sheng Chin, Yifan Li, raoanag, Caroline Zhu, Sheil Kumar, Ashwini Khade, liqun Fu, xhcao, aciddelgado, kunal-vaishnavi, Aditya Goel, Hariharan Seshadri, Ye Wang, Adam Pocock, Chen Fu, Jambay Kinley, Kaz Nishimura, Maximilian Müller, Yang Gu, guyang3532, mindest, Abhishek Jindal, Justin Chu, Numfor Tiapo, Prathik Rao, Yufeng Li, cao lei, snadampal, sophies927, BoarQing, Bowen Bao, George Wu, Jiajie Hu, MistEO, Nat Kershaw (MSFT), Sumit Agarwal, Ted Themistokleous, ivberg, zhijiang, Christian Larson, Frank Dong, Jeff Daily, Nicolò Lucchesi, Pranav Sharma, Preetha Veeramalai, Cheng Tang, Xiang Zhang, junchao-loongson, petermcaughan, rui-ren, shaahji, simonjub, trajep, Adam Louly, Akshay Sonawane, Artem Shilkin, Atanas Dimitrov, AtanasDimitrovQC, BODAPATIMAHESH, Bart Verhagen, Ben Niu, Benedikt Hilmes,
Brian Lambert, David Justice, Deoksang Kim, Ella Charlaix, Emmanuel Ferdman, Faith Xu, Frank Baele, George Nash, hans00, computerscienceiscool, Jake Mathern, James Baker, Jiangzhuo, Kevin Chen, Lennart Hannink, Lukas Berbuer, Mike Guo, Milos Puzovic, Mustafa Ateş Uzun, Peishen Yan, Ran Gal, Ryan Hill, Steven Roussey, Suryaprakash Shanmugam, Vadym Stupakov, Yiming Hu, Yueqing Zhang, Yvonne Chen, Zhang Lei, Zhipeng Han, aimilefth, gunandrose4u, kailums, kushalpatil07, kyoshisuki, luoyu-intel, moyo1997, tbqh, weischan-quic, wejoncy, winskuo-quic, wirthual, yuwenzho

1.16.3

What's Changed
1. Stable Diffusion XL demo update by tianleiwu in https://github.com/microsoft/onnxruntime/pull/18496
2. Fixed a memory leak issue (#18466) in TensorRT EP by chilo-ms in https://github.com/microsoft/onnxruntime/pull/18467
3. Fixed a use-after-free bug in the SaveInputOutputNamesToNodeMapping function by snnn in https://github.com/microsoft/onnxruntime/pull/18456. The issue was found by AddressSanitizer.
