onnxruntime

Latest version: v1.20.1

1.9

Announcements
* Builds will require a C++17 compiler
* The GPU build will be updated to CUDA 11.1

General
* ONNX opset 14 support - new and updated operators from the [ONNX 1.9 release](https://github.com/onnx/onnx/releases/tag/v1.9.0)
* Dynamically loadable CUDA execution provider
  * Allows a single build to work for both CPU and GPU (excludes Python packages)
* [Profiler tool](http://www.onnxruntime.ai/docs/how-to/tune-performance.html#profiling-and-performance-report) now includes information on threadpool usage:
  * multi-threading preparation time
  * multi-threading run time
  * multi-threading wait time
* *[Experimental]* [onnxruntime-extensions package](http://pypi.org/project/onnxruntime-extensions)
  * Crowd-sourced library of common/shareable custom operator implementations that can be loaded and run with ONNX Runtime; community contributions are welcome! - [microsoft/onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions)
  * Currently includes mostly ops and tokenizers for string operations (full list [here](https://github.com/microsoft/onnxruntime-extensions/tree/main/operators))
  * Tutorials to export and load custom ops from onnxruntime-extensions: [TensorFlow](https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/tf2onnx_custom_ops_tutorial.ipynb), [PyTorch](https://github.com/microsoft/onnxruntime-extensions/blob/main/tutorials/pytorch_custom_ops_tutorial.ipynb); a loading sketch follows this list
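
A minimal sketch of loading the extensions library into a session and enabling the profiler with threadpool information. The model filename and input feed are placeholders; `get_library_path` and `register_custom_ops_library` are the documented entry points for this workflow:

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

so = ort.SessionOptions()
so.enable_profiling = True                            # profile output includes threadpool usage
so.register_custom_ops_library(get_library_path())   # load onnxruntime-extensions custom ops

# "model_with_custom_ops.onnx" is a placeholder for a model using extensions operators.
sess = ort.InferenceSession("model_with_custom_ops.onnx", so)
# ... run inference ...
profile_file = sess.end_profiling()  # writes a JSON trace viewable in chrome://tracing
```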

Training
* [torch-ort](https://pypi.org/project/torch-ort/) package released as the ONNX Runtime backend in PyTorch (see the usage sketch after this list)
* [onnxruntime-training-gpu](https://pypi.org/project/onnxruntime-training) and [onnxruntime-training-rocm](https://onnxruntimepackages.z14.web.core.windows.net/onnxruntime_stable_rocm42.html) packages now available for distributed training on NVIDIA and AMD GPUs
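
A minimal sketch of the torch-ort usage pattern, assuming `torch-ort` and an onnxruntime-training backend are installed; the tiny model is a placeholder:

```python
import torch
from torch_ort import ORTModule

class TinyModel(torch.nn.Module):  # placeholder model for illustration
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(784, 10)

    def forward(self, x):
        return self.fc(x)

model = ORTModule(TinyModel())  # forward and backward now execute through ORT
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
# The training loop itself is unchanged: loss.backward(), optimizer.step(), etc.
```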

Mobile
* Official package now available
  * [Pre-built Android and iOS packages](https://onnxruntime.ai/docs/how-to/mobile/overview.html#pre-built-package) with support for selected operators and data types
  * Objective-C API for iOS in preview
* Expanded operators supported by the NNAPI (Android) and CoreML (iOS) execution providers
* All operators in the ai.onnx domain now support type reduction
  * Create an ORT format model with the `--enable_type_reduction` flag, and perform a minimal build with the `--enable_reduced_operator_type_support` flag

ORT Web
* New [ONNX Runtime JavaScript API](https://github.com/microsoft/onnxruntime/tree/master/js#onnxruntime-web)
* ONNX Runtime Web package
  * Supports WebAssembly (CPU) and WebGL (GPU) backends
  * Supports a Web Worker based multi-threaded WebAssembly backend
  * Supports the ORT model format
* Improved WebGL performance

Performance
* Memory footprint reduction through shared pre-packed weights for shared initializers
  * Pre-packing refers to pre-processing weights into a kernel-friendly layout at model load time
  * Allows pre-packed weights of shared initializers to also be shared between sessions, preserving the memory savings from using shared initializers
* Memory footprint reduction through arena shrinkage
  * By default, the memory arena doesn't shrink; it holds onto any allocated memory forever. This feature exposes a RunOption that scans the arena and potentially returns unused memory to the system after the end of a Run. It is particularly useful when running a dynamic-shape model that occasionally processes an outlier inference request requiring a large amount of memory: if the shrinkage option is invoked as part of such Runs, the memory required for that Run is not held forever by the arena. See the sketch below.
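
A sketch of invoking arena shrinkage through RunOptions. The model path and input name are placeholders; the config key is the one exposed for this feature in the run-options config keys:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("dynamic_shape_model.onnx")  # placeholder model

ro = ort.RunOptions()
# Request that the CPU arena return unused chunks to the system after this Run.
ro.add_run_config_entry("memory.enable_memory_arena_shrinkage", "cpu:0")

# An outlier, large-shape request: its allocations can be reclaimed afterwards.
outputs = sess.run(None, {"input": np.zeros((1, 3, 2048, 2048), np.float32)}, ro)
```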

* Quantization
  * Native support of the Quantize-Dequantize (QDQ) format for CPU (see the sketch after this list)
  * Support for Concat, Transpose, GlobalAveragePool, AveragePool, Resize, Squeeze
  * Improved performance on high-end ARM devices by leveraging dot-product instructions
  * Improved performance for batched quantized GEMM with optimized multi-threading logic
  * Per-column quantization for MatMul
* Transformers
  * GPT-2 and beam search integration ([example](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/notebooks/Inference_GPT2-OneStepSearch_OnnxRuntime_CPU.ipynb))
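
A sketch of producing a QDQ-format model with per-channel weights using the quantization tool. The calibration reader, model paths, and input name are placeholders, and exact argument availability (`quant_format`, `per_channel`) varies by onnxruntime version:

```python
import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantFormat, quantize_static

class ZeroReader(CalibrationDataReader):
    """Trivial calibration reader feeding a single all-zeros batch (illustration only)."""
    def __init__(self):
        self.batches = iter([{"input": np.zeros((1, 3, 224, 224), np.float32)}])

    def get_next(self):
        return next(self.batches, None)

quantize_static(
    "model.onnx",                  # placeholder float32 input model
    "model_qdq.onnx",              # quantized output in QDQ format
    ZeroReader(),
    quant_format=QuantFormat.QDQ,  # emit QuantizeLinear/DequantizeLinear node pairs
    per_channel=True,              # per-column quantization for MatMul/Conv weights
)
```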

APIs
* WinML
  * New native WinML API, SetIntraOpThreadSpinning, for toggling IntraOp thread spin behavior. When enabled and there is no current workload, IntraOp threads continue spinning for some additional time while waiting for additional work; this can result in better performance for the current workload but may impact the performance of other, unrelated workloads. The toggle is enabled by default. (A related session-level toggle appears in the sketch after this list.)
* ORT Inferencing
  * The following APIs have been added in this release; please check the [API documentation](http://www.onnxruntime.ai/docs/reference/api/c-api.html#api-reference) for details:
    * KernelInfoGetAttributeArray_float
    * KernelInfoGetAttributeArray_int64
    * CreateArenaCfgV2
    * AddRunConfigEntry
    * CreatePrepackedWeightsContainer
    * PrepackedWeightsContainer
    * CreateSessionWithPrepackedWeightsContainer
    * CreateSessionFromArrayWithPrepackedWeightsContainer
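
The pre-packed weights container is a C API feature. A rough Python-level analogue is sharing initializers across sessions with `SessionOptions.add_initializer`, shown below together with the session-config counterpart of the thread-spinning toggle. Model paths, the initializer name, and the availability of these options in a given build are assumptions:

```python
import numpy as np
import onnxruntime as ort

# One CPU OrtValue backing an initializer used by several models.
weight = ort.OrtValue.ortvalue_from_numpy(np.ones((1024, 1024), np.float32))

so = ort.SessionOptions()
so.add_session_config_entry("session.intra_op.allow_spinning", "0")  # disable spin-waiting
so.add_initializer("fc.weight", weight)  # placeholder initializer name

# Both sessions reference the same weight buffer instead of owning copies.
sess_a = ort.InferenceSession("model_a.onnx", so)  # placeholder models
sess_b = ort.InferenceSession("model_b.onnx", so)
```
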
Execution Providers
* TensorRT
  * Added support for TensorRT EP configuration using session options instead of environment variables (see the sketch after this section)
  * Added support for DLA on Jetson Xavier (AGX, NX)
  * General bug fixes and quality improvements
* OpenVINO
  * Added support for OpenVINO 2021.3
  * Removed support for OpenVINO 2020.4
  * Added support for loading/saving of blobs on MyriadX devices to avoid expensive model blob compilation at runtime
* DirectML
  * Supports ARM/ARM64 architectures now in the WinML and ONNX Runtime NuGet packages
  * Support for 8-dimensional tensors for: BatchNormalization, Cast, Join, LpNormalization, MeanVarianceNormalization, Padding, Tile, TopK
  * Substantial performance improvements for several operators
  * Resize nearest_mode "floor" and "round_prefer_ceil"
  * Fused activations for: Conv, ConvTranspose, BatchNormalization, MeanVarianceNormalization, Gemm, MatMul
  * Decomposes unsupported QLinearSigmoid operation
  * Removes strided 64-bit emulation in Cast
  * Allows empty shapes on constant CPU inputs
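
A sketch of configuring the TensorRT EP at session creation in Python rather than through environment variables. The `provider_options` mechanism and the `trt_*` keys shown are assumptions based on the documented TensorRT EP options; check the EP docs for your version:

```python
import onnxruntime as ort

trt_options = {
    "device_id": "0",
    "trt_max_workspace_size": "2147483648",  # 2 GB
    "trt_fp16_enable": "True",
}
sess = ort.InferenceSession(
    "model.onnx",  # placeholder model
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
    provider_options=[trt_options, {"device_id": "0"}],  # one dict per provider
)
```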


Known issues

* This release has an issue that may result in segmentation faults when deployed on Intel 12th Gen processors with a hybrid architecture of Performance and Efficiency cores (P-cores and E-cores). **This has been fixed in ORT 1.9.**
* The CUDA build of this release has a regression: memory utilization increases significantly compared to previous releases. A fix will be released shortly as part of the 1.8.1 patch. An incomplete list of issues where this was reported: 8287, 8171, 8147.
* The GPU part of the source code is not compatible with:
  - Visual Studio 2019 16.10.0 (released on May 25, 2021); 16.9.x is fine
  - clang 12
* The CPU part of the source code is not compatible with:
  - VS 2017 (https://github.com/microsoft/onnxruntime/issues/7936); until this is fixed, please use VS 2019 instead
  - GCC 11 (see 7918)
* The C OpenVino EP is broken (7951).
* Python and Windows only: if your cuDNN DLLs are not in CUDA's installation directory, you need to manually set the "CUDNN_HOME" variable; just putting them in %PATH% is not enough (7965).
* The onnxruntime-win-gpu-x64-1.8.0.zip on this page is missing important DLLs; please don't use it.

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

[snnn](https://github.com/snnn), [gwang-msft](https://github.com/gwang-msft), [baijumeswani](https://github.com/baijumeswani), [fs-eire](https://github.com/fs-eire), [edgchen1](https://github.com/edgchen1), [zhanghuanrong](https://github.com/zhanghuanrong), [yufenglee](https://github.com/yufenglee), [thiagocrepaldi](https://github.com/thiagocrepaldi), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [weixingzhang](https://github.com/weixingzhang), [tianleiwu](https://github.com/tianleiwu), [SherlockNoMad](https://github.com/SherlockNoMad), [ashbhandare](https://github.com/ashbhandare), [tracysh](https://github.com/tracysh), [satyajandhyala](https://github.com/satyajandhyala), [liqunfu](https://github.com/liqunfu), [iK1D](https://github.com/iK1D), [RandySheriffH](https://github.com/RandySheriffH), [suffiank](https://github.com/suffiank), [hanbitmyths](https://github.com/hanbitmyths), [wangyems](https://github.com/wangyems), [askhade](https://github.com/askhade), [stevenlix](https://github.com/stevenlix), [chilo-ms](https://github.com/chilo-ms), [smk2007](https://github.com/smk2007), [kit1980](https://github.com/kit1980), [codemzs](https://github.com/codemzs), [raviskolli](https://github.com/raviskolli), [pranav-prakash](https://github.com/pranav-prakash), [chenfucn](https://github.com/chenfucn), [xadupre](https://github.com/xadupre), [gramalingam](https://github.com/gramalingam), [harshithapv](https://github.com/harshithapv), [oliviajain](https://github.com/oliviajain), [xzhu1900](https://github.com/xzhu1900), [ytaous](https://github.com/ytaous), [MaajidKhan](https://github.com/MaajidKhan), [RyanUnderhill](https://github.com/RyanUnderhill), [mrry](https://github.com/mrry), [orilevari](https://github.com/orilevari), [jingyanwangms](https://github.com/jingyanwangms), [sfatimar](https://github.com/sfatimar), [KeDengMS](https://github.com/KeDengMS), [jywu-msft](https://github.com/jywu-msft), [souptc](https://github.com/souptc), [adtsai](https://github.com/adtsai), [tlh20](https://github.com/tlh20), [yuslepukhin](https://github.com/yuslepukhin), [duli2012](https://github.com/duli2012), [pranavsharma](https://github.com/pranavsharma), [faxu](https://github.com/faxu), [georgen117](https://github.com/georgen117), [jeffbloo](https://github.com/jeffbloo), [Tixxx](https://github.com/Tixxx), [wschin](https://github.com/wschin), [YUNQIUGUO](https://github.com/YUNQIUGUO), [tiagoshibata](https://github.com/tiagoshibata), [martinb35](https://github.com/martinb35), [alberto-magni](https://github.com/alberto-magni), [ryanlai2](https://github.com/ryanlai2), [Craigacp](https://github.com/Craigacp), [suryasidd](https://github.com/suryasidd), [fdwr](https://github.com/fdwr), [jcwchen](https://github.com/jcwchen), [neginraoof](https://github.com/neginraoof), [natke](https://github.com/natke), [BowenBao](https://github.com/BowenBao)

1.9.0

Announcements
* GCC version < 7 is no longer supported
* CMAKE_SYSTEM_PROCESSOR needs to be set when cross-compiling on Linux, because PyTorch cpuinfo was introduced as a dependency for ARM big.LITTLE support. Set it to the value of the `uname -m` output for your target device.

General
* ONNX 1.10 support
  * opset 15
  * ONNX IR 8 (SparseTensor type, model-local FunctionProtos; the Optional type is not yet fully supported in this release)
* Improved documentation of [C/C++ APIs](https://onnxruntime.ai/docs/api/c/)
* IBM Power support
* WinML: DLL dependency fix supports learning models on Windows 8.1
* Support for sub-building [onnxruntime-extensions](https://github.com/microsoft/onnxruntime-extensions) and statically linking it into the onnxruntime binary for custom builds
  * Add the `--use_extensions` option to run models with custom operators implemented in onnxruntime-extensions


APIs
* Registration of a custom allocator for sharing between multiple sessions (see the RegisterAllocator and UnregisterAllocator APIs in onnxruntime_c_api.h, and the sketch after this list)
* The SessionOptionsAppendExecutionProvider_TensorRT API is deprecated; use SessionOptionsAppendExecutionProvider_TensorRT_V2
* New APIs: SessionOptionsAppendExecutionProvider_TensorRT_V2, CreateTensorRTProviderOptions, UpdateTensorRTProviderOptions, GetTensorRTProviderOptionsAsString, ReleaseTensorRTProviderOptions, EnableOrtCustomOps, RegisterAllocator, UnregisterAllocator, IsSparseTensor, CreateSparseTensorAsOrtValue, FillSparseTensorCoo, FillSparseTensorCsr, FillSparseTensorBlockSparse, CreateSparseTensorWithValuesAsOrtValue, UseCooIndices, UseCsrIndices, UseBlockSparseIndices, GetSparseTensorFormat, GetSparseTensorValuesTypeAndShape, GetSparseTensorValues, GetSparseTensorIndicesTypeShape, GetSparseTensorIndices
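
A sketch of registering a shared (environment-level) allocator from Python, mirroring the RegisterAllocator C API; names follow the Python binding, and the model path is a placeholder:

```python
import onnxruntime as ort

mem_info = ort.OrtMemoryInfo(
    "Cpu", ort.OrtAllocatorType.ORT_ARENA_ALLOCATOR, 0, ort.OrtMemType.DEFAULT
)
arena_cfg = ort.OrtArenaCfg(0, -1, -1, -1)  # default arena limits and extend strategy
ort.create_and_register_allocator(mem_info, arena_cfg)

so = ort.SessionOptions()
so.add_session_config_entry("session.use_env_allocators", "1")  # opt this session in
sess = ort.InferenceSession("model.onnx", so)  # placeholder model
```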

Performance and quantization
* Performance improvements on ARM
  * Added an S8S8 (signed int8 by signed int8) matmul kernel. This avoids extending uint8 to int16, for better performance on ARM64 devices without dot-product instructions
  * Expanded the GEMM udot kernel to an 8x8 accumulator
  * Added sgemm and qgemm optimized kernels for ARM64EC
* Operator improvements
  * Improved performance for quantized operators: DynamicQuantizeLSTM, QLinearAvgPool
  * Added new quantized operator QGemm for quantizing Gemm directly
  * Fused HardSigmoid and Conv
* Quantization tool: subgraph support
* Transformers tool improvements (see the sketch after this list)
  * Fused Attention for the BART encoder and Megatron GPT-2
  * Integrated mixed-precision ONNX conversion and parity test for GPT-2
  * Updated graph fusion for embed layer normalization for BERT
  * Improved symbolic shape inference for operators: Attention, EmbedLayerNormalization, Einsum and Reciprocal
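
A sketch of applying these transformer fusions offline with the transformers optimizer tool; the model path and BERT hyperparameters are placeholders:

```python
from onnxruntime.transformers import optimizer

# Applies Attention / EmbedLayerNormalization fusions, among others.
opt_model = optimizer.optimize_model(
    "bert.onnx",        # placeholder exported BERT model
    model_type="bert",
    num_heads=12,       # must match the exported model's architecture
    hidden_size=768,
)
opt_model.save_model_to_file("bert_opt.onnx")
```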

Packages
* Official ORT GPU packages (except Python) now include both the CUDA and TensorRT Execution Providers.
  * Python packages will be updated in the next release. Please note that EPs should be explicitly registered to ensure the correct provider is used (see the sketch after this list).
  * GPU packages are built with CUDA 11.4 and should be compatible with 11.x on systems with the minimum required driver version. See: [CUDA minor version compatibility](https://docs.nvidia.com/deploy/cuda-compatibility/#minor-version-compatibility)
* PyPI
  * ORT + DirectML Python packages now available: [onnxruntime-directml](https://pypi.org/project/onnxruntime-directml/)
  * The GPU package can be used on both CPU-only and GPU machines
* NuGet
  * C#: Added support for using netstandard2.0 as a target framework
  * Windows symbol (PDB) files are no longer included in the NuGet package, reducing the size of the binary NuGet package by 85%. To download them, please see the release artifacts on GitHub.
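
A sketch of explicit EP registration at session creation, as recommended above; the model path is a placeholder:

```python
import onnxruntime as ort

print(ort.get_available_providers())  # e.g. TensorRT, CUDA, CPU in the GPU package

# List providers in priority order; ORT falls back down the list per node.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```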

Execution Providers
* CUDA EP
  * Framework improvements that boost CUDA performance of subgraph-heavy models (8642, 8702)
  * Support for sequence ops, improving performance for models that use sequence types
  * Kernel performance improvements for Pad and Upsample (up to 4.5x faster)
* TensorRT EP
  * Added support for TensorRT 8.0 (x64 Windows/Linux, ARM Jetson), which includes the new TensorRT explicit-quantization features (ONNX Q/DQ support)
  * General fixes and quality improvements
* OpenVINO EP
  * Added support for OpenVINO 2021.4
* DirectML EP
  * Bug fix for Identity with non-float inputs affecting the DynamicQuantizeLinear ONNX backend test

ORT Web
* WebAssembly
  * SIMD (Single Instruction, Multiple Data) support
  * Option to load WebAssembly from a worker thread to avoid blocking the main UI thread
  * wasm file path override
* WebGL
  * Simpler workflow for WebGL kernel implementation
  * Improved performance with Conv kernel enhancements

ORT Mobile
* Added more [example mobile apps](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/mobile)
* CoreML and NNAPI EP enhancements
* Reduced peak memory usage when initializing a session with an ORT format model as bytes
* Enhanced partitioning to improve performance when using NNAPI and CoreML
  * Reduces the number of NNAPI/CoreML partitions required
  * Adds the ability to force usage of the CPU for post-processing in SSD models
    * Improves performance by avoiding an expensive device copy to/from the NPU for the cheap post-processing section of the model
* Changed to using xcframework in the iOS package
  * Supports usage of the arm64 iPhone simulator on Macs with Apple silicon

ORT Training
* Expanded the supported input formats to include dictionaries and lists
* Enabled user-defined autograd functions
* Support for fallback to PyTorch for execution
* Added support for deterministic compute to enable reproducibility with ORTModule
* Added DebugOptions and LogLevels to the ORTModule API to improve debuggability (see the sketch after this list)
* Improvements and additions to kernels/gradients: Concat, Split, MatMul, ReluGrad, PadOp, Tile, BatchNormInternal
* Support for ROCm 4.3.1 on AMD GPUs
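
A sketch of the ORTModule debug options mentioned above; the import path and argument names follow the onnxruntime-training ortmodule package and may differ by version, and the model is a placeholder:

```python
import torch
from onnxruntime.training.ortmodule import ORTModule, DebugOptions, LogLevel

model = torch.nn.Linear(784, 10)  # placeholder model
model = ORTModule(
    model,
    DebugOptions(
        log_level=LogLevel.VERBOSE,  # raise log verbosity for debugging
        save_onnx=True,              # dump the exported/optimized ONNX graphs
        onnx_prefix="debug_model",   # filename prefix for the dumped graphs
    ),
)
```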

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[edgchen1](https://github.com/edgchen1), [gwang-msft](https://github.com/gwang-msft), [tianleiwu](https://github.com/tianleiwu), [fs-eire](https://github.com/fs-eire), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [baijumeswani](https://github.com/baijumeswani), [RyanUnderhill](https://github.com/RyanUnderhill), [iK1D](https://github.com/iK1D), [souptc](https://github.com/souptc), [nkreeger](https://github.com/nkreeger), [liqunfu](https://github.com/liqunfu), [pengwa](https://github.com/pengwa), [SherlockNoMad](https://github.com/SherlockNoMad), [wangyems](https://github.com/wangyems), [chilo-ms](https://github.com/chilo-ms), [thiagocrepaldi](https://github.com/thiagocrepaldi), [KeDengMS](https://github.com/KeDengMS), [suffiank](https://github.com/suffiank), [oliviajain](https://github.com/oliviajain), [chenfucn](https://github.com/chenfucn), [satyajandhyala](https://github.com/satyajandhyala), [yuslepukhin](https://github.com/yuslepukhin), [pranavsharma](https://github.com/pranavsharma), [tracysh](https://github.com/tracysh), [yufenglee](https://github.com/yufenglee), [hanbitmyths](https://github.com/hanbitmyths), [ytaous](https://github.com/ytaous), [YUNQIUGUO](https://github.com/YUNQIUGUO), [zhanghuanrong](https://github.com/zhanghuanrong), [stevenlix](https://github.com/stevenlix), [jywu-msft](https://github.com/jywu-msft), [chandru-r](https://github.com/chandru-r), [duli2012](https://github.com/duli2012), [smk2007](https://github.com/smk2007), [wschin](https://github.com/wschin), [MaajidKhan](https://github.com/MaajidKhan), [tiagoshibata](https://github.com/tiagoshibata), [xadupre](https://github.com/xadupre), [RandySheriffH](https://github.com/RandySheriffH), [ashbhandare](https://github.com/ashbhandare), [georgen117](https://github.com/georgen117), [Tixxx](https://github.com/Tixxx), [harshithapv](https://github.com/harshithapv), [Craigacp](https://github.com/Craigacp), [BowenBao](https://github.com/BowenBao), [askhade](https://github.com/askhade), [zhangxiang1993](https://github.com/zhangxiang1993), [gramalingam](https://github.com/gramalingam), [weixingzhang](https://github.com/weixingzhang), [natke](https://github.com/natke), [tlh20](https://github.com/tlh20), [codemzs](https://github.com/codemzs), [ryanlai2](https://github.com/ryanlai2), [raviskolli](https://github.com/raviskolli), [pranav-prakash](https://github.com/pranav-prakash), [faxu](https://github.com/faxu), [adtsai](https://github.com/adtsai), [fdwr](https://github.com/fdwr), [wenbingl](https://github.com/wenbingl), [jcwchen](https://github.com/jcwchen), [neginraoof](https://github.com/neginraoof), [cschreib-ibex](https://github.com/cschreib-ibex)

1.8.2

This is a minor patch release on [1.8.1](https://github.com/microsoft/onnxruntime/releases/tag/v1.8.1) with the following changes:

Inference
* Fixed a crash when optimizing `Conv->Add->Relu` for the CUDA EP
* ORT Mobile updates
  * Changed the [pre-built iOS package](https://onnxruntime.ai/docs/how-to/mobile/overview.html#pre-built-package) to a static framework to fix an App Store submission issue
  * Support for metadata in ORT format models
  * Additional operators
  * Bug fixes

Known issues
* cuDNN 8.0.5 causes memory leaks on T4 GPUs, as indicated by this [issue](https://github.com/microsoft/onnxruntime/issues/9643); upgrading to a later version solves the problem.

1.8.1

This release contains fixes and key updates for 1.8.0.
For all package installation details, please refer to https://www.onnxruntime.ai.

Inference
* Fixes for GPU package loading issues
* Fix for a memory issue affecting models with convolution nodes when using the EXHAUSTIVE algorithm search mode
* ORT Mobile updates
  * CoreML EP enabled in the iOS mobile package
  * Additional operators
  * Bug fixes
  * [React Native package](https://www.npmjs.com/package/onnxruntime-react-native) now available

Training

Performance updates for ONNX Runtime for PyTorch (training acceleration for PyTorch models):
* Accelerates most popular Hugging Face models as well as GPT-Neo and Microsoft TNLG and TNLU models
* Support for PyTorch 1.8.1 and 1.9
* Support for CUDA 10.2 and 11.1
* Preview packages for ROCm 4.2

1.8.0

Announcements
* This release
* Building onnxruntime from source now requires a C++ compiler with full C++14 support.
* Builds with OpenMP are no longer published. They can still be [built from source](http://www.onnxruntime.ai/docs/how-to/build/inferencing.html#openmp) if needed. The default threadpool option should provide optimal performance for the majority of models.
* New dependency for Python package: flatbuffers

1.7.2

This is a minor patch release on [1.7.1](https://github.com/microsoft/onnxruntime/releases/tag/v1.7.1) with the following changes:

* Fixed the [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning/) NuGet package to correctly install in C# UWP projects in Visual Studio.
