Onnxruntime

Latest version: v1.20.1


1.4.0

Not secure
Key Updates
* Performance optimizations for Transformer models
  * GPT2 - Enable optimizations for Attention with Past State and Attention Mask
  * BERT - Improve EmbedLayerNormalization fusion coverage
* Quantization updates
  * Added new quantization operators: QLinearAdd, QAttention
  * Improved quantization performance for transformer-based models on CPU
    * More graph fusion
    * Further optimization in the MLAS kernels
    * Introduced pre-packing for the constant matrix B of DynamicQuantizeMatMul and QAttention
* New [Python IOBinding APIs](https://github.com/microsoft/onnxruntime/blob/master/docs/python/api_summary.rst#iobinding) (bind_cpu_input, bind_output, copy_outputs_to_cpu) allow easier benchmarking (a usage sketch follows this list)
  * Users no longer need to allocate inputs and outputs on non-CPU devices using third-party allocators.
  * Users no longer need to copy inputs to non-CPU devices; ORT handles the copy.
  * Users can now use copy_outputs_to_cpu to copy outputs from non-CPU devices to CPU for verification.
* CUDA support for Einsum (opset 12)
* ONNX Runtime Training updates
  * Opset 12 support
  * New [sample](https://github.com/microsoft/onnxruntime-training-examples) for a training experiment using Hugging Face GPT-2.
  * Upgraded Docker image built from the latest PyTorch release
* Telemetry is now enabled by default for Python packages and GitHub release zip files (C API); [see more details](https://github.com/microsoft/onnxruntime/blob/master/docs/Privacy.md#official-builds) on what/how telemetry is collected in ORT
* **[Coming soon]** Availability of the Python package for ONNX Runtime 1.4 for JetPack 4.4
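As referenced above, a minimal Python sketch of the IOBinding flow. The model path and the "input"/"output" tensor names are placeholders for whatever your model actually uses:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")  # placeholder model path
io_binding = sess.io_binding()

# Bind a CPU-resident input; ORT copies it to the session's device as needed.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
io_binding.bind_cpu_input("input", x)      # "input": the model's input name

# Let ORT allocate the output (pass device_type="cuda" to keep it on GPU).
io_binding.bind_output("output")           # "output": the model's output name

sess.run_with_iobinding(io_binding)

# Copy device-resident outputs back to CPU for verification.
results = io_binding.copy_outputs_to_cpu()
```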

Execution Providers
New Execution Providers available for preview:
* **[Preview]** [AMD MIGraphX](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/MIGraphX-ExecutionProvider.md)
* **[Preview]** [ARM NN](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/ArmNN-ExecutionProvider.md)

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

[snnn](https://github.com/snnn), [tianleiwu](https://github.com/tianleiwu), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [tracysh](https://github.com/tracysh), [yufenglee](https://github.com/yufenglee), [fs-eire](https://github.com/fs-eire), [codemzs](https://github.com/codemzs), [tiagoshibata](https://github.com/tiagoshibata), [yuslepukhin](https://github.com/yuslepukhin), [gwang-msft](https://github.com/gwang-msft), [wschin](https://github.com/wschin), [smk2007](https://github.com/smk2007), [prabhat00155](https://github.com/prabhat00155), [liuziyue](https://github.com/liuziyue), [liqunfu](https://github.com/liqunfu), [ytaous](https://github.com/ytaous), [iK1D](https://github.com/iK1D), [BowenBao](https://github.com/BowenBao), [askhade](https://github.com/askhade), [pranavsharma](https://github.com/pranavsharma), [faxu](https://github.com/faxu), [jywu-msft](https://github.com/jywu-msft), [ryanlai2](https://github.com/ryanlai2), [xzhu1900](https://github.com/xzhu1900), [KeDengMS](https://github.com/KeDengMS), [tlh20](https://github.com/tlh20), [smkarlap](https://github.com/smkarlap), [weixingzhang](https://github.com/weixingzhang), [jeffbloo](https://github.com/jeffbloo), [RyanUnderhill](https://github.com/RyanUnderhill), [mrry](https://github.com/mrry), [jgbradley1](https://github.com/jgbradley1), [stevenlix](https://github.com/stevenlix), [zhanghuanrong](https://github.com/zhanghuanrong), [suffiank](https://github.com/suffiank), [Andrews548](https://github.com/Andrews548), [pengwa](https://github.com/pengwa), [SherlockNoMad](https://github.com/SherlockNoMad), [orilevari](https://github.com/orilevari), [duli2012](https://github.com/duli2012), [yangchen-MS](https://github.com/yangchen-MS), [yan12125](https://github.com/yan12125), [jornt-xilinx](https://github.com/jornt-xilinx), [ashbhandare](https://github.com/ashbhandare), [neginraoof](https://github.com/neginraoof), [Tixxx](https://github.com/Tixxx), [thiagocrepaldi](https://github.com/thiagocrepaldi), [Craigacp](https://github.com/Craigacp), [mayeut](https://github.com/mayeut), [chilo-ms](https://github.com/chilo-ms), [prasanthpul](https://github.com/prasanthpul), [martinb35](https://github.com/martinb35), [manashgoswami](https://github.com/manashgoswami), [zhangxiang1993](https://github.com/zhangxiang1993), [suryasidd](https://github.com/suryasidd), [wangyems](https://github.com/wangyems), [kit1980](https://github.com/kit1980), [RandySheriffH](https://github.com/RandySheriffH), [fdwr](https://github.com/fdwr)

1.3.1

This update includes changes to support the published packages for the Java and Node.js APIs for the 1.3.0 release.
* Maven: [Java API CPU](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime)
* Maven: [Java API GPU](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime_gpu)
* [NPM: ONNX Runtime Node.js API](https://www.npmjs.com/package/onnxruntime)

For all other APIs/builds, the [1.3.0](https://github.com/microsoft/onnxruntime/releases/tag/v1.3.0) release packages are suggested. 1.3.1 does fix the 1.3.0 issue of a [crash when setting IntraOpNumThreads using the C/C++/C# API](https://github.com/microsoft/onnxruntime/issues/4070), so if that fix is needed, ONNX Runtime can be built from source using this release branch (with official release support).

1.3.0

Not secure
Key Updates
General
* ONNX 1.7 support
  * Opset 12
  * Function expansion support that enables several new ONNX 1.7 ops, such as NegativeLogLikelihoodLoss, GreaterOrEqual, LessOrEqual, and Celu, to run without a kernel implementation.
* **[Preview]** ONNX Runtime Training
  * ONNX Runtime Training is a new capability, released in preview, to accelerate training of transformer models. See the sample [here](https://github.com/microsoft/onnxruntime-training-examples/) to use this feature in your training experiments.
* Improved threadpool support for better resource utilization
  * Improved threadpool abstractions that switch between OpenMP and Eigen threadpools based on build settings. All operators have been updated to use these new abstractions.
  * The improved Eigen-based threadpool now allows ops to provide a cost (among other things, like thread affinity) for operations.
  * Simpler configuration of thread count: if built with OpenMP, use the OpenMP environment variables; otherwise, use the ORT APIs to configure the number of threads (see the sketch after this list).
  * Support for sessions to share a global threadpool. See [this](https://github.com/microsoft/onnxruntime/blob/rel-1.3.0/docs/C_API.md) for more information.
* Performance improvements
  * ~10% average measured latency improvement among key representative models (including ONNX Model Zoo models, MLPerf, and production models shipped in Microsoft products)
  * Further latency improvements for Transformer models on CPU and GPU - [benchmark script](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/benchmark_gpt2.py)
  * Improved batch-inferencing latency for scikit-learn models at large batch sizes
  * Significant improvements in the implementations of the following ONNX operators: TreeEnsembleRegressor, TreeEnsembleClassifier, LinearRegressor, LinearClassifier, SVMRegressor, SVMClassifier, TopK
* C API optimizations - [PR 3171](https://github.com/microsoft/onnxruntime/pull/3171)
* Telemetry enabled for Windows ([more details](https://github.com/microsoft/onnxruntime#DataTelemetry) on telemetry collection)
* Improved error reporting when a kernel cannot be found due to a missing type implementation
* Minor fixes based on static code analysis
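As referenced above, a minimal sketch of configuring thread counts through the ORT API on a non-OpenMP build; the model path is a placeholder and the thread counts are arbitrary example values:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.intra_op_num_threads = 4  # threads used to parallelize within a single operator
so.inter_op_num_threads = 1  # threads used to run independent operators in parallel
sess = ort.InferenceSession("model.onnx", so)  # placeholder model path
```

For OpenMP-enabled builds these settings are ignored; use the OpenMP environment variables instead (see the 1.3.0 Known Issues below).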

Dependency updates
Please note that this version of onnxruntime depends on the **[Visual C++ 2019 runtime](https://support.microsoft.com/en-us/help/2977003/the-latest-supported-visual-c-downloads)**; previous versions depended on Visual C++ 2017. Please also refer to https://github.com/microsoft/onnxruntime/tree/rel-1.3.0#system-requirements for the full set of system requirements.

APIs and Packages
* **[General Availability]** Windows Machine Learning APIs - package published on NuGet - [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning)
  * Performance improvements
  * Opset updates
* **[General Availability]** ONNX Runtime with DirectML package published on NuGet - [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML/)
* **[General Availability]** [Java API](https://github.com/microsoft/onnxruntime/tree/master/java) - Maven package coming soon.
* **[Preview]** [JavaScript (Node.js) API](https://github.com/microsoft/onnxruntime/tree/master/nodejs) now available to build from the master branch.
* ARM64 Linux CPU Python package [now available on PyPI](https://pypi.org/project/onnxruntime). Note: this requires [building ONNX for ARM64](https://github.com/onnx/onnx#build-onnx-on-arm-64).
* Nightly dev builds from master ([NuGet feed](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT-Nightly), Test PyPI - [CPU](https://test.pypi.org/project/ort-nightly), [GPU](https://test.pypi.org/project/ort-gpu-nightly))
* API updates
  * I/O binding support for the Python API - this can reduce execution time significantly by letting users set up inputs/outputs on the GPU prior to model execution.
  * API to specify free dimensions based on both denotations and symbolic names (see the sketch after this list).
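A minimal sketch of the free-dimension override APIs. "DATA_BATCH" is a standard ONNX dimension denotation; "batch_size" stands in for whatever symbolic dimension name your model declares:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Pin a free dimension by its denotation...
so.add_free_dimension_override_by_denotation("DATA_BATCH", 1)
# ...or by the symbolic name it carries in the model.
so.add_free_dimension_override_by_name("batch_size", 1)
sess = ort.InferenceSession("model.onnx", so)  # placeholder model path
```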

Execution Providers
* [OpenVINO v2.0 EP](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/OpenVINO-ExecutionProvider.md)
* [DirectML EP](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/DirectML-ExecutionProvider.md) updates
  * Updated graph interface to abstract GPU-dependent graph optimizations
  * ONNX opset 10 and 11 support
  * Initial support for 8-bit and quantized operators
  * Performance optimizations
* **[Preview]** [Rockchip NPU EP](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/RKNPU-ExecutionProvider.md)
* **[Preview]** [Xilinx FPGA Vitis-AI EP](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/Vitis-AI-ExecutionProvider.md)
* Capability to build execution providers as DLLs - supported for the DNNL EP; work in progress for other EPs (see the Python sketch after this list).
  * If enabled in the build, the provider is available as a shared library. Previously, EPs had to be statically linked with the core code.
  * There is no runtime cost to including an EP that isn't loaded; ORT can now decide dynamically when to load it based on the model.
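From Python, provider selection looks roughly like the sketch below; whether DnnlExecutionProvider appears depends on how the installed package was built, and the model path is a placeholder:

```python
import onnxruntime as ort

# Providers compiled into (or loadable by) this build, in priority order.
print(ort.get_available_providers())

sess = ort.InferenceSession("model.onnx")  # placeholder model path
# Prefer DNNL, falling back to the default CPU provider.
sess.set_providers(["DnnlExecutionProvider", "CPUExecutionProvider"])
```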

Contributions
We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: [Adam Pocock](https://github.com/Craigacp), [pranavm-nvidia](https://github.com/pranavm-nvidia), [Andrew Kane](https://github.com/ankane), [Takeshi Watanabe](https://github.com/take-cheeze), [Jianhao Zhang](https://github.com/daquexian), [Colin Jermain](https://github.com/cjermain), [Andrews548](https://github.com/Andrews548), [Jan Scholz](https://github.com/radikalliberal), [Pranav Prakash](https://github.com/pranav-prakash), [suryasidd](https://github.com/suryasidd), and [S. Manohar Karlapalem](https://github.com/smkarlap).

The ONNX Runtime Training code was originally developed internally at Microsoft, before being ported to GitHub. We'd like to recognize the original contributors: Aishwarya Bhandare, Ashwin Kumar, Cheng Tang, Du Li, Edward Chen, Ethan Tao, Fanny Nina Paravecino, Ganesan Ramalingam, Harshitha Parnandi Venkata, Jesse Benson, Jorgen Thelin, Ke Deng, Liqun Fu, Li-Wen Chang, Peng Wang, Sergii Dymchenko, Sherlock Huang, Stuart Schaefer, Tao Qin, Thiago Crepaldi, Tianju Xu, Weichun Wang, Wei Zuo, Wei-Sheng Chin, Weixing Zhang, Xiaowan Dong, Xueyun Zhu, Zeeshan Siddiqui, and Zixuan Jiang.


Known Issues
1. The source doesn't compile on Ubuntu 14.04. See #4048.
2. [Crash when setting IntraOpNumThreads using the C/C++/C# API](https://github.com/microsoft/onnxruntime/issues/4070). [A fix is available in the master branch](https://github.com/microsoft/onnxruntime/commit/6c1b2f33b74ad48c3cb08d4ba1f38e1897659c8e).
**Workaround**: Setting IntraOpNumThreads has no effect when ORT is built with OpenMP enabled, so the call is not required and can be safely removed. For OpenMP-enabled builds, use the OpenMP environment variables to set the threading parameters (the recommended approach); a sketch follows.
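For example, a sketch of the recommended approach for an OpenMP-enabled build. The variables must be set before onnxruntime is loaded (here via os.environ, equivalently in the shell), and the values shown are arbitrary:

```python
import os

# OpenMP reads these at library load time, so set them before the import.
os.environ["OMP_NUM_THREADS"] = "4"        # number of OpenMP threads
os.environ["OMP_WAIT_POLICY"] = "PASSIVE"  # don't spin-wait when idle

import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")  # no IntraOpNumThreads call needed
```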

1.2.0

Not secure
Key Updates
Execution Providers
* **[Preview]** Availability of [Windows Machine Learning (WinML)](https://docs.microsoft.com/en-us/windows/ai/windows-ml/) APIs in Windows builds of ONNX Runtime, with [DirectML](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/DirectML-ExecutionProvider.md) for GPU acceleration
  * Windows ML is a WinRT API designed specifically for Windows developers that already ships as an inbox component in newer Windows versions
  * Compatible with Windows 8.1 for CPU and Windows 10 1709 for GPU
  * Available as source code on GitHub and as pre-built NuGet packages (windows.ai.machinelearning.dll)
  * For additional documentation and samples on getting started, visit the [Windows ML API Reference documentation](https://docs.microsoft.com/en-us/windows/ai/windows-ml/api-reference)
* TensorRT Execution Provider upgraded to TRT 7
* CUDA updated to 10.1
  * The Linux build requires CUDA Runtime 10.1.243, cublas10-10.2.1.243, and cuDNN 7.6.5.32. Note: cublas 10.1.x will not work.
  * The Windows build requires CUDA Runtime 10.1.243 and cuDNN 7.6.5.32.
  * onnxruntime now depends on the curand library, which is part of the CUDA SDK. If the SDK is already fully installed, this won't be an issue.

Builds and Packages
* NuGet package structure updated. There is now a separate managed assembly (Microsoft.ML.OnnxRuntime.Managed) shared between the CPU and GPU NuGet packages; the "native" NuGet depends on the "managed" NuGet to bring it into relevant projects automatically. [PR 3104](https://github.com/microsoft/onnxruntime/pull/3104) Note that this should be transparent for customers installing the NuGet packages. ORT package details are [here](https://github.com/microsoft/onnxruntime#builds-and-packages).
* Build system: support getting dependencies from vcpkg (a C++ package manager for Windows, Linux, and macOS)
* Capability to generate an onnxruntime Android Archive (AAR) file from source, which can be imported directly in Android Studio

API Updates
* SessionOptions:
  * The default value of max_num_graph_transformation_steps is increased to 10.
  * The default graph optimization level is changed to ORT_ENABLE_ALL (99); see the Python sketch after this list.
* OrtEnv can now be created/destroyed multiple times.
* Java API
  * Gradle is now required to build onnxruntime.
  * Available on Android.
* C API additions:
  * GetDenotationFromTypeInfo
  * CastTypeInfoToMapTypeInfo
  * CastTypeInfoToSequenceTypeInfo
  * GetMapKeyType
  * GetMapValueType
  * GetSequenceElementType
  * ReleaseMapTypeInfo
  * ReleaseSequenceTypeInfo
  * SessionEndProfiling
  * SessionGetModelMetadata
  * ModelMetadataGetProducerName
  * ModelMetadataGetGraphName
  * ModelMetadataGetDomain
  * ModelMetadataGetDescription
  * ModelMetadataLookupCustomMetadataMap
  * ModelMetadataGetVersion
  * ReleaseModelMetadata
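As referenced above, a Python sketch of the new optimization-level default (set explicitly here only for illustration) and the Python counterparts of the new profiling and model-metadata C APIs; the model path is a placeholder:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# ORT_ENABLE_ALL is now the default graph optimization level.
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL
so.enable_profiling = True

sess = ort.InferenceSession("model.onnx", so)  # placeholder model path

# Python counterparts of the ModelMetadata* C accessors.
meta = sess.get_modelmeta()
print(meta.producer_name, meta.graph_name, meta.domain, meta.version)
print(meta.description, meta.custom_metadata_map)

# Counterpart of SessionEndProfiling: returns the profile trace file name.
print(sess.end_profiling())
```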

Operators
* This release introduces a change to the forward-compatibility pattern ONNX Runtime previously followed. The change guarantees correctness of model predictions and removes the behavioral ambiguity caused by missing opset information. ONNX Runtime now checks the model's opset number and IR version, and will not load models whose ONNX opset version is higher than the one implemented in that release (see the [version matrix](https://github.com/microsoft/onnxruntime/blob/master/docs/Versioning.md#version-matrix)). If higher opset versions are needed, consider using custom operators via ORT's custom schema/kernel registry mechanism. The sketch after this list shows how to check the opset a model declares.
* Int8 type support for the Where op
* Updates to contrib ops:
  * Changed: ReorderInput in kMSNchwcDomain, SkipLayerNormalization
  * New: QLinearAdd, QLinearMul, QLinearReduceMean, MulInteger, QLinearAveragePool
* Added featurizer operators as an expansion of the contrib operators - these are not part of the official build and are experimental
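As referenced above, a sketch of inspecting the IR version and opsets a model declares before loading it, using the onnx Python package; the model path is a placeholder:

```python
import onnx

model = onnx.load("model.onnx")  # placeholder model path
print("IR version:", model.ir_version)
for opset in model.opset_import:
    # An empty domain means the default ai.onnx operator set.
    print("domain:", opset.domain or "ai.onnx", "opset:", opset.version)
```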

Contributions
We'd like to recognize our community members across various teams at Microsoft and other companies for all their valuable contributions. Our community contributors in this release include: [Eric Cousineau](https://github.com/EricCousineau-TRI) (Toyota Research Institute), [Adam Pocock](https://github.com/Craigacp) (Oracle), [tinchi](https://github.com/tinchi), [Changyoung Koh](https://github.com/kcy1019), [Andrews548](https://github.com/Andrews548), [Jianhao Zhang](https://github.com/daquexian), [niklas-mohr-jdas](https://github.com/niklas-mohr-jdas), [James Yuzawa](https://github.com/yuzawa-san), [William Tambellini](https://github.com/WilliamTambellini), [Maher Jendoubi](https://github.com/MaherJendoubi), [Mina Asham](https://github.com/mina-asham), [Saquib Nadeem Hashmi](https://github.com/Saqhas), [Sanster](https://github.com/Sanster), and [Takeshi Watanabe](https://github.com/take-cheeze).

1.1.2

Not secure
This is a minor patch release on 1.1.1.

This fixes a minor issue where some logging in execution_frame.cc could not be controlled by SessionLogVerbosityLevel in SessionOptions. PR 3043

1.1.1

Not secure
This is a minor patch release on 1.1.0.

Summary
* Updated the default optimization level to apply **all** optimizations by default, for the best performance on popular models
* Operator updates and other bug fixes

All fixes
* Update default optimization level + fix gemm_activation fusion #2791
* Fix C# handling of Unicode strings #2697
* Initialize the max of softmax with the lowest float value #2786
* Implement a more stable softmax #2715
* Add uint8 support to the Where op #2792
* Fix memory leaks in samples and tests #2778
* Fix memory leak in TRT #2815
* Fix nightly build version number issue #2771
