Onnxruntime

Latest version: v1.20.1


1.1.0

Not secure
Key Updates
* Performance improvements to accelerate BERT model inference latency on both GPU and CPU. Updates include:
  * Additional fused CPU kernels and related graph transformers for key operators such as Attention, EmbedLayerNormalization, SkipLayerNormalization, and FastGelu
  * Further optimizations such as parallelizing Gelu and LayerNorm, enabling legacy stream mode, improving the performance of elementwise operators, and fusing the bias addition into SkipLayerNormalization and FastGelu
* Extended CUDA support for opset 11
* Performance improvements for Faster R-CNN and Mask R-CNN with new and updated implementations of opset 11 CUDA kernels, including Resize, Expand, Scatter, and Pad
* TensorRT Execution Provider updates, including support for inputs with dynamic shapes
* MKL-DNN (renamed DNNL) updated to v1.1
* **[Preview]** NNAPI Execution Provider for Android - [see more](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/NNAPI-ExecutionProvider.md)
* **[Preview]** Java API for ONNX Runtime - [see more](https://aka.ms/onnxruntime-java)
* [Tool for Python API](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/featurizer_ops): Automatically maps a dataframe to the inputs of an ONNX graph based on schema information in the pandas frame
* Custom ops can now be packaged in shared libraries and distributed for use in multiple applications without modification (a usage sketch follows below).
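As a rough illustration, the Python API exposes this through `SessionOptions.register_custom_ops_library`; the library and model file names below are placeholders:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Load custom op kernels from a shared library before creating the session
# ("./libcustom_ops.so" is a hypothetical path).
so.register_custom_ops_library("./libcustom_ops.so")

sess = ort.InferenceSession("model_with_custom_ops.onnx", so)
```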

Contributions
We'd like to thank our community members across various teams at Microsoft and other companies for all the valuable contributions.

We'd like to extend special recognition to these individuals for their contributions in this release: [Jianhao Zhang](https://github.com/daquexian) (JD AI), [Adam Pocock](https://github.com/Craigacp) (Oracle), [nihui](https://github.com/nihui) (Tencent), and [Nick Groszewski](https://github.com/groszewn). From the Intel teams, we'd like to thank [Patrick Foley](https://github.com/psfoley), [Akhila Vidiyala](https://github.com/avidiyal), [Ilya Lavrenov](https://github.com/ilya-lavrenov), [Manohar Karlapalem](https://github.com/smkarlap), [Surya Siddharth Pemmaraju](https://github.com/suryasidd), [Sreekanth Yalachigere](https://github.com/sreekanth-yalachigere), [Michal Karzynski](https://github.com/postrational), [Thomas V Trimeloni](https://github.com/tvtrimel), [Tomasz Dolbniak](https://github.com/tomdol), [Amy Zhuang](https://github.com/ayzhuang), [Scott Cyphers](https://github.com/diyessi), [Alexander Slepko](https://github.com/aslepko) and other team members on their valuable work to support the Intel Execution Providers for ONNX Runtime.

1.0.0

Not secure
Key Updates

General
- [ONNX 1.6](https://github.com/onnx/onnx/releases/tag/v1.6.0) compatibility - operator support for all opset11 ops on CPU, including Sequence ops.
- Free dimension override: adds the ability to override free dimensions in a model's inputs. Free dimensions are tensor dimensions that aren't statically known at model authoring time and must ordinarily be provided at runtime; they are most often used for the batch size of a model's inputs, allowing customizable batch sizes at runtime. Overriding them enables certain optimizations, since the shape becomes known a priori (see the configuration sketch after this list).
- Performance improvements to further accelerate model inferencing latency on CPU and GPU. Notable updates include:
  - Additional CUDA operators added to support object detection and BERT models. *Note: CUDA operator coverage is still limited and performance will vary significantly depending on the model and operator usage.*
  - Improved parallelism for operators that use GEMM and MatMul
  - New implementation of 64-bit MatMul on x86_64 CPUs
  - Ability to set the number of threads used for intra- and inter-operator parallelism, allowing optimal configuration for both sequential and concurrent inferencing scenarios (also shown in the sketch after this list)
  - Gelu fusion optimizer
- Threading updates:
  - Eigen ThreadPool is now the default (previously there were two thread pool implementations, TaskThreadPool and Eigen ThreadPool)
  - Multithreading can be disabled by setting the thread pool size to 1 and onnxruntime_USE_OPENMP to OFF
  - MLAS now uses the number of thread pool threads plus one as the parallelism level (e.g., with 4 CPUs, set the thread pool size to 3 so that there is exactly one thread per CPU)
- [CPU Python package](https://pypi.org/project/onnxruntime) is [manylinux1](https://www.python.org/dev/peps/pep-0513/) compliant. The [GPU Python package](https://pypi.org/project/onnxruntime-gpu/) is manylinux2010 and compatible with CUDA 10.0/cuDNN 7.6
- Support for [CentOS](https://www.centos.org/) 6 and 7 for Python, C, and C++. Most of the code is now C++11 compliant (previously required C++14). C# .NET Core compatibility coming soon.
- Package for [ArchLinux](https://aur.archlinux.org/packages/python-onnxruntime/)
- Telemetry - component level logging through [Trace Logging](https://docs.microsoft.com/en-us/windows/win32/tracelogging/trace-logging-portal) for Windows builds. Data collection is limited and used strictly to identify areas for improvement. You can read more about the data collected and how to manage these settings [here](https://aka.ms/ort-privacy).
- Bug fixes to address various issues filed on Github and other channels
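To make the session-configuration items above concrete, here is a minimal sketch in Python. It assumes the model declares a free dimension named "batch"; `add_free_dimension_override_by_name` is the spelling used by later Python releases, so treat the exact method name as an assumption for 1.0.0:

```python
import onnxruntime as ort

so = ort.SessionOptions()

# Thread-pool sizing: intra_op controls parallelism inside an operator,
# inter_op controls parallelism across operators. Small inter_op values
# suit sequential inferencing; larger ones suit concurrent scenarios.
so.intra_op_num_threads = 4
so.inter_op_num_threads = 1

# Pin a free (symbolic) dimension, e.g. the batch size, to a fixed value
# so the optimizer can treat the shape as known a priori.
# "batch" is a placeholder dimension name from the model.
so.add_free_dimension_override_by_name("batch", 1)

sess = ort.InferenceSession("model.onnx", so)
```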

API updates
- Updates to the C API for clarity of usage. The 1.0 version of the API is now stable and will maintain backwards compatibility; versioning is supported to accommodate future updates.
- The C API is ABI compatible and follows Semantic Versioning. Programs linked against the current version of the ONNX Runtime library will continue to work with subsequent releases without updating any client code or re-linking.
- New session option for serializing optimized ONNX models (see the sketch after this list)
- Enabled some new capabilities through the Python and C APIs for feature parity, including registration of execution providers in Python and setting additional run options in C.
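For example, the optimized-model serialization option is exposed in Python as the `optimized_model_filepath` session option; a minimal sketch (file names are placeholders):

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Ask the runtime to write the graph, after all enabled optimizations
# have been applied, to this path when the session is created.
so.optimized_model_filepath = "model.optimized.onnx"

sess = ort.InferenceSession("model.onnx", so)
# model.optimized.onnx can now be inspected or reloaded directly.
```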

Execution Providers (EP)
Updates
- General Availability of the OpenVINO™ EP for Intel® CPU, Intel® Integrated Graphics, [Intel® Neural Compute Stick 2](https://software.intel.com/en-us/neural-compute-stick), and the [Intel® Vision Accelerator Design with Intel® Movidius™ Myriad™ VPU](https://software.intel.com/en-us/iot/hardware/vision-accelerator-movidius-vpu) powered by OpenVINO™
- MKL-DNN EP updated from 0.18.1 to 1.0.2 for an average of 5-10% (up to 50%) performance improvement on ONNX Model Zoo model latency
- nGraph EP updated from 0.18 to 0.26, with support of new operators for quantization and performance improvements on LSTM ops (without peephole) and Pad op
- TensorRT EP updated to the latest TensorRT 6.0 libraries
- Android DNNLibrary version update
New EP support
- *[Preview]* [NUPHAR](https://aka.ms/build-ort-nuphar) (Neural-network Unified Preprocessing Heterogeneous ARchitecture) is a TVM and LLVM based EP offering model acceleration by compiling nodes in subgraphs into optimized functions via JIT
- *[Preview]* [DirectML](https://aka.ms/build-ort-directml) is a high-performance, hardware-accelerated DirectX 12 library for machine learning on Windows, providing GPU acceleration for common machine learning tasks across a broad range of supported hardware and drivers
- *[Preview]* Support for [Intel® Vision Accelerator Design with Intel® Arria™ 10 FPGA powered by OpenVINO™](https://software.intel.com/en-us/iot/hardware/vision-accelerator-arria-10).
- *[Preview]* The [ARM Compute Library (ACL)](https://aka.ms/build-ort-acl) Execution Provider targets ARM CPUs and GPUs for optimized execution of ONNX operators using this low-level library.
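A quick way to check which execution providers a given build supports is `onnxruntime.get_available_providers()`. The `providers` argument at session creation shown below is the spelling from later releases, so treat it as an assumption here; the runtime falls back to the next provider in the list for nodes the first one doesn't support:

```python
import onnxruntime as ort

# List the execution providers compiled into this build, in priority order.
print(ort.get_available_providers())
# e.g. ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']

# Request TensorRT first, with CPU as the fallback for unsupported nodes.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CPUExecutionProvider"],
)
```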

Build updates
- Three new cmake options: onnxruntime_USE_GEMMLOWP, onnxruntime_USE_AUTOML, and onnxruntime_USE_DML
- Removed two cmake options: onnxruntime_USE_MLAS and onnxruntime_USE_EIGEN_THREADPOOL; both are now always ON
- The minimum supported gcc version is 4.8.2

Tooling
- Availability of [ONNX Go Live tool](https://github.com/microsoft/OLive), which automates the process of shipping ONNX models by combining model conversion, correctness tests, and performance tuning into a single pipeline as a series of Docker images.
- Updates to the [quantization tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/quantization):
  - Supports selective quantization of specific nodes instead of all possible nodes (see the sketch after this list)
  - Bias quantization for Conv nodes
  - Node fusion for dynamic quantization
- onnxruntime_perf_test usage updates:
  - New option "-y" for controlling inter_op_num_threads
  - The maximum optimization level is now 99, and 3 is no longer a valid value; in most cases, the tool should be run with "-o 99"
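As an illustration of the quantization tooling, later releases package it as `onnxruntime.quantization`; the 1.0-era tool was a standalone script with a similar entry point, so treat the import path, function names, and node names below as assumptions:

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamically quantize weights to int8, restricting quantization to a
# chosen subset of nodes rather than all eligible ones.
quantize_dynamic(
    model_input="model.onnx",
    model_output="model.quant.onnx",
    weight_type=QuantType.QInt8,
    nodes_to_quantize=["Conv_0", "MatMul_12"],  # hypothetical node names
)
```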

Other Dependency Updates
- Replaced gsl with gsl-lite for C++11 compatibility
- Added NVIDIA cub
- Added WIL for the DML execution provider
- Pybind11 updated from 2.2.4 to 2.4.0 to fix a compatibility issue with Baidu PaddlePaddle and some other Python modules that also depend on Pybind11
- TVM updated to a newer version

0.5.1

Bug Fixes
- Fix in C API marshalling for InferenceSession.Run()
- Some fixes in ONNX Runtime Server

Only NuGet packages are released for this patch, since only C API users are affected.

0.5.0

Not secure
* Execution Provider updates
  * MKL-DNN provider ([subgraph based execution](https://github.com/microsoft/onnxruntime/blob/master/docs/execution_providers/MKL-DNN-Subgraphs.md)) for improved performance
  * Intel OpenVINO EP now available in Public Preview - [build instructions](https://github.com/microsoft/onnxruntime/blob/master/BUILD.md#openvino-build)
  * Update to CUDA 10 for inferencing with NVIDIA GPUs
  * Base CPU EP has faster convolution performance using the NCHWc blocked layout; enable this layout optimization by setting the graph optimization level to 3 in the session options (see the sketch after this list)
* [C++ API](https://github.com/microsoft/onnxruntime/blob/master/include/onnxruntime/core/session/onnxruntime_cxx_api.h) for inferencing (wrapper on C API)
* [ONNX Runtime Server (Beta)](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_Server_Usage.md) for inferencing with HTTP and GRPC endpoints
* [Python Operator (Beta)](https://github.com/microsoft/onnxruntime/blob/master/docs/PyOp.md) to support custom Python code in a single node of an ONNX graph, simplifying experimentation with custom operators
* Support for the Keras-based Mask R-CNN model. The model relies on custom operators pending addition to ONNX; in the meantime, it can be converted using [this](https://github.com/onnx/keras-onnx/tree/master/applications/mask_rcnn) script for inferencing with ONNX Runtime 0.5. Other object detection models can be found in the [ONNX Model Zoo](https://github.com/onnx/models#object-detection--image-segmentation-).
* Minor updates to the C API
  * For consistency, all C APIs now return an ORT status code
* Code coverage for this release is 83%
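A rough sketch of enabling the NCHWc convolution optimization from Python. In the 0.5 API the optimization level was the numeric value 3; the enum below is the spelling from later releases, where ORT_ENABLE_ALL includes layout optimizations such as NCHWc, so treat the exact names as assumptions:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Enable all graph optimizations, including the NCHWc blocked-layout
# transformation for CPU convolutions (numeric level 3 in the 0.5 API).
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

sess = ort.InferenceSession("model.onnx", so)
```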

0.4.0

Not secure
Key Updates
* New execution providers for improved performance on specialized hardware
  * Intel nGraph
  * NVIDIA TensorRT
* ONNX 1.5 compatibility
  * Opset 10 operator support
  * Supports newly added ONNX Model Zoo object detection models (YOLO v3, SSD)
  * Quantization operators
* Updates to the C API for custom operators
  * Allocation of outputs during compute
  * C++ wrapper to greatly simplify implementation
  * Supports custom op DLLs when ONNX Runtime is compiled statically
* Graph optimizations with constant folding for improved performance
* Official binary packages
  * NuGet package creation pipeline updated with security-focused tasks
    * CredScan
    * SDL Native Rules for PREfast
    * BinSkim
  * Additional binaries built with MKL-ML published in NuGet
  * Size reduction in Windows (700KB+), Linux (65%), and Mac (45%) binaries

0.3.1

This is a patch release for 0.3.0.

Updates include:
* Binary size reduction through usage of protobuf-lite and operator fixes
* Build option to disable contrib ops (ops not in ONNX standard)
* Build option to statically link MSVC
* Minor bug fixes
