ONNX Runtime

Latest version: v1.19.0


1.8.1

Not secure
This release contains fixes and key updates for 1.8.0.
For all package installation details, please refer to https://www.onnxruntime.ai.

Inference
* Fixes for GPU package loading issues
* Fix for a memory issue affecting models with convolution nodes when using the EXHAUSTIVE algorithm search mode (see the sketch after this list for how that mode is selected)
* ORT Mobile updates
  * CoreML EP enabled in the iOS mobile package
  * Additional operators
  * Bug fixes
* [React Native package](https://www.npmjs.com/package/onnxruntime-react-native) now available
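
The EXHAUSTIVE mode referenced above is the CUDA EP's cuDNN convolution algorithm search setting. A minimal sketch of selecting it from Python, assuming a CUDA build of ORT 1.8+ with `provider_options` support; the model path is a placeholder:

```python
import onnxruntime as ort

# "model.onnx" is a placeholder. cudnn_conv_algo_search accepts
# "EXHAUSTIVE", "HEURISTIC", or "DEFAULT" on the CUDA EP.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider"],
    provider_options=[{"cudnn_conv_algo_search": "EXHAUSTIVE"}],
)
```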

Training

Performance updates for ONNX Runtime for PyTorch (training acceleration for PyTorch models):
- Accelerates the most popular Hugging Face models, as well as GPT-Neo and Microsoft TNLG and TNLU models
- Support for PyTorch 1.8.1 and 1.9
- Support for CUDA 10.2 and 11.1
- Preview packages for ROCm 4.2
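
Training acceleration is enabled by wrapping an existing PyTorch module. A minimal sketch, assuming the `torch-ort` frontend package is installed alongside onnxruntime-training; the toy model stands in for a real Hugging Face one:

```python
import torch
from torch_ort import ORTModule

# Any torch.nn.Module can be wrapped; a tiny stand-in model is used here.
model = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.ReLU())
model = ORTModule(model)  # forward and backward now run through ORT

x = torch.randn(4, 768)
loss = model(x).sum()
loss.backward()  # gradients are computed by ORT's training backend
```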

1.8.0

Not secure
Announcements
* This release:
  * Building onnxruntime from source now requires a C++ compiler with full C++14 support.
  * Builds with OpenMP are no longer published. They can still be [built from source](http://www.onnxruntime.ai/docs/how-to/build/inferencing.html#openmp) if needed. The default threadpool option should provide optimal performance for the majority of models.
  * New dependency for the Python package: flatbuffers

1.7.2

This is a minor patch release on [1.7.1](https://github.com/microsoft/onnxruntime/releases/tag/v1.7.1) with the following changes:

- Fix [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning/) NuGet package to correctly install on C# UWP projects in Visual Studio.

1.7.1

The [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/) and [Microsoft.ML.OnnxRuntime.Managed](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Managed/) packages are uploaded to Nuget.org. Please note the version numbers for the Microsoft.ML.OnnxRuntime.Managed package.

1.7.0

Not secure
Announcements
Starting from this release, all ONNX Runtime CPU packages are now built *without OpenMP*. A version *with OpenMP* is available on NuGet ([Microsoft.ML.OnnxRuntime.OpenMP](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.OpenMP)) and PyPI ([onnxruntime-openmp](https://pypi.org/project/onnxruntime-openmp/)). Please report any issues in [GH Issues](https://github.com/microsoft/onnxruntime/issues).

**Note:** The 1.7.0 GPU package is uploaded to [this Azure DevOps feed](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT) due to the size limit on NuGet.org. Please use [1.7.1](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/1.7.1) for the GPU package through NuGet.

Key Feature Updates

General
* Mobile
  * Custom operators now supported in the ONNX Runtime Mobile build
  * Added the ability to reduce the types supported by operator kernels to only the types required by the models
    * Expect a 25-33% reduction in the binary size contribution from the kernel implementations. The reduction is model dependent, but testing with common models such as MobileNet v2, SSD MobileNet, and MobileBERT achieved reductions in this range.
* Custom op support for dynamic input
* MKLML/openblas/jemalloc build configs removed
* Removed dependency on gemmlowp
* *[Experimental]* Audio operators
  * Fourier transforms (DFT, IDFT, STFT), windowing functions (Hann, Hamming, Blackman), and a MelWeightMatrix operator in the "com.microsoft.experimental" domain
  * Buildable using the ms_experimental build flag (included in the [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) NuGet package)

Performance
* Quantization
  * The quantization tool now supports quantizing models in QDQ (QuantizeLinear-DequantizeLinear) format
  * Depthwise Conv quantization performance improvement
  * Quantization support added for Pad, Split, and MaxPool in channels-last layouts
  * QuantizeLinear performance improvement on AVX512
  * Optimization: fusion for Conv + Mul/Add
* Transformers
  * Longformer attention CUDA kernel memory footprint reduction
  * Einsum float16 CUDA kernel for ALBERT and XLNet
  * The [Python optimizer tool](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/optimizer.py) now supports fusion for BART (see the sketch after this list)
  * [CPU profiling tool](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/profiler.py) for transformer models
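
The optimizer tool can also be driven from Python. A minimal sketch of BART fusion, assuming the model was exported to `bart.onnx` (a placeholder path) and that `"bart"` is the tool's model-type name for this architecture:

```python
from onnxruntime.transformers import optimizer

# "bart.onnx" is a placeholder for a BART model exported from PyTorch.
opt = optimizer.optimize_model("bart.onnx", model_type="bart")
opt.save_model_to_file("bart_optimized.onnx")
```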

APIs and Packages
* Python 3.8 and 3.9 support added for all platforms; support for 3.5 removed
* ARM32/64 Windows builds are now included in the CPU NuGet and zip packages
* WinML
  * .NET 5 support: works with .NET 5 via .NET Standard 2.0 projections
  * Image descriptors expose NominalPixelRange properties
    * Native support added for the additional pixel ranges [0..1] and [-1..1] in image models
    * A new ImageNominalPixelRange property is exposed on the ImageFeatureDescriptor runtime class, alongside the existing BitmapPixelFormat and BitmapAlphaMode properties
  * Bug fixes and performance improvements, including [#6249](https://github.com/microsoft/onnxruntime/issues/6249)
  * *[Experimental]* Model Building API available under the Microsoft.AI.MachineLearning.Experimental namespace (included in the [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) NuGet package)
    * Can be used to create dynamic models on the fly to enable engine-optimized and hardware-accelerated dynamic tensor featurization ([code sample](https://github.com/microsoft/onnxruntime/blob/87cb6fd495c046dac88893818478bb027969d611/winml/test/api/LearningModelSessionAPITest.cpp#L759))

Execution Providers
* CUDA EP
  * Official GPU build now built with CUDA 11
* OpenVINO EP
  * Support for OpenVINO 2021.2
  * Deprecated support for OpenVINO 2020.2
  * Support for OpenVINO EP options in the onnxruntime_perf_test tool
  * General fixes
* TensorRT EP
  * Support for TensorRT 7.2
  * General fixes and perf improvements
* DirectML EP
  * Support for [DirectML 1.4.2](https://github.com/microsoft/DirectML/blob/master/Releases.md)
  * DirectML PIX markers added to enable profiling the graph at operator level
* NNAPI EP
  * Performance improvements for quantized models
  * Support for per-channel quantization in QLinearConv
  * Additional operator support: Min/Max/Pow
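
Execution providers are opted into per inference session, in priority order, with the CPU EP as the final fallback. A minimal sketch of provider selection from Python; `model.onnx` is a placeholder, and the named providers must be present in the installed build:

```python
import onnxruntime as ort

# List the execution providers compiled into this build of ORT.
print(ort.get_available_providers())

# Providers are tried in the order given.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider",
               "CPUExecutionProvider"],
)
```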

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[edgchen1](https://github.com/edgchen1), [snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [gwang-msft](https://github.com/gwang-msft), [hariharans29](https://github.com/hariharans29), [tianleiwu](https://github.com/tianleiwu), [xadupre](https://github.com/xadupre), [yufenglee](https://github.com/yufenglee), [ryanlai2](https://github.com/ryanlai2), [wangyems](https://github.com/wangyems), [suffiank](https://github.com/suffiank), [liqunfu](https://github.com/liqunfu), [orilevari](https://github.com/orilevari), [baijumeswani](https://github.com/baijumeswani), [weixingzhang](https://github.com/weixingzhang), [pranavsharma](https://github.com/pranavsharma), [RandySheriffH](https://github.com/RandySheriffH), [ashbhandare](https://github.com/ashbhandare), [oliviajain](https://github.com/oliviajain), [smk2007](https://github.com/smk2007), [tracysh](https://github.com/tracysh), [stevenlix](https://github.com/stevenlix), [fs-eire](https://github.com/fs-eire), [Craigacp](https://github.com/Craigacp), [faxu](https://github.com/faxu), [mrry](https://github.com/mrry), [codemzs](https://github.com/codemzs), [chilo-ms](https://github.com/chilo-ms), [jcwchen](https://github.com/jcwchen), [zhanghuanrong](https://github.com/zhanghuanrong), [SherlockNoMad](https://github.com/SherlockNoMad), [iK1D](https://github.com/iK1D), [askhade](https://github.com/askhade), [zhangxiang1993](https://github.com/zhangxiang1993), [yuslepukhin](https://github.com/yuslepukhin), [tlh20](https://github.com/tlh20), [MaajidKhan](https://github.com/MaajidKhan), [wschin](https://github.com/wschin), [smkarlap](https://github.com/smkarlap), [wenbingl](https://github.com/wenbingl), [pengwa](https://github.com/pengwa), [duli2012](https://github.com/duli2012), [natke](https://github.com/natke), [alberto-magni](https://github.com/alberto-magni), [Tixxx](https://github.com/Tixxx), [HectorSVC](https://github.com/HectorSVC), [jingyanwangms](https://github.com/jingyanwangms), [jstoecker](https://github.com/jstoecker), [kit1980](https://github.com/kit1980), [suryasidd](https://github.com/suryasidd), [RandyShuai](https://github.com/RandyShuai), [sfatimar](https://github.com/sfatimar), [jywu-msft](https://github.com/jywu-msft), [liuziyue](https://github.com/liuziyue), [mosdav](https://github.com/mosdav), [thiagocrepaldi](https://github.com/thiagocrepaldi), [souptc](https://github.com/souptc), [fdwr](https://github.com/fdwr)

1.6.0

Not secure
Announcements
* OpenMP will be disabled in future official builds (the build option will still be available). A NoOpenMP version of ONNX Runtime is now available with this release on [NuGet](http://nuget.org/packages/Microsoft.ML.OnnxRuntime.NoOpenMP) and [PyPI](https://pypi.org/project/onnxruntime/) for C/C++/C#/Python users.
* In the next release, the *MKL-ML*, *openblas*, and *jemalloc* build options will be removed, and the [Microsoft.ML.OnnxRuntime.MKLML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.MKLML/) NuGet package will no longer be published. Users of *MKL-ML* are recommended to use the Intel EPs. If you are using these options and encounter issues switching to an alternative build, please [file an issue](https://github.com/microsoft/onnxruntime/issues) with details.

Key Feature Updates
General
* [ONNX 1.8](https://github.com/onnx/onnx/releases/tag/v1.8.0) support / opset 13
* New contrib ops: BiasSoftmax, MatMulIntegerToFloat, QLinearSigmoid, Trilu
* ORT Mobile now compatible with NNAPI for accelerating model execution on Android devices
* Build support for Mac with Apple Silicon (CPU only)
* New dependency: flatbuffers
* Support for loading sparse tensor initializers in pruned models
* Support for setting the execution priority of a node
* Support for selection of cuDNN conv algorithms
* [BERT Model profiling tool](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/profiler.py)

Performance
* New session option to disable denormal floating point numbers on CPUs with SSE3 support (see the sketch after this list)
  * Eliminates unexpected performance degradation due to denormals without needing to retrain the model
* Option to share initializers between sessions to improve memory utilization, exposed by the AddInitializer API (also shown in the sketch below)
  * Useful when several models that use the same set of initializers, except for the last few layers, are loaded in the same process
  * Eliminates wasteful memory usage where every model (session) creates a separate instance of the same initializer
* Transformer model optimizations
  * Longformer: LongformerAttention CUDA operator added
  * Support for BERT models exported from TensorFlow with 1 or 2 inputs
  * The Python optimizer supports additional models: openai-GPT, ALBERT, and FlauBERT
* Quantization
  * Support for per-channel QuantizeLinear and DeQuantizeLinear
  * Support for LSTM quantization
  * Quantization performance improvements on ARM
  * CNN quantization perf optimizations, including u8s8 support and an NHWC transformer in QLinearConv
* ThreadPool
  * Use `_mm_pause()` in spin loops to improve performance and power consumption
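
A minimal sketch combining both options, assuming the denormal switch uses the `session.set_denormal_as_zero` config key and that the AddInitializer API is exposed in Python as `SessionOptions.add_initializer`; the model paths and initializer name are placeholders:

```python
import numpy as np
import onnxruntime as ort

so = ort.SessionOptions()

# Treat denormal floats as zero to avoid slow paths on SSE3-capable CPUs.
so.add_session_config_entry("session.set_denormal_as_zero", "1")

# Share a single copy of an initializer across sessions.
# "embedding.weight" must match an initializer name in the models.
shared = ort.OrtValue.ortvalue_from_numpy(
    np.zeros((1000, 128), dtype=np.float32))
so.add_initializer("embedding.weight", shared)

sess_a = ort.InferenceSession("model_a.onnx", so)
sess_b = ort.InferenceSession("model_b.onnx", so)  # reuses the same buffer
```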

APIs and Packages
* Python: I/O binding enhancements (a sketch follows this list)
  * [Usage documentation](https://www.onnxruntime.ai/python/api_summary.html) (OrtValue and IOBinding sections)
  * Python binding for the `OrtValue` data structure
    * An interface is exposed to allocate memory on a CUDA-supported device and define its contents, so allocators from other libraries are no longer needed to manage CUDA memory used with ORT
    * Allows consuming ORT-allocated device memory as an `OrtValue` (see Scenario 4 in the IOBinding section of the documentation for an example)
  * `OrtValue` instances can be used to bind inputs and outputs, in addition to the existing interfaces for binding a piece of memory directly or binding numpy arrays; this is particularly useful when binding ORT-allocated device memory
* C: float16 and bfloat16 support
* Windows ML
  * NuGet package now supports UWP applications targeting Windows Store deployment, for both CPU and GPU
  * Minor API improvements:
    * `IIterable<Buffers>` can be bound as inputs and outputs
    * `Tensor*` objects can be created from multiple buffers
  * The WindowsAI redist now includes a statically linked C runtime package for additional deployment options
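
A minimal sketch of the `OrtValue`/IOBinding flow on a CUDA device; the model path and the "input"/"output" names are placeholders that must match the model's actual I/O names:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx",
                            providers=["CUDAExecutionProvider"])

# Copy the input into device memory once, up front.
x = ort.OrtValue.ortvalue_from_numpy(
    np.random.rand(1, 3, 224, 224).astype(np.float32), "cuda", 0)

binding = sess.io_binding()
binding.bind_ortvalue_input("input", x)
binding.bind_output("output", "cuda", 0)  # let ORT allocate on-device

sess.run_with_iobinding(binding)
result = binding.get_outputs()[0].numpy()  # copies back to host
```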

Execution Providers
* DNNL EP updates
  * DNNL updated from 1.1.1 to 1.7
* NNAPI EP updates
  * Support for CNN models
  * Additional operator support: Resize/Flatten/Clip
* TensorRT EP updates
  * Int8 quantization support (experimental)
  * Engine cache refactoring and improvements
  * General fixes and performance improvements
* OpenVINO EP updates
  * OpenVINO 2021.1 support
  * OpenVINO EP now builds as a shared library
  * Multi-threaded inferencing support
  * fp16 input type support
  * Multi-device plugin support
  * Hetero plugin support
  * Build enabled on ARM64
* DirectML EP updates (1.3.0 -> 1.4.0)
  * Uses the first public standalone release of the DirectML API via the [DirectML NuGet package](https://www.nuget.org/packages/Microsoft.AI.DirectML/)
  * General fixes and improvements
* nGraph EP removed; the OpenVINO EP is recommended instead

Additional notes
* VCRuntime2019 with OpenMP: pinning a process to NUMA node 1 forces execution to be single threaded. A fix is in progress in VC++.
  * Workaround: place the VS2017 vcomp DLL side by side so that ORT uses the VS2017 version
* pip version >=20.3 is required for use on macOS Big Sur (11.x)
* The destructor of OrtEnv is now non-trivial and may perform [DLL unloading](https://github.com/microsoft/onnxruntime/blob/rel-1.6.0/onnxruntime/core/session/ort_env.cc#L45). Do not call `ReleaseEnv` from DllMain or put OrtEnv in global variables; it is not safe to call FreeLibrary from DllMain ([reference](https://docs.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-freelibrary)).
* Some unit tests fail on Pascal GPUs. See https://github.com/microsoft/onnxruntime/issues/5914.
* If using the default CPU package (built with OpenMP), consider tuning the OpenMP settings to improve performance. By default, the number of threads used for OpenMP parallel regions is set to the number of logical CPUs. This may not be optimal on machines with hyper-threading: when CPUs are oversubscribed, 99th-percentile latency can be 10x higher. Setting the OMP_NUM_THREADS environment variable to the number of **physical** cores is a good starting point (a sketch follows this list). As noted in the Announcements, future official builds of ORT will be published without OpenMP.
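
A minimal sketch of that starting point from Python; the core count and model path are placeholders, and the variable must be set before onnxruntime is loaded:

```python
import os

# Use the machine's physical core count; 4 is a stand-in value.
os.environ["OMP_NUM_THREADS"] = "4"

import onnxruntime as ort  # imported after setting the variable

sess = ort.InferenceSession("model.onnx")  # placeholder model path
```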

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[gwang-msft](https://github.com/gwang-msft), [snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [wangyems](https://github.com/wangyems), [yufenglee](https://github.com/yufenglee), [yuslepukhin](https://github.com/yuslepukhin), [tianleiwu](https://github.com/tianleiwu), [SherlockNoMad](https://github.com/SherlockNoMad), [tracysh](https://github.com/tracysh), [ryanlai2](https://github.com/ryanlai2), [askhade](https://github.com/askhade), [xadupre](https://github.com/xadupre), [liqunfu](https://github.com/liqunfu), [RandySheriffH](https://github.com/RandySheriffH), [jywu-msft](https://github.com/jywu-msft), [KeDengMS](https://github.com/KeDengMS), [pranavsharma](https://github.com/pranavsharma), [mrry](https://github.com/mrry), [ashbhandare](https://github.com/ashbhandare), [iK1D](https://github.com/iK1D), [RyanUnderhill](https://github.com/RyanUnderhill), [MaajidKhan](https://github.com/MaajidKhan), [wenbingl](https://github.com/wenbingl), [kit1980](https://github.com/kit1980), [weixingzhang](https://github.com/weixingzhang), [tlh20](https://github.com/tlh20), [suffiank](https://github.com/suffiank), [Craigacp](https://github.com/Craigacp), [smkarlap](https://github.com/smkarlap), [stevenlix](https://github.com/stevenlix), [zhanghuanrong](https://github.com/zhanghuanrong), [sfatimar](https://github.com/sfatimar), [ytaous](https://github.com/ytaous), [tiagoshibata](https://github.com/tiagoshibata), [fdwr](https://github.com/fdwr), [oliviajain](https://github.com/oliviajain), [alberto-magni](https://github.com/alberto-magni), [jcwchen](https://github.com/jcwchen), [mosdav](https://github.com/mosdav), [xzhu1900](https://github.com/xzhu1900), [wschin](https://github.com/wschin), [codemzs](https://github.com/codemzs), [duli2012](https://github.com/duli2012), [smk2007](https://github.com/smk2007), [natke](https://github.com/natke), [zhijxu-MS](https://github.com/zhijxu-MS), [manashgoswami](https://github.com/manashgoswami), [zhangxiang1993](https://github.com/zhangxiang1993), [faxu](https://github.com/faxu), [HectorSVC](https://github.com/HectorSVC), [take-cheeze](https://github.com/take-cheeze), [jingyanwangms](https://github.com/jingyanwangms), [chilo-ms](https://github.com/chilo-ms), [YUNQIUGUO](https://github.com/YUNQIUGUO), [jgbradley1](https://github.com/jgbradley1), [jessebenson](https://github.com/jessebenson), [martinb35](https://github.com/martinb35), [Andrews548](https://github.com/Andrews548), [souptc](https://github.com/souptc), [pengwa](https://github.com/pengwa), [liuziyue](https://github.com/liuziyue), [orilevari](https://github.com/orilevari), [BowenBao](https://github.com/BowenBao), [thiagocrepaldi](https://github.com/thiagocrepaldi), [jeffbloo](https://github.com/jeffbloo)
