onnxruntime

1.12.1

Not secure
This patch release addresses packaging issues and includes bug fixes on top of v1.12.0.

- Java package: macOS M1 folder structure fix
- Android package: enable optimizations
- GPU (TensorRT provider): bug fixes
- DirectML: package fix
- WinML: bug fixes

See #12418 for the full list of fixes included

1.12.0

Not secure
Announcements
* For Execution Provider maintainers/owners: the [lightweight compile API](https://github.com/microsoft/onnxruntime/blob/master/include/onnxruntime/core/framework/execution_provider.h#L249) is now the default compiler API for all Execution Providers (this was previously only available for the mobile build). If you have an EP using the [legacy compiler API](https://github.com/microsoft/onnxruntime/blob/master/include/onnxruntime/core/framework/execution_provider.h#L237), please migrate to the lightweight compile API as soon as possible. The legacy API will be deprecated in the next release (ORT 1.13).
* netstandard1.1 support is being deprecated in this release and will be removed in the next ORT 1.13 release

Key Updates
General
* ONNX spec support
* onnx opset 17
* onnx-ml opset 3 (TreeEnsemble update)
* BeamSearch operator for encoder-decoder transformer models
* Support for invoking individual ops without the need to create a separate graph
* For use with custom op development to reuse ORT code
* Support for feeding external initializers (for large models) as byte arrays for model inferencing (see the sketch after this list)
* Build switch to disable usage of abseil library to remove dependency
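
A minimal sketch of the external-initializer feature from Python, assuming the `SessionOptions.add_external_initializers` binding that accompanies it; the initializer name `fc1.weight`, the weight file, and the shape are hypothetical:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical: raw weight bytes kept outside the .onnx file,
# e.g. downloaded or sliced out of a larger blob.
weight_bytes = open("fc1.weight.bin", "rb").read()
weights = np.frombuffer(weight_bytes, dtype=np.float32).reshape(1024, 1024).copy()

so = ort.SessionOptions()
# Pair initializer names with OrtValues so ORT resolves them from memory
# instead of reading external data from disk.
so.add_external_initializers(
    ["fc1.weight"],  # hypothetical initializer name in the model
    [ort.OrtValue.ortvalue_from_numpy(weights)],
)

sess = ort.InferenceSession("model.onnx", sess_options=so)
```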

Packages
* Python 3.10 support
* Mac M1 support in Python and Java packages
* .NET 6/MAUI support in the NuGet C# package
* Additional target frameworks: net6.0, net6.0-android, net6.0-ios, net6.0-macos
* NOTE: netstandard1.1 support is being deprecated in this release and will be removed in the 1.13 release
* [onnxruntime-openvino](https://pypi.org/project/onnxruntime-openvino/1.12.0/) package available on PyPI (from Intel)

Performance and Quantization
* Improved C++ APIs that now utilize RAII for better memory management
* Operator performance optimizations, including GatherElements
* Memory optimizations to support compute-intensive real-time inferencing scenarios (e.g. audio inferencing scenarios)
* CPU usage savings for infrequent inference requests by reducing thread spinning (see the sketch after this list)
* Memory usage reduction through use of containers from the abseil library, especially inlined vectors used to store tensor shapes and inlined hash maps
* New quantized kernels for weight symmetry to improve performance on ARM64 little core (GEMM and Conv)
* Specialized kernel to improve performance of quantized Resize, with up to a 2x speedup
* Improved the thread job partition for QLinearConv, demonstrating up to ~20% perf gain for certain models
* Quantization tool: improved ONNX shape inference for large models
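
The thread-spinning reduction can be opted into per session via a config entry; a minimal sketch, assuming the documented `session.intra_op.allow_spinning` key:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# "0" stops intra-op worker threads from busy-waiting between requests,
# trading a little wake-up latency for lower idle CPU usage.
so.add_session_config_entry("session.intra_op.allow_spinning", "0")

sess = ort.InferenceSession("model.onnx", sess_options=so)
```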

Execution Providers
* TensorRT EP
* TensorRT 8.4 support
* Provide option to share execution context memory between TensorRT subgraphs (see the sketch after this list)
* Workaround long CI test time caused by frequent initialization/de-initialization of TensorRT builder
* Improve subgraph partitioning and consolidate TensorRT subgraphs when possible
* Refactor engine cache serialization/deserialization logic
* Miscellaneous bug fixes and performance improvements
* OpenVINO EP
* Pre-Built ONNXRuntime binaries with OpenVINO now available on pypi: [onnxruntime-openvino](https://pypi.org/project/onnxruntime-openvino/1.12.0/)
* Performance optimizations of existing supported models
* New runtime configuration option `enable_dynamic_shapes` added to enable dynamic shapes for each iteration
* ORTModule included as part of OVEP Python Package to enable Torch ORT Inference
* DirectML EP
* Updated to [DirectML 1.9](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-190)
* Opset 13-15 support: [#11827](https://github.com/microsoft/onnxruntime/pull/11827), [#11814](https://github.com/microsoft/onnxruntime/pull/11814), [#11782](https://github.com/microsoft/onnxruntime/pull/11782), [#11772](https://github.com/microsoft/onnxruntime/pull/11772)
* Bug fixes: [Xbox command list reuse](https://github.com/microsoft/onnxruntime/pull/12063), [descriptor heap reset](https://github.com/microsoft/onnxruntime/pull/12059), [command allocator memory growth](https://github.com/microsoft/onnxruntime/pull/12114), [negative pad counts](https://github.com/microsoft/onnxruntime/pull/11974), [node suffix removal](https://github.com/microsoft/onnxruntime/pull/11879)
* TVM EP - [details](https://onnxruntime.ai/docs/execution-providers/TVM-ExecutionProvider.html)
* Updated to add model .dll ingestion and execution on Windows
* Updated documentation and CI tests
* ***[New]*** SNPE EP - [details](https://onnxruntime.ai/docs/execution-providers/SNPE-ExecutionProvider.html)
* ***[Preview]*** XNNPACK EP - initial infrastructure with limited operator support, for use with ORT Mobile and ORT Web
* Currently supports Conv and MaxPool, with work in progress to add more kernels
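
A sketch of turning on the new TensorRT context-memory sharing (plus the engine cache) from Python; the provider option names are taken from the TensorRT EP documentation and should be treated as assumptions:

```python
import onnxruntime as ort

trt_options = {
    "trt_context_memory_sharing_enable": True,  # share execution context memory across TRT subgraphs
    "trt_engine_cache_enable": True,            # serialize/deserialize engines to a cache directory
    "trt_engine_cache_path": "./trt_cache",
}

sess = ort.InferenceSession(
    "model.onnx",
    providers=[("TensorrtExecutionProvider", trt_options), "CUDAExecutionProvider"],
)
```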

Mobile
* Binary size reductions in Android minimal build - 12% reduction in size of base build with no operator kernels
* Added new operator support to NNAPI and CoreML EPs to improve ability to run super resolution and BERT models using NPU
* NNAPI: DepthToSpace, PRelu, Gather, Unsqueeze, Pad
* CoreML: DepthToSpace, PRelu
* Added [Docker file](https://onnxruntime.ai/docs/build/custom.html#android) to simplify running a custom minimal build to create an ORT Android package
* Initial XNNPACK EP compatibility

Web
* Memory usage optimizations
* Initial XNNPACK EP compatibility

ORT Training
* ***[New]*** ORT Training acceleration is also natively available through [HuggingFace Optimum](https://github.com/huggingface/optimum#training)
* ***[New]*** FusedAdam Optimizer now available through the torch-ort package for easier training integration
* FP16_Optimizer support for more DeepSpeed versions
* Bfloat16 support for AtenOp
* Added gradient ops for ReduceMax and ReduceMin
* Updates to Min and Max grad ops to use distributed logic
* Optimizations
* Optimized perf for Gelu and GeluGrad kernels for mixed precision models
* Enabled fusions for SimplifiedLayerNorm
* Added bitmask versions of Dropout, BiasDropout and DropoutGrad, which bring ~8x space savings for the mask output.

Known issues
* The [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.DirectML) package on NuGet has an issue and will be fixed in a patch. Fix: #12368
* The [Maven package](https://search.maven.org/artifact/com.microsoft.onnxruntime/onnxruntime) has a packaging issue for Mac M1 builds and will be fixed in a patch. Fix: #12335 / [Workaround discussion](https://github.com/microsoft/onnxruntime/issues/11054#issuecomment-1195391571)
* Windows builds are not compatible with Windows 8.x in this release. Please use v1.11 for now.
---
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [edgchen1](https://github.com/edgchen1), [fdwr](https://github.com/fdwr), [skottmckay](https://github.com/skottmckay), [iK1D](https://github.com/iK1D), [fs-eire](https://github.com/fs-eire), [mszhanyi](https://github.com/mszhanyi), [WilBrady](https://github.com/WilBrady), [justinchuby](https://github.com/justinchuby), [tianleiwu](https://github.com/tianleiwu), [PeixuanZuo](https://github.com/PeixuanZuo), [garymm](https://github.com/garymm), [yufenglee](https://github.com/yufenglee), [adrianlizarraga](https://github.com/adrianlizarraga), [yuslepukhin](https://github.com/yuslepukhin), [dependabot[bot]](https://github.com/dependabot[bot]), [chilo-ms](https://github.com/chilo-ms), [vvchernov](https://github.com/vvchernov), [oliviajain](https://github.com/oliviajain), [ytaous](https://github.com/ytaous), [hariharans29](https://github.com/hariharans29), [sumitsays](https://github.com/sumitsays), [wangyems](https://github.com/wangyems), [pengwa](https://github.com/pengwa), [baijumeswani](https://github.com/baijumeswani), [smk2007](https://github.com/smk2007), [RandySheriffH](https://github.com/RandySheriffH), [gramalingam](https://github.com/gramalingam), [xadupre](https://github.com/xadupre), [yihonglyu](https://github.com/yihonglyu), [zhangyaobit](https://github.com/zhangyaobit), [YUNQIUGUO](https://github.com/YUNQIUGUO), [jcwchen](https://github.com/jcwchen), [chenfucn](https://github.com/chenfucn), [souptc](https://github.com/souptc), [chandru-r](https://github.com/chandru-r), [jstoecker](https://github.com/jstoecker), [hanbitmyths](https://github.com/hanbitmyths), [RyanUnderhill](https://github.com/RyanUnderhill), [georgen117](https://github.com/georgen117), [jywu-msft](https://github.com/jywu-msft), [mindest](https://github.com/mindest), [sfatimar](https://github.com/sfatimar), [HectorSVC](https://github.com/HectorSVC), [Craigacp](https://github.com/Craigacp), [jeffdaily](https://github.com/jeffdaily), [zhijxu-MS](https://github.com/zhijxu-MS), [natke](https://github.com/natke), [stevenlix](https://github.com/stevenlix), [jeffbloo](https://github.com/jeffbloo), [guoyu-wang](https://github.com/guoyu-wang), [daquexian](https://github.com/daquexian), [faxu](https://github.com/faxu), [jingyanwangms](https://github.com/jingyanwangms), [adtsai](https://github.com/adtsai), [wschin](https://github.com/wschin), [weixingzhang](https://github.com/weixingzhang), [wenbingl](https://github.com/wenbingl), [MaajidKhan](https://github.com/MaajidKhan), [ashbhandare](https://github.com/ashbhandare), [ajindal1](https://github.com/ajindal1), [zhanghuanrong](https://github.com/zhanghuanrong), [tiagoshibata](https://github.com/tiagoshibata), [askhade](https://github.com/askhade), [liqunfu](https://github.com/liqunfu)

1.11.1

Not secure
This is a patch release on 1.11.0 with the following fixes:

- Symbolic shape infer error (https://github.com/microsoft/onnxruntime/pull/10674)
- Quantization tool bug (https://github.com/microsoft/onnxruntime/pull/10940)
- Adds missing numpy type when looking for the ort correspondence (https://github.com/microsoft/onnxruntime/pull/10943)
- Profiling tool JSON format bug (https://github.com/microsoft/onnxruntime/pull/11046)
- Function bug fix (https://github.com/microsoft/onnxruntime/pull/11148)
- Add mobile helpers to Python build (https://github.com/microsoft/onnxruntime/pull/11196)
- Scoped GIL release in run_with_iobinding (https://github.com/microsoft/onnxruntime/pull/11248) (see the usage sketch after this list)
- Fix output type mapping for JS (https://github.com/microsoft/onnxruntime/pull/11049)
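
For context on the scoped GIL release, a typical run_with_iobinding call looks like the sketch below (the input/output names `x` and `y` are hypothetical); with the fix, other Python threads can make progress while the run executes:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
binding = sess.io_binding()

x = ort.OrtValue.ortvalue_from_numpy(np.zeros((1, 3), dtype=np.float32))
binding.bind_ortvalue_input("x", x)   # hypothetical input name
binding.bind_output("y")              # hypothetical output name; ORT allocates it (CPU by default)

sess.run_with_iobinding(binding)      # the GIL is released for the duration of this call
result = binding.copy_outputs_to_cpu()[0]
```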

All official packages are attached, and Python packages are additionally published to PyPI.

1.11.0

Not secure
Announcements
* OpenCL _(in preview)_
* Introduced the EP for OpenCL to use with Mobile GPUs
* Available in `experimental/opencl` branch for users to try. Provide feedback through Issues and Discussions in the repo.
* README is available [here](https://github.com/microsoft/onnxruntime/blob/experimental/opencl/onnxruntime/core/providers/opencl/README.md).

Mobile
* Added general support for converting a model to NHWC layout at runtime
* The execution provider sets its preferred layout, and shared infrastructure in ORT ensures the nodes assigned to that execution provider use the preferred layout
* Added support for runtime optimization with minimal binary size impact
* Relevant optimizations are saved in the ORT format model for replay at runtime if applicable
* Added support for QDQ format models to the NNAPI EP
* Falls back to the CPU EP's QDQ handling via runtime optimizations if NNAPI is not available
* Includes updates to the ORT QDQ optimizers so they work better with mobile scenarios
* Added helpers (see the sketch after this list) to:
* Analyze if a model can be used with the pre-built ORT Mobile package
* Update the ONNX opset so the model can be used with the pre-built package
* Convert dynamic inputs into fixed size inputs so that the model can be used with NNAPI/CoreML
* Optimize a QDQ format model for use with ORT
* Added Android and iOS packages with full ORT builds
* These packages have additional support for the full set of opsets and ops for ONNX models at the cost of a larger binary size.
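
A sketch of driving two of the helpers; the module paths and flags are assumed from the ORT Mobile docs, and the model file names are hypothetical:

```python
import subprocess
import sys

def run_tool(module, *args):
    # The helpers ship as runnable modules inside the onnxruntime package.
    subprocess.run([sys.executable, "-m", module, *args], check=True)

# Check whether model.onnx can be used with the pre-built ORT Mobile package.
run_tool("onnxruntime.tools.check_onnx_model_mobile_usability", "model.onnx")

# Fix a dynamic batch dimension so the model can be used with NNAPI/CoreML.
run_tool(
    "onnxruntime.tools.make_dynamic_shape_fixed",
    "--dim_param", "batch", "--dim_value", "1",
    "model.onnx", "model.fixed.onnx",
)
```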

Web
* Build option to create ONNX Runtime WebAssembly static library
* Support for concurrent creation of multiple inference sessions
* Upgraded emsdk to version 3.1.3 for more stable multi-threading and to enable LTO with multi-threaded WebAssembly builds

Known issues
* When using tensor sequences/sparse tensors, the generated profile is not valid JSON. (Fixed in https://github.com/microsoft/onnxruntime/pull/10974)
* There is a bug in the quantization tool when using the percentile calibration algorithm (fixed in https://github.com/microsoft/onnxruntime/pull/10940). To work around it, apply the typo fix in the Python file.
* Mac M

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [edgchen1](https://github.com/edgchen1), [skottmckay](https://github.com/skottmckay), [yufenglee](https://github.com/yufenglee), [wangyems](https://github.com/wangyems), [yuslepukhin](https://github.com/yuslepukhin), [gwang-msft](https://github.com/gwang-msft), [iK1D](https://github.com/iK1D), [chilo-ms](https://github.com/chilo-ms), [fdwr](https://github.com/fdwr), [ytaous](https://github.com/ytaous), [RandySheriffH](https://github.com/RandySheriffH), [hanbitmyths](https://github.com/hanbitmyths), [chenfucn](https://github.com/chenfucn), [yihonglyu](https://github.com/yihonglyu), [ajindal1](https://github.com/ajindal1), [fs-eire](https://github.com/fs-eire), [souptc](https://github.com/souptc), [tianleiwu](https://github.com/tianleiwu), [YUNQIUGUO](https://github.com/YUNQIUGUO), [hariharans29](https://github.com/hariharans29), [oliviajain](https://github.com/oliviajain), [xadupre](https://github.com/xadupre), [ashari4](https://github.com/ashari4), [RyanUnderhill](https://github.com/RyanUnderhill), [jywu-msft](https://github.com/jywu-msft), [weixingzhang](https://github.com/weixingzhang), [baijumeswani](https://github.com/baijumeswani), [georgen117](https://github.com/georgen117), [natke](https://github.com/natke), [Craigacp](https://github.com/Craigacp), [jeffdaily](https://github.com/jeffdaily), [JingqiaoFu](https://github.com/JingqiaoFu), [zhanghuanrong](https://github.com/zhanghuanrong), [satyajandhyala](https://github.com/satyajandhyala), [smk2007](https://github.com/smk2007), [ryanlai2](https://github.com/ryanlai2), [askhade](https://github.com/askhade), [thiagocrepaldi](https://github.com/thiagocrepaldi), [jingyanwangms](https://github.com/jingyanwangms), [pengwa](https://github.com/pengwa), [scxiao](https://github.com/scxiao), [ashbhandare](https://github.com/ashbhandare), [BowenBao](https://github.com/BowenBao), [SherlockNoMad](https://github.com/SherlockNoMad), [sumitsays](https://github.com/sumitsays), [sfatimar](https://github.com/sfatimar), [mosdav](https://github.com/mosdav), [harshithapv](https://github.com/harshithapv), [liqunfu](https://github.com/liqunfu), [tiagoshibata](https://github.com/tiagoshibata), [gineshidalgo99](https://github.com/gineshidalgo99), [pranavsharma](https://github.com/pranavsharma), [jcwchen](https://github.com/jcwchen), [nkreeger](https://github.com/nkreeger), [xkszltl](https://github.com/xkszltl), [faxu](https://github.com/faxu), [suffiank](https://github.com/suffiank), [stevenlix](https://github.com/stevenlix), [jeffbloo](https://github.com/jeffbloo), [feihugis](https://github.com/feihugis)

1.10.0

Not secure
Announcements
* As noted in the [deprecation notice](https://github.com/microsoft/onnxruntime/blob/4daa14bc74b5378d5fcb0d6de063a9fa8bd42eac/onnxruntime/python/onnxruntime_inference_collection.py#L350) in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider,
e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider']) (see the sketch after this list)
* Python 3.6 support removed for Mac builds. Since Python 3.6 reached end-of-life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards
* Removed dependency on [optional-lite](https://github.com/martinmoene/optional-lite)
* Removed experimental Featurizers code
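
Concretely, a session that should run on CUDA now has to name the provider; a minimal sketch:

```python
import onnxruntime as ort

# With the GPU package, omitting providers now raises an error rather than
# silently registering CUDA; list the EPs in priority order instead.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```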

General

* Support for plug-in custom thread creation and join functions to enable usage of external threads
* Optional type support from op set 15

Performance
* Introduced an indirect convolution method for QLinearConv with a symmetrically quantized filter (i.e., the filter type is int8 and its zero point is 0). The method uses an indirection buffer instead of memcpy'ing the original data and avoids computing the per-pixel output sums required by quantized Conv.
* X64: new kernels for general and depthwise quantized Conv, covering AVX2, AVX-VNNI, AVX-512, and AVX-512 VNNI.
* ARM64: new kernels for depthwise quantized Conv.
* Tensor shape optimization to avoid allocating heap memory in most cases - [9542](https://github.com/microsoft/onnxruntime/pull/9542)
* Added transpose optimizer to push and cancel transpose ops, significantly improving perf for models requiring layout transformation

API
* Python
* Following through on the [deprecation notice](https://github.com/microsoft/onnxruntime/blob/4daa14bc74b5378d5fcb0d6de063a9fa8bd42eac/onnxruntime/python/onnxruntime_inference_collection.py#L350) in ORT 1.9, InferenceSession now requires the providers parameter to be set when enabling Execution Providers other than the default CPUExecutionProvider.
e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
* C/C++
* New API to query CUDA stream to launch a custom kernel for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - [9141](https://github.com/microsoft/onnxruntime/pull/9141)
* Renamed Invalid to OrtInvalidAllocator
* Renamed every item in OrtCudnnConvAlgoSearch to a safer global name
* WinML
* New APIs to create OrtValues from Windows platform-specific ID3D12Resources by exposing DirectML Execution Provider-specific APIs. These APIs allow DML to extend the C API and provide EP-specific extensions.
* OrtSessionOptionsAppendExecutionProviderEx_DML
* DmlCreateGPUAllocationFromD3DResource
* DmlFreeGPUAllocation
* DmlGetD3D12ResourceFromAllocation
* Bug fix: LearningModel::LoadFromFilePath in UWP apps

Packages
* Added Mac M1 Universal2 build support for a single binary that runs natively on both Apple silicon and Intel-based Macs. These are included in the official Nuget packages. ([build instructions](https://onnxruntime.ai/docs/build/inferencing.html#macos))
* Windows C API Symbols are now uploaded to [Microsoft symbol server](https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/microsoft-public-symbols)
* [Nuget package](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime) now supports ARM64 Linux C#
* [Python GPU package](https://pypi.org/project/onnxruntime-gpu/) now includes both TensorRT and CUDA EPs. *Note: EPs need to be explicitly registered to ensure the correct provider is used. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure you have appropriate [TensorRT dependencies](https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html#requirements) and [CUDA dependencies](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) installed.*

Execution Providers
* TensorRT EP
* Python GPU release packages now include support for TensorRT 8.0. Enable TensorrtExecutionProvider by explicitly setting the providers parameter when creating an InferenceSession, e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
* Published [quantized BERT model example](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/quantization/nlp/bert/trt)
* OpenVINO EP
* Add support for OpenVINO 2021.4.x
* Auto Plugin support
* IO Buffer/Copy Avoidance Optimizations for GPU plugin
* Misc fixes
* DNNL EP
* Add SoftmaxGrad op
* Add Transpose, Reshape, Pow and LeakyRelu ops
* Add DynamicQuantizeLinear op
* Add squeeze/unsqueeze ops
* DirectML EP
* [Update](https://github.com/microsoft/onnxruntime/pull/9765) DirectML.dll from [1.5.1](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.5.1) to [1.8.0](https://www.nuget.org/packages/Microsoft.AI.DirectML/1.8.0)
* Support full precision uint64/int64 for [48](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-180) operators
* Add 8D tensor support for [7](https://github.com/microsoft/DirectML/blob/master/Releases.md#directml-160) more existing operators
* Add DynamicQuantizeLinear op
* Accept ID3D12Resources via the [C API](https://github.com/microsoft/onnxruntime/pull/9686)

Mobile
* Added Xamarin support to the ORT C# NuGet packages
* Updated target frameworks in native package
* iOS and Android binaries now included in native package
* ORT format models now have a backwards-compatibility guarantee

Web
* Added WebAssembly SIMD support to the QGEMM kernel to accelerate the performance of quantized models
* Upgraded existing WebGL kernels to the latest opset
* Optimized bundle size to support various production scenarios, such as WebAssembly only or WebGL only
---
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[snnn](https://github.com/snnn), [gineshidalgo99](https://github.com/gineshidalgo99), [fs-eire](https://github.com/fs-eire), [gwang-msft](https://github.com/gwang-msft), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [skottmckay](https://github.com/skottmckay), [jeffdaily](https://github.com/jeffdaily), [baijumeswani](https://github.com/baijumeswani), [fdwr](https://github.com/fdwr), [smk2007](https://github.com/smk2007), [suffiank](https://github.com/suffiank), [souptc](https://github.com/souptc), [RyanUnderhill](https://github.com/RyanUnderhill), [iK1D](https://github.com/iK1D), [yuslepukhin](https://github.com/yuslepukhin), [chilo-ms](https://github.com/chilo-ms), [satyajandhyala](https://github.com/satyajandhyala), [hanbitmyths](https://github.com/hanbitmyths), [thiagocrepaldi](https://github.com/thiagocrepaldi), [wschin](https://github.com/wschin), [tianleiwu](https://github.com/tianleiwu), [pengwa](https://github.com/pengwa), [xadupre](https://github.com/xadupre), [zhanghuanrong](https://github.com/zhanghuanrong), [SherlockNoMad](https://github.com/SherlockNoMad), [wangyems](https://github.com/wangyems), [RandySheriffH](https://github.com/RandySheriffH), [ashbhandare](https://github.com/ashbhandare), [tiagoshibata](https://github.com/tiagoshibata), [yufenglee](https://github.com/yufenglee), [mindest](https://github.com/mindest), [sumitsays](https://github.com/sumitsays), [MaajidKhan](https://github.com/MaajidKhan), [gramalingam](https://github.com/gramalingam), [tracysh](https://github.com/tracysh), [georgen117](https://github.com/georgen117), [jywu-msft](https://github.com/jywu-msft), [sfatimar](https://github.com/sfatimar), [martinb35](https://github.com/martinb35), [nkreeger](https://github.com/nkreeger), [ytaous](https://github.com/ytaous), [ashari4](https://github.com/ashari4), [stevenlix](https://github.com/stevenlix), [chandru-r](https://github.com/chandru-r), [jingyanwangms](https://github.com/jingyanwangms), [mosdav](https://github.com/mosdav), [raviskolli](https://github.com/raviskolli), [faxu](https://github.com/faxu), [liqunfu](https://github.com/liqunfu), [kit1980](https://github.com/kit1980), [weixingzhang](https://github.com/weixingzhang), [pranavsharma](https://github.com/pranavsharma), [jcwchen](https://github.com/jcwchen), [chenfucn](https://github.com/chenfucn), [BowenBao](https://github.com/BowenBao), [jeffbloo](https://github.com/jeffbloo)

1.9.1

This is a patch release on 1.9.0 with the following fixes:

- Microsoft.AI.MachineLearning NuGet package fixes
- Bug fix for GPU execution failing when the executable is on a path containing Unicode characters - [9229](https://github.com/microsoft/onnxruntime/pull/9229)
- Bug fix so the NuGet package can be installed in UWP apps with 1.9 - [9182](https://github.com/microsoft/onnxruntime/pull/9182)
- Bug fix for the OpenVINO EP Python API - [9166](https://github.com/microsoft/onnxruntime/pull/9166)
- Bumped the TVM version for the NUPHAR EP - [9159](https://github.com/microsoft/onnxruntime/pull/9159)
- Fixed build issue for iOS 11 and earlier versions - [9036](https://github.com/microsoft/onnxruntime/pull/9036)
