ONNX Runtime


1.7.1

The [Microsoft.ML.OnnxRuntime.Gpu](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/) and [Microsoft.ML.OnnxRuntime.Managed](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Managed/) packages are uploaded to Nuget.org. Please note the version numbers for the Microsoft.ML.OnnxRuntime.Managed package.

1.7.0

Announcements
Starting from this release, all ONNX Runtime CPU packages are now built *without OpenMP*. A version *with OpenMP* is available on Nuget ([Microsoft.ML.OnnxRuntime.OpenMP](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.OpenMP)) and PyPi ([onnxruntime-openmp](https://pypi.org/project/onnxruntime-openmp/)). Please report any issues in [GH Issues](https://github.com/microsoft/onnxruntime/issues).

**Note:** The 1.7.0 GPU package is uploaded on [this Azure DevOps Feed](https://aiinfra.visualstudio.com/PublicPackages/_packaging?_a=feed&feed=ORT) due to the size limit on Nuget.org. Please use [1.7.1](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.Gpu/1.7.1) for the GPU package through Nuget.

Key Feature Updates

General
* Mobile
* Custom operators now supported in the ONNX Runtime Mobile build
* Added ability to reduce types supported by operator kernels to only the types required by the models
* Expect a 25-33% reduction in binary size contribution from the kernel implementations. The reduction is model dependent, but testing with common models such as MobileNet v2, SSD MobileNet, and MobileBERT achieved reductions in this range.
* Custom op support for dynamic input
* MKLML/openblas/jemalloc build configs removed
* Removed dependency on gemmlowp
* *[Experimental]* Audio Operators
* Fourier Transforms (DFT, IDFT, STFT), Windowing Functions (Hann, Hamming, Blackman), and a MelWeightMatrix operator in the "com.microsoft.experimental" domain
* Buildable using the ms_experimental build flag (included in the [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) NuGet package)

Performance
* Quantization
* Quantization tool now supports quantization of models in QDQ (QuantizeLinear-DequantizeLinear) format (see the sketch after this list)
* Depthwise Conv quantization performance improvement
* Quantization support added for Pad, Split and MaxPool for channel last
* QuantizeLinear performance improvement on AVX512
* Optimization: Fusion for Conv + Mul/Add
* Transformers
* Longformer Attention CUDA kernel memory footprint reduction
* Einsum Float16 CUDA kernel for ALBERT and XLNet
* [Python optimizer tool](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/optimizer.py) now supports fusion for BART
* [CPU profiling tool](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/profiler.py) for transformers models
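For reference, below is a minimal sketch of quantizing a model into QDQ format with the Python quantization tool. The model path, input name, and random calibration batches are hypothetical placeholders, and the keyword arguments are shown as they appear in current `onnxruntime.quantization` releases; exact signatures in the 1.7 timeframe may differ.

```python
# Minimal sketch: static quantization to QDQ format (assumed API surface).
# "model.onnx", the input name "input", and the random calibration batches
# are placeholders; a real workflow feeds representative data instead.
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, QuantType, quantize_static,
)

class RandomDataReader(CalibrationDataReader):
    """Feeds a few random batches to the calibrator."""
    def __init__(self, input_name, shape, num_batches=8):
        self._batches = iter(
            {input_name: np.random.rand(*shape).astype(np.float32)}
            for _ in range(num_batches)
        )

    def get_next(self):
        return next(self._batches, None)

quantize_static(
    "model.onnx",                    # float32 input model
    "model.qdq.onnx",                # quantized output model
    RandomDataReader("input", (1, 3, 224, 224)),
    quant_format=QuantFormat.QDQ,    # emit QuantizeLinear/DequantizeLinear pairs
    activation_type=QuantType.QUInt8,
    weight_type=QuantType.QInt8,
)
```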

APIs and Packages
* Python 3.8 and 3.9 support added for all platforms; support for 3.5 removed
* ARM32/64 Windows builds are now included in the CPU Nuget and zip packages
* WinML
* .NET5 support - will work with .NET5 Standard 2.0 Projections
* Image descriptors expose NominalPixelRange properties
* Native support added for additional pixel ranges [0..1] and [-1..1] in image models
* A new ImageNominalPixelRange property is added to the ImageFeatureDescriptor runtimeclass, alongside the existing BitmapPixelFormat and BitmapAlphaMode properties
* Bug fixes and performance improvements, including [6249](https://github.com/microsoft/onnxruntime/issues/6249)
* *[Experimental]* Model Building API available under the Microsoft.AI.MachineLearning.Experimental namespace. (included in [Microsoft.AI.MachineLearning](https://www.nuget.org/packages/Microsoft.AI.MachineLearning) NuGet package)
* Can be used to create dynamic models on the fly, enabling engine-optimized and hardware-accelerated dynamic tensor featurization - [code sample](https://github.com/microsoft/onnxruntime/blob/87cb6fd495c046dac88893818478bb027969d611/winml/test/api/LearningModelSessionAPITest.cpp#L759)

Execution Providers
* CUDA EP
* Official GPU build now built with CUDA 11 (see the EP selection sketch after this list)
* OpenVINO EP
* Support for OpenVINO 2021.2
* Deprecated support for OpenVINO 2020.2
* Support for OpenVINO EP options in onnxruntime_perf_test tool
* General fixes
* TensorRT EP
* Support for TensorRT 7.2
* General fixes and perf improvements
* DirectML EP
* Support for [DirectML 1.4.2](https://github.com/microsoft/DirectML/blob/master/Releases.md)
* DirectML PIX markers added to enable profiling the graph at the operator level
* NNAPI EP
* Performance improvement for quantized models
* Support for per-channel quantization in QLinearConv
* Additional operator support – Min/Max/Pow
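As a quick illustration of using the GPU package, the Python sketch below requests the CUDA EP with CPU fallback; the model path is a placeholder, and provider selection is shown with the `providers` constructor argument of the current Python API.

```python
# Sketch: run a model on the CUDA EP, falling back to CPU for unsupported nodes.
# "model.onnx" is a placeholder path.
import onnxruntime as ort

print(ort.get_available_providers())  # e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider']

session = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```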

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[edgchen1](https://github.com/edgchen1), [snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [gwang-msft](https://github.com/gwang-msft), [hariharans29](https://github.com/hariharans29), [tianleiwu](https://github.com/tianleiwu), [xadupre](https://github.com/xadupre), [yufenglee](https://github.com/yufenglee), [ryanlai2](https://github.com/ryanlai2), [wangyems](https://github.com/wangyems), [suffiank](https://github.com/suffiank), [liqunfu](https://github.com/liqunfu), [orilevari](https://github.com/orilevari), [baijumeswani](https://github.com/baijumeswani), [weixingzhang](https://github.com/weixingzhang), [pranavsharma](https://github.com/pranavsharma), [RandySheriffH](https://github.com/RandySheriffH), [ashbhandare](https://github.com/ashbhandare), [oliviajain](https://github.com/oliviajain), [smk2007](https://github.com/smk2007), [tracysh](https://github.com/tracysh), [stevenlix](https://github.com/stevenlix), [fs-eire](https://github.com/fs-eire), [Craigacp](https://github.com/Craigacp), [faxu](https://github.com/faxu), [mrry](https://github.com/mrry), [codemzs](https://github.com/codemzs), [chilo-ms](https://github.com/chilo-ms), [jcwchen](https://github.com/jcwchen), [zhanghuanrong](https://github.com/zhanghuanrong), [SherlockNoMad](https://github.com/SherlockNoMad), [iK1D](https://github.com/iK1D), [askhade](https://github.com/askhade), [zhangxiang1993](https://github.com/zhangxiang1993), [yuslepukhin](https://github.com/yuslepukhin), [tlh20](https://github.com/tlh20), [MaajidKhan](https://github.com/MaajidKhan), [wschin](https://github.com/wschin), [smkarlap](https://github.com/smkarlap), [wenbingl](https://github.com/wenbingl), [pengwa](https://github.com/pengwa), [duli2012](https://github.com/duli2012), [natke](https://github.com/natke), [alberto-magni](https://github.com/alberto-magni), [Tixxx](https://github.com/Tixxx), [HectorSVC](https://github.com/HectorSVC), [jingyanwangms](https://github.com/jingyanwangms), [jstoecker](https://github.com/jstoecker), [kit1980](https://github.com/kit1980), [suryasidd](https://github.com/suryasidd), [RandyShuai](https://github.com/RandyShuai), [sfatimar](https://github.com/sfatimar), [jywu-msft](https://github.com/jywu-msft), [liuziyue](https://github.com/liuziyue), [mosdav](https://github.com/mosdav), [thiagocrepaldi](https://github.com/thiagocrepaldi), [souptc](https://github.com/souptc), [fdwr](https://github.com/fdwr)

1.6.0

Announcements
* OpenMP will be disabled in future official builds (build option will still be available). A NoOpenMP version of ONNX Runtime is now available with this release on [Nuget](http://nuget.org/packages/Microsoft.ML.OnnxRuntime.NoOpenMP) and [PyPi](https://pypi.org/project/onnxruntime/) for C/C++/C#/Python users.
* In the next release, *MKL-ML*, *openblas*, and *jemalloc* build options will be removed, and the [Microsoft.ML.OnnxRuntime.MKLML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.MKLML/) Nuget package will no longer be published. Users of *MKL-ML* are recommended to use the Intel EPs. If you are using these options and identify issues switching to an alternative build, please [file an issue](https://github.com/microsoft/onnxruntime/issues) with details.

Key Feature Updates
General
* [ONNX 1.8](https://github.com/onnx/onnx/releases/tag/v1.8.0) support / opset 13
* New contrib ops: BiasSoftmax, MatMulIntegerToFloat, QLinearSigmoid, Trilu
* ORT Mobile now compatible with NNAPI for accelerating model execution on Android devices
* Build support for Mac with Apple Silicon (CPU only)
* New dependency: flatbuffers
* Support for loading sparse tensor initializers in pruned models
* Support for setting the execution priority of a node
* Support for selection of cuDNN conv algorithms
* [BERT Model profiling tool](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/profiler.py)

Performance
* New session option to disable denormal floating-point numbers on CPUs with SSE3 support
* Eliminates unexpected performance degradation due to denormals without needing to retrain the model
* Option to share initializers between sessions to improve memory utilization
* Useful when several models that share the same set of initializers (differing only in the last few layers) are loaded in the same process
* Eliminates wasteful memory usage when every model (session) creates a separate instance of the same initializer
* Exposed by the AddInitializer API (see the sketch after this list)
* Transformer model optimizations
* Longformer: LongformerAttention CUDA operator added
* Support for BERT models exported from TensorFlow with 1 or 2 inputs
* Python optimizer supports additional models: openai-GPT, ALBERT and FlauBERT
* Quantization
* Support for per-channel QuantizeLinear and DequantizeLinear
* Support for LSTM quantization
* Quantization performance improvement on ARM
* CNN quantization perf optimizations, including u8s8 support and NHWC transformer in QLinearConv
* ThreadPool
* Use `_mm_pause()` for spin loop to improve performance and power consumption
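The Python sketch below exercises both of the options above: the denormal session config entry and initializer sharing via AddInitializer. The config key string and the `add_initializer` call reflect the current API surface and should be checked against the 1.6 docs; model paths and the initializer name are placeholders.

```python
# Sketch: flush denormals to zero and share one initializer across two sessions.
# "shared_weight", "model_a.onnx", and "model_b.onnx" are hypothetical names.
import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()

# Treat denormal floats as zero to avoid slow CPU paths
# (config key as defined in onnxruntime_session_options_config_keys.h).
opts.add_session_config_entry("session.set_denormal_as_zero", "1")

# Back the named initializer with a single OrtValue so each session reuses it
# instead of allocating its own copy in its arena.
weight = ort.OrtValue.ortvalue_from_numpy(
    np.ones((1024, 1024), dtype=np.float32))
opts.add_initializer("shared_weight", weight)

sess_a = ort.InferenceSession("model_a.onnx", opts)
sess_b = ort.InferenceSession("model_b.onnx", opts)
```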

APIs and Packages
* Python - I/O Binding enhancements
* [Usage Documentation](https://www.onnxruntime.ai/python/api_summary.html) (OrtValue and IOBinding sections)
* Python binding for the `OrtValue` data structure
* An interface is exposed to allocate memory on a CUDA-supported device and define its contents. Users no longer need allocators from other libraries to allocate and manage CUDA memory for use with ORT.
* Allows consuming ORT-allocated device memory as an `OrtValue` (see Scenario 4 in the IOBinding section of the documentation for an example)
* `OrtValue` instances can be used to bind inputs/outputs, in addition to the existing interfaces that allow binding a piece of memory directly or via numpy arrays. This is particularly useful when binding ORT-allocated device memory (see the sketch after this list).
* C - float16 and bfloat16 support
* Windows ML
* NuGet package now supports UWP applications targeting Windows Store deployment for both CPU and GPU
* Minor API Improvements:
* Able to bind IIterable<Buffers> as inputs and outputs
* Able to create Tensor* via multiple buffers
* WindowsAI Redist now includes a statically linked C-Runtime package for additional deployment options
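A condensed sketch of the OrtValue/IOBinding flow described above (the linked documentation covers these scenarios in full). The model path, input/output names, and shapes are placeholders.

```python
# Sketch: bind a CUDA-resident input and a preallocated CUDA output via IOBinding.
# "model.onnx", "input", "output", and the shapes are hypothetical.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx", providers=["CUDAExecutionProvider"])

# Copy a numpy array onto the CUDA device as an OrtValue.
x = ort.OrtValue.ortvalue_from_numpy(
    np.random.rand(1, 3, 224, 224).astype(np.float32), "cuda", 0)

# Ask ORT to allocate the output buffer on the same device.
y = ort.OrtValue.ortvalue_from_shape_and_type((1, 1000), np.float32, "cuda", 0)

binding = session.io_binding()
binding.bind_ortvalue_input("input", x)
binding.bind_ortvalue_output("output", y)
session.run_with_iobinding(binding)

print(y.numpy()[0, :5])  # copies the result back to the host for inspection
```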

Execution Providers
* DNNL EP Updates
* DNNL updated from 1.1.1 to 1.7
* NNAPI EP Updates
* Support for CNN models
* Additional operator support - Resize/Flatten/Clip
* TensorRT EP Updates
* Int8 quantization support (experimental)
* Engine cache refactoring and improvements
* General fixes and performance improvements
* OpenVINO EP Updates
* OpenVINO 2021.1 support
* OpenVINO EP builds as shared library
* Multi-threaded inferencing support
* fp16 input type support
* Multi-device plugin support
* Hetero plugin support
* Enable build on ARM64
* DirectML EP Updates (1.3.0 -> 1.4.0)
* Utilizing the first public standalone release of the DirectML API through the [DirectML NuGet package](https://www.nuget.org/packages/Microsoft.AI.DirectML/) release
* General fixes and improvements
* nGraph EP is removed. We recommend using the OpenVINO EP instead.

Additional notes
* VCRuntime2019 with OpenMP: pinning a process to NUMA node 1 forces execution to be single-threaded. A fix is in progress in VC++.
* Workaround: place the VS2017 vcomp DLL side-by-side so that ORT uses the VS2017 version
* Pip version >=20.3 is required for use on macOS Big Sur (11.x)
* The destructor of OrtEnv is now non-trivial and may do [DLL unloading](https://github.com/microsoft/onnxruntime/blob/rel-1.6.0/onnxruntime/core/session/ort_env.cc#L45). Do not call `ReleaseEnv` from `DllMain` or put OrtEnv in global variables; it is not safe to call `FreeLibrary` from `DllMain` - [reference](https://docs.microsoft.com/en-us/windows/win32/api/libloaderapi/nf-libloaderapi-freelibrary)
* Some unit tests fail on Pascal GPUs. See: https://github.com/microsoft/onnxruntime/issues/5914
* If using the default CPU package (built with OpenMP), consider tuning the OpenMP settings to improve performance. By default, the number of threads used for OpenMP parallel regions is set to the number of logical CPUs. This may not be optimal on machines with hyper-threading: when CPUs are oversubscribed, 99th-percentile latency can be 10x higher. Setting the OMP_NUM_THREADS environment variable to the number of **physical** cores is a good starting point (see the sketch below). As noted in Announcements, future official builds of ORT will be published without OpenMP.
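As a starting point for the last note above, here is a sketch of pinning OpenMP to physical cores from Python; `psutil` is an assumed extra dependency used only to count physical cores, and the variable must be set before onnxruntime is imported.

```python
# Sketch: set OMP_NUM_THREADS to the physical core count before ORT loads.
# psutil is an assumed helper dependency; "model.onnx" is a placeholder path.
import os
import psutil

os.environ["OMP_NUM_THREADS"] = str(psutil.cpu_count(logical=False))

import onnxruntime as ort  # OpenMP reads the variable when the library loads
session = ort.InferenceSession("model.onnx")
```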

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
[gwang-msft](https://github.com/gwang-msft), [snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [edgchen1](https://github.com/edgchen1), [hariharans29](https://github.com/hariharans29), [wangyems](https://github.com/wangyems), [yufenglee](https://github.com/yufenglee), [yuslepukhin](https://github.com/yuslepukhin), [tianleiwu](https://github.com/tianleiwu), [SherlockNoMad](https://github.com/SherlockNoMad), [tracysh](https://github.com/tracysh), [ryanlai2](https://github.com/ryanlai2), [askhade](https://github.com/askhade), [xadupre](https://github.com/xadupre), [liqunfu](https://github.com/liqunfu), [RandySheriffH](https://github.com/RandySheriffH), [jywu-msft](https://github.com/jywu-msft), [KeDengMS](https://github.com/KeDengMS), [pranavsharma](https://github.com/pranavsharma), [mrry](https://github.com/mrry), [ashbhandare](https://github.com/ashbhandare), [iK1D](https://github.com/iK1D), [RyanUnderhill](https://github.com/RyanUnderhill), [MaajidKhan](https://github.com/MaajidKhan), [wenbingl](https://github.com/wenbingl), [kit1980](https://github.com/kit1980), [weixingzhang](https://github.com/weixingzhang), [tlh20](https://github.com/tlh20), [suffiank](https://github.com/suffiank), [Craigacp](https://github.com/Craigacp), [smkarlap](https://github.com/smkarlap), [stevenlix](https://github.com/stevenlix), [zhanghuanrong](https://github.com/zhanghuanrong), [sfatimar](https://github.com/sfatimar), [ytaous](https://github.com/ytaous), [tiagoshibata](https://github.com/tiagoshibata), [fdwr](https://github.com/fdwr), [oliviajain](https://github.com/oliviajain), [alberto-magni](https://github.com/alberto-magni), [jcwchen](https://github.com/jcwchen), [mosdav](https://github.com/mosdav), [xzhu1900](https://github.com/xzhu1900), [wschin](https://github.com/wschin), [codemzs](https://github.com/codemzs), [duli2012](https://github.com/duli2012), [smk2007](https://github.com/smk2007), [natke](https://github.com/natke), [zhijxu-MS](https://github.com/zhijxu-MS), [manashgoswami](https://github.com/manashgoswami), [zhangxiang1993](https://github.com/zhangxiang1993), [faxu](https://github.com/faxu), [HectorSVC](https://github.com/HectorSVC), [take-cheeze](https://github.com/take-cheeze), [jingyanwangms](https://github.com/jingyanwangms), [chilo-ms](https://github.com/chilo-ms), [YUNQIUGUO](https://github.com/YUNQIUGUO), [jgbradley1](https://github.com/jgbradley1), [jessebenson](https://github.com/jessebenson), [martinb35](https://github.com/martinb35), [Andrews548](https://github.com/Andrews548), [souptc](https://github.com/souptc), [pengwa](https://github.com/pengwa), [liuziyue](https://github.com/liuziyue), [orilevari](https://github.com/orilevari), [BowenBao](https://github.com/BowenBao), [thiagocrepaldi](https://github.com/thiagocrepaldi), [jeffbloo](https://github.com/jeffbloo)

1.5.3

This is a minor patch release on [1.5.2](https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.2) with the following changes:
* Fix shared provider unload crash (#5553)
* Minor minimal build header fix

1.5.2

This is a minor patch release on [1.5.1](https://github.com/Microsoft/onnxruntime/releases/tag/v1.5.1) with the following changes:
* Remove dependency on cudnn64_7.dll for GPU C nuget: https://github.com/microsoft/onnxruntime/pull/5386
* Add config keys header file in the packages for Linux and Mac: https://github.com/microsoft/onnxruntime/pull/5388
* Add flatbuffers verifier for ORT format buffer: https://github.com/microsoft/onnxruntime/pull/5378
* Use official flatbuffers v1.12: https://github.com/microsoft/onnxruntime/pull/5392
* Mitigate pybind11 build break using Xcode 12 on macOS: https://github.com/microsoft/onnxruntime/pull/5381
* Support trilinear sampling in the Resize operator: https://github.com/microsoft/onnxruntime/pull/5300
* Update TensorRT parser to fix accuracy issue in some opset11 models: https://github.com/microsoft/onnxruntime/pull/5442


orttraining_rc3.1
Fixes an issue discovered during validation.

Changes:
- https://github.com/microsoft/onnxruntime/pull/5350

orttraining_rc3
See: https://github.com/microsoft/onnxruntime/releases/tag/v1.5.1

1.5.1

Key Updates
General
* Reduced Operator Kernel build allows ORT binaries to be built with only required operators in the model(s) - [learn more](https://github.com/microsoft/onnxruntime/blob/master/docs/Reduced_Operator_Kernel_build.md)
* **[Preview]** ORT for Mobile Platforms - minimizes build size for mobile and embedded devices - [learn more](https://github.com/microsoft/onnxruntime/blob/master/docs/ONNX_Runtime_for_Mobile_Platforms.md)
* Transformer model inferencing performance optimizations
* Perf improvement for DistilBERT
* Benchmark tool supports more pretrained models
* Improvements in quantization tool
* Support for models produced by quantization-aware training
* Calibration tool now supports general preprocessing and calibration on input data
* Simplified the quantization APIs (see the sketch after this list)
* Support for models larger than 2 GB
* New operators for static quantization: QLinearMul, QLinearAdd, QLinearSigmoid and QLinearLeakyRelu
* Prepack constant matrix B for float GEMM (MatMul, Attention)
* Limited Python 3.8 support added in addition to 3.5-3.7 for official Python packages. Not yet supported for Windows GPU and Linux ARM builds.
* Telemetry enabled in Java and NodeJS packages for Windows builds. Note: data is not directly sent to Microsoft or ORT teams by ONNX Runtime; enabling telemetry means trace events are collected by the Windows operating system and may be sent to the cloud based on the user's privacy settings - [learn more](https://github.com/microsoft/onnxruntime/blob/master/docs/Privacy.md).
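For example, the simplified dynamic-quantization entry point can be as small as the sketch below; the model paths are placeholders, and the signature shown is the one documented for recent releases, so 1.5-era arguments may differ slightly.

```python
# Sketch: one-call dynamic quantization with the simplified API.
# "model.onnx" and "model.quant.onnx" are placeholder paths.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    "model.onnx",
    "model.quant.onnx",
    weight_type=QuantType.QUInt8,  # store weights as uint8
)
```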

API
* Python API support for RegisterCustomOpsLibrary (see the sketch after this list)
* IO Binding API for C/C++/C# language bindings. This allows use of pre-allocated buffers on targeted devices, as well as specifying a target device for outputs with unknown shapes.
* Sharing of allocators between multiple sessions. This allows much better utilization of memory by not creating a separate arena for each session in the same process. See [this](https://github.com/microsoft/onnxruntime/blob/rel-1.5.1/docs/C_API.md) for details.
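A sketch of the Python custom-ops entry point mentioned above; the shared-library and model paths are hypothetical.

```python
# Sketch: load a custom-op shared library before creating the session.
# "./libmy_custom_ops.so" and "model_with_custom_op.onnx" are hypothetical.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.register_custom_ops_library("./libmy_custom_ops.so")

session = ort.InferenceSession("model_with_custom_op.onnx", opts)
```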

Windows ML
* NuGet package now supports UWP applications targeting Windows Store deployment (CPU only)
* NuGet package now supports .NET and .NET framework applications
* Rust developers can now deploy Windows ML - sample and documentation available [here](https://github.com/microsoft/Windows-Machine-Learning/tree/master/Samples/RustSqueezenet)
* New APIs for additional performance control:
* IntraopNumThreads: Provides the ability to change the number of threads used in the threadpool for Intra Operator Execution for CPU operators through LearningModelSessionOptions.
* SetNamedDimensionOverrides: Provides the ability to override named input dimensions to concrete values through LearningModelSessionOptions in order to achieve better runtime performance.
* Support for additional ONNX format image type denotations - Gray8, normalized [0..1] and normalized [-1..1]
* Reduced Windows ML package size by separating debug symbols into a separate distribution package

Execution Providers
* CUDA updates
* CUDA 10.2 / cuDNN 8.0 in official package
* CUDA 11 support added and available to build from source
* CUDA Conv kernels now support asymmetric padding, fully supporting models such as YOLOv3 for improved GPU performance
* TensorRT EP updates
* Support for TensorRT 7.1
* Added TensorRT engine caching feature, enabled by setting the environment variable ORT_TENSORRT_ENGINE_CACHE_ENABLE=1 (see the sketch after this list)
* TensorRT builds now compile the Execution Provider as a separate DLL. If enabled in the build, the provider is available as a shared library. This was previously enabled for the DNNL EP (ORT 1.3); other Execution Providers will be added in the future.
* OpenVINO EP updates
* Support for OpenVINO 2020.4
* Added runtime options for VPU hardware to select specific hardware device and enable fast compilation of models.
* Enabled C# binding support for OpenVINO EP
* DirectML EP updates
* API available for Python ([build from source](https://github.com/microsoft/onnxruntime/blob/v1.5.1/BUILD.md#directml)) and C# via [Microsoft.ML.OnnxRuntime.DirectML](https://www.nuget.org/packages/Microsoft.ML.OnnxRuntime.directml)
* 7 new operators for ONNX 1.7 (opset 12): Celu, GreaterOrEqual, LessOrEqual, ArgMin/Max with select_last_index, GatherND with batch_dim, RoiAlign
* New integer data types were added to existing operators: Clip int, Max int, Min int, MaxPool int8, ReduceMin int8, ReduceMax int8, Pow int exponent
* Higher-dimension support (1D to 8D) added to these operators: ElementWise*, Activation*, Reduce*, ArgMin/ArgMax, Gather*, Scatter*, OneHot
* 64-bit support for indices on GPUs that support it: Gather, Scatter, OneHot, ArgMax/ArgMin, Cast
* Android NNAPI EP updates:
* Support for dynamic input shape
* Int32/float32/uint8 data type
* 50% more supported operators (36 total)
* Support for Uint8 static quantization
* Smaller binary size
* Lower memory consumption
* CPU fallback for Android API level 26 and below
* MiGraphX EP updates
* Added ONNX operators: GatherElements, NonZero, Equal, and Where
* Support for Boolean data type
* Improve support for existing operators:
* Asymmetric padding of AveragePool
* Multi-dimensional support for Convolution, Pooling, LRN, and BatchNormalization
* Ceil mode support for AveragePool and MaxPool
* More general approach to check whether constant folding is possible
* Improved graph partitioning logic
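To illustrate the TensorRT engine-caching switch named above, here is a Python sketch; only the environment variable quoted in these notes is used, the model path is a placeholder, and provider selection is shown with the `providers` argument of the current Python API.

```python
# Sketch: enable TensorRT engine caching via the documented env variable.
# It must be set before the EP is created; "model.onnx" is a placeholder.
import os
os.environ["ORT_TENSORRT_ENGINE_CACHE_ENABLE"] = "1"

import onnxruntime as ort
session = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)
```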

Training (RC3 release)
* New and improved API to simplify integration with PyTorch trainer code - [see instructions here](https://github.com/microsoft/onnxruntime-training-examples/tree/master/getting-started)
* Updated CUDA 11 / cuDNN 8.0 support to accelerate training on NVIDIA A100

Dependency updates
macOS binaries now require OpenMP to be installed. See [this](https://github.com/microsoft/onnxruntime/issues/5344#issuecomment-701921165) for reference.

Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:

[gwang-msft](https://github.com/gwang-msft), [snnn](https://github.com/snnn), [skottmckay](https://github.com/skottmckay), [hariharans29](https://github.com/hariharans29), [thiagocrepaldi](https://github.com/thiagocrepaldi), [tianleiwu](https://github.com/tianleiwu), [wangyems](https://github.com/wangyems), [RandySheriffH](https://github.com/RandySheriffH), [yufenglee](https://github.com/yufenglee), [SherlockNoMad](https://github.com/SherlockNoMad), [smk2007](https://github.com/smk2007), [jywu-msft](https://github.com/jywu-msft), [liqunfu](https://github.com/liqunfu), [edgchen1](https://github.com/edgchen1), [yuslepukhin](https://github.com/yuslepukhin), [tiagoshibata](https://github.com/tiagoshibata), [fdwr](https://github.com/fdwr), [ashbhandare](https://github.com/ashbhandare), [iK1D](https://github.com/iK1D), [wschin](https://github.com/wschin), [BowenBao](https://github.com/BowenBao), [zhanghuanrong](https://github.com/zhanghuanrong), [RyanUnderhill](https://github.com/RyanUnderhill), [ryanlai2](https://github.com/ryanlai2), [askhade](https://github.com/askhade), [pranavsharma](https://github.com/pranavsharma), [martinb35](https://github.com/martinb35), [suffiank](https://github.com/suffiank), [ytaous](https://github.com/ytaous), [KeDengMS](https://github.com/KeDengMS), [rayankrish](https://github.com/rayankrish), [natke](https://github.com/natke), [YUNQIUGUO](https://github.com/YUNQIUGUO), [range4life](https://github.com/range4life), [smkarlap](https://github.com/smkarlap), [zhangxiang1993](https://github.com/zhangxiang1993), [xzhu1900](https://github.com/xzhu1900), [codemzs](https://github.com/codemzs), [weixingzhang](https://github.com/weixingzhang), [stevenlix](https://github.com/stevenlix), [tracysh](https://github.com/tracysh), [mosdav](https://github.com/mosdav), [jingyanwangms](https://github.com/jingyanwangms), [tlh20](https://github.com/tlh20), [souptc](https://github.com/souptc), [orilevari](https://github.com/orilevari), [kit1980](https://github.com/kit1980), [yangchen-MS](https://github.com/yangchen-MS), [faxu](https://github.com/faxu), [fs-eire](https://github.com/fs-eire), [wenbingl](https://github.com/wenbingl), [chilo-ms](https://github.com/chilo-ms), [xkszltl](https://github.com/xkszltl), [Andrews548](https://github.com/Andrews548), [yuzawa-san](https://github.com/yuzawa-san), [MaximKalininMS](https://github.com/MaximKalininMS), [jgbradley1](https://github.com/jgbradley1), [nickfeeney](https://github.com/nickfeeney), [zhijxu-MS](https://github.com/zhijxu-MS), [Tixxx](https://github.com/Tixxx), [suryasidd](https://github.com/suryasidd), [Craigacp](https://github.com/Craigacp), [duli2012](https://github.com/duli2012), [jeffbloo](https://github.com/jeffbloo)

orttraining_rc2
