onnxconverter-common

Latest version: v1.14.0


1.13.0

Improvements and new features:
- upgrade version number to 1.13
- add warning message when fp32 values are truncated to fp16 (246)
- update tsaoptions to align with security review (244)
- add new test cases (243)
- add RandomUniformLike op to the fp16 converter block list (239)
- shorten the CI pipeline by reducing the matrix size (237)
- delay importing onnxruntime to avoid ImportError when onnxruntime is not needed (235)
- create and update OneBranchPipeline-Official.yml for security review (232, 226, 223)
- add 3 description files for OSS (230)
- add auto_mixed_precision_model_path function for large models (larger than 2 GB) (217, 230)
- fix Resize op fp16 conversion issue (212)
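
The auto_mixed_precision_model_path entry above targets models larger than 2 GB that cannot be converted fully in memory; the in-memory helper documented in the project README follows the same pattern. A minimal sketch, assuming auto_convert_mixed_precision is importable from onnxconverter_common.auto_mixed_precision; the model path and input name are placeholders:

```python
# Minimal sketch of automatic mixed-precision conversion with validation.
# "model.onnx" and the input name "input" are placeholders; the path-based
# variant named in the entry above follows the same idea for >2 GB models.
import numpy as np
import onnx
from onnxconverter_common.auto_mixed_precision import auto_convert_mixed_precision

model = onnx.load("model.onnx")
test_feed = {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}

# Nodes whose outputs drift beyond rtol/atol on the test feed are kept in fp32.
model_fp16 = auto_convert_mixed_precision(
    model, test_feed, rtol=0.01, atol=0.001, keep_io_types=True
)
onnx.save(model_fp16, "model_fp16.onnx")
```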

Closed issues:
245 242 241 238 229 227 220 200 218 215 213 211 208 207 196 195 171 150

1.9.0

Opset 14
* Upgrade to opset 14 (183)
* Fix RNN version in opset 14 change (186)

Opset 15
* Update max supported opset to 15 (198)

float16
* Temporarily disable fp16 test (185)
* Add op_block_list arg to float16 converter (190)
* Add node_block_list to fp16 conversion script (191)
* Added script for automatic conversion to float16, excluding the minim… (193)
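
The op_block_list and node_block_list arguments listed above let callers keep chosen op types or individually named nodes in float32 during fp16 conversion. A minimal sketch, assuming convert_float_to_float16 accepts these keyword arguments as the entries describe; the model path and node name are placeholders:

```python
# Sketch of fp16 conversion with block lists. "model.onnx" and
# "final_matmul" are placeholders for your own model and node name.
import onnx
from onnxconverter_common import float16

model = onnx.load("model.onnx")
model_fp16 = float16.convert_float_to_float16(
    model,
    op_block_list=["Resize", "InstanceNormalization"],  # op types kept in fp32
    node_block_list=["final_matmul"],                   # named nodes kept in fp32
)
onnx.save(model_fp16, "model_fp16.onnx")
```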

onnx2py
* Fix onnx2py for new onnx package (177)
* Fix onnx2py to avoid making long paths (192)
* Fix onnx2py for seq types (194)

Bug fixes and CI
* Replace 'output' with 'input' in RuntimeError (182)
* Create nightly CI (184)
* Fix nightly CI names (187)
* Fix nightly CI (188)
* Try to fix nightly CI again (189)
* Add InstanceNormalization op to DEFAULT_OP_BLOCK_LIST (197)
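
The last entry extends the converter's built-in block list, so InstanceNormalization stays in float32 unless a caller overrides it. A quick sketch, assuming DEFAULT_OP_BLOCK_LIST is exposed by onnxconverter_common.float16 as the entry implies:

```python
# Inspect the op types kept in float32 by default during fp16 conversion;
# after this release the list should include "InstanceNormalization".
from onnxconverter_common import float16

print(float16.DEFAULT_OP_BLOCK_LIST)
```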

1.8.1

1.8.0

API

* Initialize container node_domain_version_pair_sets (123)
* Fix handling onnx model opset after creating Graph (125)
* Add CumSum to black list and fix duplicated node name issue (127)
* Add Softsign activation function. (135)
* Enforce model to graph opset (145)
* Upgrade op_version to pass onnx initializer checker (146)
* Add support for complex number, unsigned integers (147)
* Add hummingbird installed method (134)
* Set keepdims=1 as default for ReduceSum (157)
* Fix rank shift in apply_reducesum and _apply_squeeze_unsqueeze (158)
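
The keepdims change above affects output rank: with keepdims=1 the reduced axis is retained with size 1 rather than dropped, so the output rank matches the input rank. A small numpy illustration of the attribute's semantics (not the converter API itself):

```python
# Illustrates what keepdims=1 means for a ReduceSum-style reduction.
import numpy as np

x = np.ones((2, 3, 4), dtype=np.float32)
print(np.sum(x, axis=1, keepdims=True).shape)   # (2, 1, 4) -- keepdims=1
print(np.sum(x, axis=1, keepdims=False).shape)  # (2, 4)    -- keepdims=0
```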

Opset 13

* Update default values for opset 13 (160)
* Update to opset 13 (156)
* Bump DEFAULT_OPSET_NUMBER = 13 (159)

Optimizer

* (Optimizer) Remove Matmul from broadcast op (129)
* (Optimizer) Refine the onnx_fx and optimizer code. (130)
* Handle len(pred_0.tensors) == 0 in is_same_node_merge (133)
* Handle Split op in is_same_node_merge (136)
* Fix next.precedence range(1, 5) case in ConvBatchNormOptimizer (137)
* Add a matmul optimization (138)
* Pass Max/Min for PushTransposeSolution (139)
* Support the sub graph and constant in const folding (122)

PushTranspose

* Combine TransposeOptimizer and PushTransposeOptimizer into one (131)
* PushTranspose optimizer for LSTM - Squeeze (128)
* Fix PushTransposeSolution for a node_transpose_no_pass case (140)
* Fix MergeOptimizer for the case Transpose + xxx + Transpose (142)
* Handle multiple end.precedences for SwapOpSolution (143)
* Skip PushTranspose when broadcast has two un-init inputs (144)

float16

* Updated float16 conversion script to maintain sign and finiteness of converted constants (153)
* Support >2GB ONNX models for fp16 conversion (167)
* Fix the version check for infer_shapes_path support (168)
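
For models above the 2 GB protobuf limit, fp16 conversion works from a file path and uses infer_shapes_path rather than loading everything into memory. A hedged sketch, assuming a convert_float_to_float16_model_path helper is exposed by the float16 module as these entries suggest; the file paths are placeholders:

```python
# Sketch of fp16 conversion for a model stored with external data (>2 GB).
# convert_float_to_float16_model_path and its keep_io_types argument are
# assumed from the entries above; paths are placeholders.
import onnx
from onnxconverter_common import float16

model_fp16 = float16.convert_float_to_float16_model_path(
    "large_model.onnx", keep_io_types=True
)
# Save with external data so the output can also exceed the 2 GB limit.
onnx.save_model(model_fp16, "large_model_fp16.onnx", save_as_external_data=True)
```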

onnx2py

* onnx2py is a tool that converts an ONNX graph into a Python script (161, 162, 164, 165, 166)
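
A hedged usage sketch: onnx2py is typically run as a module with an input model and an output script path, but that exact command-line form is an assumption here and the file names are placeholders:

```python
# Generate model.py, a Python script that rebuilds model.onnx when run.
# The positional "input output" interface is an assumption; consult the
# module's own help/docs for the real options.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "onnxconverter_common.onnx2py", "model.onnx", "model.py"],
    check=True,
)
```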

1.7

1.7.0

The major update:

2. improve the onnx optimizer
3. a new tool to create the ONNX model from a python function.
