onnx2tf

Latest version: v1.26.3


2.16.1

- Keras 3.0 is now the default Keras version; you may need to update your scripts to use Keras 3.0.
- Please refer to the new Keras 3.0 documentation (https://keras.io/keras_3).
- This release experimentally verifies the upgrade to `TensorFlow==2.16.1`.

What's Changed
* [experimental] tensorflow==2.16.1, Keras 3.0 by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/609


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.19.16...1.20.0

2.13.0rc0

- [[TODO] Verification of operation when upgrading to TensorFlow v2.13.0 #348](https://github.com/PINTO0309/onnx2tf/issues/348)

What's Changed
* TF v2.12.0 -> TF v2.13.0-rc0 by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/363


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.11.11...1.12.0

2.12.0rc0

```
-okv3, --output_keras_v3: Optional[bool]
    Output model in Keras (keras_v3) format.
```

![image](https://user-images.githubusercontent.com/33194443/218926250-5f3b6ecc-c05d-462f-8ceb-8b53c2feb335.png)
![image](https://user-images.githubusercontent.com/33194443/218926343-e750ef5b-61ba-40ca-86c9-a17f9d3561be.png)
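For example, a model can be converted and written out in the new format as follows (`model.onnx` is a placeholder file name):

```bash
onnx2tf -i model.onnx -okv3
```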

What's Changed
* [Experimental / WIP] Generate a miniscule model for each OP only, retain the results of test inference, and propagate to all OPs. STEP.1,2 by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/184
* TF v2.12.0rc0 - 1. `Transpose` now supports 6D tensors. 2. Support for `Atan2`. by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/189
* Support for `keras_v3` format by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/190


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.7...1.7.0

1.26.3

- `MatMul`: Fix incorrect tensor expansion in the `MatMul` operation

1. Content and background
The `MatMul` operation handled 1-dimensional tensors incorrectly by expanding
the wrong input tensor. Given a 1D `input_tensor_1` (shape `[256]`), it
erroneously expanded `input_tensor_2` (shape `[256, 254]`) instead of
`input_tensor_1`, leading to incorrect shape transformations.

2. Summary of corrections
Changed:
```python
input_tensor_1 = tf.expand_dims(input_tensor_2, axis=0)
```
to:
```python
input_tensor_1 = tf.expand_dims(input_tensor_1, axis=0)
```

This ensures the correct tensor is expanded when handling 1D inputs.


3. Before/After
Before:
- Input1 shape: `[256]` -> incorrectly became `[1, 256, 254]`
- Input2 shape: `[256, 254]` remained unchanged

After:
- Input1 shape: `[256]` -> correctly becomes `[1, 256]`
- Input2 shape: `[256, 254]` remains unchanged
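
A minimal standalone sketch of the corrected promotion (illustrative only; the variable names mirror the snippet above, not onnx2tf internals):

```python
import tensorflow as tf

# Corrected 1-D promotion: expand the 1-D operand itself, not the other input.
input_tensor_1 = tf.zeros([256])        # 1-D left operand
input_tensor_2 = tf.zeros([256, 254])   # 2-D right operand

if input_tensor_1.shape.rank == 1:
    # [256] -> [1, 256], matching the "After" behavior above
    input_tensor_1 = tf.expand_dims(input_tensor_1, axis=0)

result = tf.matmul(input_tensor_1, input_tensor_2)
print(result.shape)  # (1, 254)
```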


What's Changed
* renamed replace_GRU.json to allow cloning to Windows by kwikwag in https://github.com/PINTO0309/onnx2tf/pull/718
* Bugfix in MatMul.py by oesi333 in https://github.com/PINTO0309/onnx2tf/pull/725

New Contributors
* kwikwag made their first contribution in https://github.com/PINTO0309/onnx2tf/pull/718
* oesi333 made their first contribution in https://github.com/PINTO0309/onnx2tf/pull/725

**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.26.2...1.26.3

1.26.2

- Supports multi-batch quantization of image inputs (see the calibration-data sketch below).

```bash
onnx2tf \
-i batch_size_2.onnx \
-oiqt \
-cind images test.npy [[[[0.485,0.456,0.406]]]] [[[[0.229,0.224,0.225]]]]
```

![image](https://github.com/user-attachments/assets/e7683b5e-e06d-4162-ad36-c77ffd313e26)
![image](https://github.com/user-attachments/assets/3d90446b-4647-41e5-9916-6f6c36975b0d)
- [Problem of quantization with batch_size 2 #714](https://github.com/PINTO0309/onnx2tf/issues/714)
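
A minimal sketch for producing the `test.npy` calibration file passed to `-cind`; the NHWC layout, 224x224 resolution, and sample count here are assumptions to adapt to the actual input of `batch_size_2.onnx`:

```python
import numpy as np

# Hypothetical calibration dataset: 10 samples, each a batch of 2 NHWC images.
# Adjust the resolution and layout to match the converted model's input.
calib = np.random.rand(10, 2, 224, 224, 3).astype(np.float32)
np.save('test.npy', calib)
```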

What's Changed
* Multi batch quant by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/715


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.26.1...1.26.2

1.26.1

- Added `Float32` as an option for input and output types after quantization.
```bash
-iqd {int8,uint8,float32}, --input_quant_dtype {int8,uint8,float32}
    Input dtypes when doing Full INT8 Quantization.
    "int8"(default) or "uint8" or "float32"

-oqd {int8,uint8,float32}, --output_quant_dtype {int8,uint8,float32}
    Output dtypes when doing Full INT8 Quantization.
    "int8"(default) or "uint8" or "float32"
```


What's Changed
* Comment fix. `input_quant_dtype`, `output_quant_dtype` by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/706
* Update `replace_slice.json` reference by emmanuel-ferdman in https://github.com/PINTO0309/onnx2tf/pull/708
* Fixed mistake of 710 by marcoschepis in https://github.com/PINTO0309/onnx2tf/pull/711
* Added `Float32` option by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/712

New Contributors
* emmanuel-ferdman made their first contribution in https://github.com/PINTO0309/onnx2tf/pull/708

**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.26.0...1.26.1
