onnx2tf

Latest version: v1.27.1

1.22.0

- `Docker Image` (arm64, Apple Silicon Mac)
The distributed Docker image is not compatible with ARM environments such as Apple Silicon Macs.
Although Docker offers emulation of x86/amd64 environments, running onnx2tf under this emulation fails with the error shown below:

```bash
root@39d07181ce27:/# onnx2tf -h
Illegal instruction
```

This issue is suspected to be caused by the PyPI TensorFlow package's dependence on x86-specific instruction sets.

To address this, I've augmented the GitHub Actions release process to build and push Docker images for the Arm64 architecture. This makes it possible to run onnx2tf with Docker on Arm64 hosts.

- Update Dockerfile
  - Extra dependencies (`pkg-config` and `libhdf5-dev`) are required to install h5py for the arm64 Docker build.
- Add Arm64 build and push in GitHub Actions
  - Referenced the following URL for guidance: https://docs.docker.com/build/ci/github-actions/multi-platform/

What's Changed
* Add arm64 docker image by ysohma in https://github.com/PINTO0309/onnx2tf/pull/636


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.21.6...1.22.0

1.21.6

- `MatMulInteger`
Currently, `MatMulInteger` is implemented as a TF matmul with int32 inputs/outputs, which leads to the generation of Flex(Batch)MatMul ops.

When `-rtpo MatMulInteger` is specified, the inputs of `MatMulInteger` are cast to float32 instead, allowing the node to be converted to the builtin FullyConnected or BatchMatMul ops, as illustrated by the sketch below.
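For illustration, here is a minimal TensorFlow sketch (not onnx2tf's actual implementation) contrasting the two lowerings:

```python
import numpy as np
import tensorflow as tf

# Two int tensors standing in for MatMulInteger's quantized inputs.
a = tf.constant(np.random.randint(-128, 128, (1, 4, 8)), dtype=tf.int32)
b = tf.constant(np.random.randint(-128, 128, (1, 8, 16)), dtype=tf.int32)

# Default lowering: int32 matmul. TFLite has no builtin kernel for this,
# so the converter falls back to Flex(Batch)MatMul.
flex_out = tf.linalg.matmul(a, b)

# With `-rtpo MatMulInteger`: cast to float32 first, so the node can be
# lowered to the builtin FullyConnected / BatchMatMul ops instead.
builtin_out = tf.linalg.matmul(tf.cast(a, tf.float32), tf.cast(b, tf.float32))
```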

ONNX input:
![image](https://github.com/PINTO0309/onnx2tf/assets/25856103/25c91c62-1b3e-4ca4-b541-1cd59db7f39b)

Before:
![Screenshot_20240517_202911](https://github.com/PINTO0309/onnx2tf/assets/25856103/dbb7386f-f650-4803-873e-d1517bde445e)

After:
![image](https://github.com/PINTO0309/onnx2tf/assets/25856103/caaf48b0-8385-47c0-a134-5e21d0644df0)

What's Changed
* support suppressing flex ops for MatMulInteger by DDoSolitary in https://github.com/PINTO0309/onnx2tf/pull/635


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.21.5...1.21.6

1.21.4

- Add `-odrqt, --output-dynamic-range-quantized-tflite` option.
While `output_integer_quantized_tflite` already enables dynamic range quantized output, that option also triggers checks for calibration data, which is only required for full integer quantization, and therefore fails when no calibration data is provided.

This is undesirable if only dynamic range quantization is wanted.

A new option (`-odrqt, --output-dynamic-range-quantized-tflite`) is added to enable only the dynamic range quantized output, which does not need calibration data.
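For reference, a minimal sketch of what dynamic range quantization corresponds to in the TensorFlow Lite converter API (the `saved_model` path here is an assumption):

```python
import tensorflow as tf

# Dynamic range quantization: Optimize.DEFAULT *without* a
# representative_dataset. Full integer quantization would additionally
# require calibration data via converter.representative_dataset.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_dynamic_range.tflite", "wb") as f:
    f.write(tflite_model)
```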

Before:

```
$ onnx2tf -i some_model_with_non_regular_input_shape.onnx -oiqt
(other output omitted)
Model conversion started ============================================================
INFO: input_op_name: input shape: [1] dtype: float32
ERROR: For INT8 quantization, the input data type must be Float32. Also, if --custom_input_op_name_np_data_path is not specified, all input OPs must assume 4D tensor image data. INPUT Name: input INPUT Shape: [1] INPUT dtype: float32
```

After:

```
$ onnx2tf -i some_model_with_non_regular_input_shape.onnx -odrqt
(other output omitted)
saved_model output started ==========================================================
saved_model output complete!
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
W0000 00:00:1715853625.734342 7691 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1715853625.734397 7691 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
Float32 tflite output complete!
W0000 00:00:1715853629.274694 7691 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1715853629.274724 7691 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
Float16 tflite output complete!
W0000 00:00:1715853631.535535 7691 tf_tfl_flatbuffer_helpers.cc:390] Ignored output_format.
W0000 00:00:1715853631.535568 7691 tf_tfl_flatbuffer_helpers.cc:393] Ignored drop_control_dependency.
Dynamic Range Quantization tflite output complete!
```


What's Changed
* add option to enable only dynamic range quant by DDoSolitary in https://github.com/PINTO0309/onnx2tf/pull/633


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.21.3...1.21.4

1.21.3

- Significantly faster flatbuffer update speed with the `-coion` option
Currently, the `copy_onnx_input_output_names_to_tflite` flag converts the tflite model to JSON for modification and then converts it back. For large models, these conversions take a long time and consume a large amount of disk space.

Flatbuffers provides a Python API that allows reading and writing model files directly as Python objects. Running `flatc --python --gen-object-api` against the downloaded schema file generates this object API, which can then be used to read the model, add the signature defs, and write it back, as sketched below.
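A minimal sketch of the approach, assuming classes were generated from TFLite's schema.fbs with `flatc --python --gen-object-api` (the `tflite` package name follows the schema's namespace; this is not the PR's exact code):

```python
import flatbuffers
from tflite.Model import Model, ModelT

with open("model.tflite", "rb") as f:
    buf = f.read()

# Unpack the flatbuffer into mutable Python objects -- no JSON round trip.
model = ModelT.InitFromObj(Model.GetRootAsModel(buf, 0))

# ... modify the model here, e.g. add SignatureDefs carrying the ONNX
# input/output names ...

# Repack the objects into a new flatbuffer and write it back out.
builder = flatbuffers.Builder(0)
builder.Finish(model.Pack(builder), file_identifier=b"TFL3")
with open("model.tflite", "wb") as f:
    f.write(bytes(builder.Output()))
```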

What's Changed
* use flatbuffers python api for coion by DDoSolitary in https://github.com/PINTO0309/onnx2tf/pull/631
* flatbuffers 631 by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/632

New Contributors
* DDoSolitary made their first contribution in https://github.com/PINTO0309/onnx2tf/pull/631

**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.21.2...1.21.3

1.21.2

- `GatherElements`
- Added automatic error correction.
- [convnext-det.onnx.zip](https://github.com/PINTO0309/onnx2tf/files/15314366/convnext-det.onnx.zip)
![image](https://github.com/PINTO0309/onnx2tf/assets/33194443/fa899d2c-4569-4072-8780-8864b49a88d2)
- [Quantized ConvNext 629](https://github.com/PINTO0309/onnx2tf/issues/629)

What's Changed
* Added automatic error correction for `GatherElements` by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/630


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.21.1...1.21.2

1.21.1

- `Constant`
- Bring `Constant` layers that are not connected to the model into the model.
- It is assumed that the `-nuo` option is specified, because running `onnxsim` would remove the constants from the ONNX file.
- Wrap constants in a `Lambda` layer and force them into the model (see the minimal sketch at the end of this section).
- [toy_with_constant.onnx.zip](https://github.com/PINTO0309/onnx2tf/files/15292126/toy_with_constant.onnx.zip)
- Convert test
```bash
onnx2tf -i toy_with_constant.onnx -nuo -cotof
```

|ONNX|TFLite|
|:-:|:-:|
|![image](https://github.com/PINTO0309/onnx2tf/assets/33194443/ab2260bd-aa41-4522-ad4b-83567d094edf)|![image](https://github.com/PINTO0309/onnx2tf/assets/33194443/7efc5d4d-ec6f-4b16-8d28-b1389a469df7)|

![image](https://github.com/PINTO0309/onnx2tf/assets/33194443/f8d336f0-e99e-4125-b8f1-bed3960aef7e)

- Inference test
```python
import tensorflow as tf
import numpy as np
from pprint import pprint

interpreter = tf.lite.Interpreter(model_path="saved_model/toy_with_constant_float32.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

interpreter.set_tensor(
    tensor_index=input_details[0]['index'],
    value=np.ones(tuple(input_details[0]['shape']), dtype=np.float32)
)
interpreter.invoke()

variable_output = interpreter.get_tensor(output_details[0]['index'])
constant_output = interpreter.get_tensor(output_details[1]['index'])

print("=================")
print("Variable Output:")
pprint(variable_output)
print("=================")
print("Constant Output:")
pprint(constant_output)
```


```
=================
Variable Output:
array([[-0.02787317, -0.05505124,  0.05421712,  0.03526559, -0.14131774,
         0.0019211 ,  0.08399964,  0.00433664, -0.00984338, -0.03370604]],
      dtype=float32)
=================
Constant Output:
array([1., 2., 3., 4., 5.], dtype=float32)
```

- [Constant outputs removed from ONNX during conversion 627](https://github.com/PINTO0309/onnx2tf/issues/627)
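
For illustration, a minimal Keras sketch of the Lambda-wrapping idea (a hypothetical toy model, not the converter's internal code):

```python
import numpy as np
import tensorflow as tf

# An unconnected constant, wrapped in a Lambda layer so that it is forced
# into the graph as a model output.
const_value = np.array([1., 2., 3., 4., 5.], dtype=np.float32)

inputs = tf.keras.Input(shape=(10,))
variable_output = tf.keras.layers.Dense(10)(inputs)
# The Lambda ignores its input and simply emits the constant.
constant_output = tf.keras.layers.Lambda(lambda x: tf.constant(const_value))(inputs)

model = tf.keras.Model(inputs=inputs, outputs=[variable_output, constant_output])
```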
