onnx2tf


1.6.7

- Fixed a bug where an error was caused by referencing NumPy data that should not exist as an output OP in TensorFlow.
- [185](https://github.com/PINTO0309/onnx2tf/issues/185)
- https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/yolact_regnetx_600mf_d2s_31classes_512x512.onnx
- https://github.com/PINTO0309/onnx2tf/releases/download/1.1.28/yolact_regnetx_800mf_20classes_512x512.onnx

What's Changed
* Fixed a bug that caused an error by referencing Numpy data that should not exist as output OP in TensorFlow. 185 by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/188


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.6...1.6.7

1.6.6

- Escape `:` and `/` when rewriting `saved_model` output names, as shown in the snippet and the illustrative example below.
- [Conversion fails with ONNX with : in node names 186](https://github.com/PINTO0309/onnx2tf/issues/186)

```python
import re

# Bring back output names from the ONNX model
for output, name in zip(outputs, output_names):
    output.node.layer._name = name.replace(':', '_')
    if output_signaturedefs:
        output.node.layer._name = re.sub('^/', '', output.node.layer._name)
```
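
For reference, this is how the escaping behaves on a couple of made-up ONNX output names (illustrative only):

```python
import re

# Made-up names; ONNX graphs exported from PyTorch often contain ':' and '/'.
for name in ["output:0", "/model/add:0"]:
    escaped = name.replace(':', '_')
    escaped = re.sub('^/', '', escaped)  # leading '/' is stripped when output_signaturedefs is enabled
    print(name, "->", escaped)
# output:0 -> output_0
# /model/add:0 -> model/add_0
```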


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.5...1.6.6

1.6.5

1. Content and background

The order and names of TF and TFLite model output predictions could be wrong and did not match the behaviour of the ONNX model used for conversion.

2. Summary of corrections

- Order of the output Tensors (the previous implementation iterated over dict keys, which might be nondeterministic),
- Name of the output Tensors (at least when running inference on models exported with TF signatures).

Both of these fixes provide a coherent interface for running inference regardless of the model format.
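
The following sketch is illustrative only (not the actual onnx2tf internals); it contrasts relying on dict-key iteration with explicitly following the ONNX graph's declared output order:

```python
# Hypothetical illustration: dict-key iteration vs. explicit ONNX output order.
onnx_output_names = ["add", "sub"]  # order as declared in the ONNX graph
converted = {"sub": "sub_tensor", "add": "add_tensor"}  # arbitrary construction order

# Fragile: reflects the order in which the dict happened to be built.
by_dict_keys = list(converted.values())  # ['sub_tensor', 'add_tensor']

# Deterministic: follow the ONNX declaration order explicitly.
by_onnx_order = [converted[name] for name in onnx_output_names]
print(by_onnx_order)  # ['add_tensor', 'sub_tensor']
```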

3. Before/After (If there is an operating log that can be used as a reference)

Script used for reproduction:

```python
import torch
import onnxruntime
import numpy as np
import onnx2tf
import tensorflow as tf
from tensorflow.lite.python import interpreter as tflite_interpreter


class Model(torch.nn.Module):
    def forward(self, x, y):
        return {
            "add": x + y,
            "sub": x - y,
        }


# Let's double check what PyTorch gives us
model = Model()
pytorch_output = model.forward(10, 2)
print("[PyTorch] Model Predictions:", pytorch_output)

# First, export the above model to ONNX
torch.onnx.export(
    Model(),
    {"x": 10, "y": 2},
    "model.onnx",
    opset_version=16,
    input_names=["x", "y"],
    output_names=["add", "sub"],
)

# And check its output
session = onnxruntime.InferenceSession("model.onnx")
onnx_output = session.run(["add", "sub"], {"x": np.array(10), "y": np.array(2)})
print("[ONNX] Model Outputs:", [o.name for o in session.get_outputs()])
print("[ONNX] Model Predictions:", onnx_output)

# Now, let's convert the ONNX model to TF
onnx2tf.convert(
    input_onnx_file_path="model.onnx",
    output_folder_path="model.tf",
    output_signaturedefs=True,
    non_verbose=True,
)

# Let's check the TensorFlow model
tf_model = tf.saved_model.load("model.tf")
tf_output = tf_model.signatures["serving_default"](
    x=tf.constant((10,), dtype=tf.int64),
    y=tf.constant((2,), dtype=tf.int64),
)
print("[TF] Model Predictions:", tf_output)

# Rerun the TFLite conversion, but from the saved model
converter = tf.lite.TFLiteConverter.from_saved_model("model.tf")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tf_lite_model = converter.convert()
with open("model.tf/model_float32.tflite", "wb") as f:
    f.write(tf_lite_model)

# Now, test the newer TFLite model
interpreter = tf.lite.Interpreter(model_path="model.tf/model_float32.tflite")
tf_lite_runner = interpreter.get_signature_runner()
tf_lite_output = tf_lite_runner(
    x=tf.constant((10,), dtype=tf.int64),
    y=tf.constant((2,), dtype=tf.int64),
)
print("[TFLite] Model Predictions:", tf_lite_output)
```


Before:

```
[PyTorch] Model Predictions: {'add': 12, 'sub': 8}
[ONNX] Model Outputs: ['add', 'sub']
[ONNX] Model Predictions: [array(12, dtype=int64), array(8, dtype=int64)]
WARNING:absl:Please consider providing the trackable_obj argument in the from_concrete_functions. Providing without the trackable_obj argument is deprecated and it will use the deprecated conversion path.
[TF] Model Predictions: {'tf.math.add': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([12])>, 'tf.math.subtract': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([8])>}
[TFLite] Model Predictions: {'tf.math.add': array([12]), 'tf.math.subtract': array([8])}
```


After:

```
[PyTorch] Model Predictions: {'add': 12, 'sub': 8}
[ONNX] Model Outputs: ['add', 'sub']
[ONNX] Model Predictions: [array(12, dtype=int64), array(8, dtype=int64)]
WARNING:absl:Please consider providing the trackable_obj argument in the from_concrete_functions. Providing without the trackable_obj argument is deprecated and it will use the deprecated conversion path.
[TF] Model Predictions: {'add': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([12])>, 'sub': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([8])>}
[TFLite] Model Predictions: {'add': array([12]), 'sub': array([8])}
```


4. Issue number (only if there is a related issue)

https://github.com/PINTO0309/onnx2tf/issues/178

What's Changed
* Fix order and name of TF model outputs by jpowie01 in https://github.com/PINTO0309/onnx2tf/pull/185


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.4...1.6.5

1.6.4

- Work around a bug in TensorFlow's model optimizer affecting the `ReduceMax` -> `Subtract` -> `Softmax` pattern.
- There seems to be a rather critical bug in TensorFlow's `Softmax`.
- A fatal bug where `ReduceMax` and `Subtract` are ignored (see the sketch after this list for why that matters).
- Before
|onnx|Left: Keras .h5, Right: tflite|
|:-:|:-:|
|![image](https://user-images.githubusercontent.com/33194443/218233243-523a7f64-982d-44fd-b67b-03ca350e8913.png)|![image](https://user-images.githubusercontent.com/33194443/218233231-fc7b8fd7-507b-43b4-95d6-9bae8f53cf40.png)|
- After
![image](https://user-images.githubusercontent.com/33194443/218233371-231a3e82-87a6-4bff-a866-95d27173b506.png)
- [ReduceMax before Softmax ignored during conversion 182](https://github.com/PINTO0309/onnx2tf/issues/182)
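
Softmax is shift-invariant along its own axis, so a `ReduceMax` -> `Subtract` pair in front of it can only be removed safely when the subtracted maximum is taken over the softmax axis. A minimal NumPy sketch of that distinction (illustrative values; not the optimizer's actual logic):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=axis, keepdims=True)

x = np.array([[1.0, 2.0, 3.0],
              [4.0, 1.0, 0.0]])

# Subtracting the max over the softmax axis is a mathematical no-op...
shifted = softmax(x - x.max(axis=-1, keepdims=True), axis=-1)
print(np.allclose(shifted, softmax(x, axis=-1)))  # True

# ...but subtracting a max over a different axis changes the result, so an
# optimizer that drops ReduceMax -> Subtract unconditionally corrupts the output.
crossed = softmax(x - x.max(axis=0, keepdims=True), axis=-1)
print(np.allclose(crossed, softmax(x, axis=-1)))  # False
```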

What's Changed
* Work around a bug in TensorFlow's model optimizer. `Reducemax` -> `Subtract` -> `Softmax` by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/183


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.3...1.6.4

1.6.3

- Fixed the `Transpose` -> `Softmax` -> `Transpose` pattern so that the `Transpose` enable/disable status is carried over when this pattern is used to remove useless `Transpose` OPs from the graph and optimize it (see the sketch after this list).

- Before
![image](https://user-images.githubusercontent.com/33194443/217759939-7f75b7d6-abbc-48a8-bf84-554966fca146.png)
ONNX | TF
:-: | :-:
<img src="https://user-images.githubusercontent.com/34959032/217731925-bae71fee-257a-47c9-a245-a9b7b555f66c.png" width="150">| <img src="https://user-images.githubusercontent.com/34959032/217731953-09e75533-ef33-476e-8da1-e9e28a6140c4.png" width="150">

- After
![image](https://user-images.githubusercontent.com/33194443/217760618-35809f57-80e2-485e-8fb4-5612c2769306.png)
ONNX | TF
:-: | :-:
<img src="https://user-images.githubusercontent.com/34959032/217731925-bae71fee-257a-47c9-a245-a9b7b555f66c.png" width="150">| <img src="https://user-images.githubusercontent.com/33194443/217760365-b1f000fb-34ab-493b-9acc-1a6e406abbf8.png">

- [[FastestDet] height and width axes are switched before last concatenation 180](https://github.com/PINTO0309/onnx2tf/issues/180)
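
For reference, the reason the surrounding `Transpose` pair can be removed at all: `Transpose` -> `Softmax` -> inverse `Transpose` is equivalent to `Softmax` along the permuted axis. A minimal NumPy check (illustrative shapes; not the onnx2tf optimizer code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

x = np.random.rand(2, 3, 4)
perm = (0, 2, 1)
inv = np.argsort(perm)  # inverse permutation

# Transpose -> Softmax (last axis) -> inverse Transpose ...
y1 = np.transpose(softmax(np.transpose(x, perm), axis=-1), inv)

# ... equals Softmax along the axis that perm maps to the last position,
# so both Transpose OPs can be dropped if the Softmax axis is adjusted.
y2 = softmax(x, axis=perm[-1])
print(np.allclose(y1, y2))  # True
```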

What's Changed
* Fixed `Transpose`->`Softmax`->`Transpose` pattern to take over `Transpose` enable/disable status when `Transpose`->`Softmax`->`Transpose` pattern is used to remove useless `Transpose` from the graph and optimize it. by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/181


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.2...1.6.3

1.6.2

- Added the ability to override tflite input/output names with ONNX input/output names via the `-coion` option (a usage sketch follows this list).

```
-coion, --copy_onnx_input_output_names_to_tflite
    Copy the input/output OP name of ONNX to the input/output OP name of tflite.
    Due to Tensorflow internal operating specifications,
    the input/output order of ONNX does not necessarily match
    the input/output order of tflite.
    Be sure to check that the input/output OP names in the generated
    tflite file have been converted as expected.
    Also, this option generates a huge JSON file as a temporary file for processing.
    Therefore, it is strongly discouraged to use it on large models of hundreds
    of megabytes or more.
```

- onnx
![image](https://user-images.githubusercontent.com/33194443/217715632-940e778c-6d0c-4507-bdfb-63f0aaa92177.png)

- tflite
![image](https://user-images.githubusercontent.com/33194443/217706803-4201ad6f-83e1-48fe-aff0-a59f0c8858ad.png)

- [ONNX model input & output names are not preserved in TensorFlow & TensorFlow Lite models 178](https://github.com/PINTO0309/onnx2tf/issues/178)
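
A usage sketch, assuming the Python API mirrors the CLI flag as the keyword argument `copy_onnx_input_output_names_to_tflite` and reusing the `model.onnx` from the 1.6.5 repro above (both assumptions, not verbatim from this release):

```python
import onnx2tf
import tensorflow as tf

# Convert, copying ONNX input/output OP names into the generated tflite file.
onnx2tf.convert(
    input_onnx_file_path="model.onnx",
    output_folder_path="model.tf",
    copy_onnx_input_output_names_to_tflite=True,  # assumed kwarg form of -coion
    non_verbose=True,
)

# Check that the names were carried over (order may still differ from ONNX).
interpreter = tf.lite.Interpreter(model_path="model.tf/model_float32.tflite")
interpreter.allocate_tensors()
print([d["name"] for d in interpreter.get_input_details()])
print([d["name"] for d in interpreter.get_output_details()])
```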

What's Changed
* Added the ability to overwrite ONNX input/output names with tflite input/output names by PINTO0309 in https://github.com/PINTO0309/onnx2tf/pull/179


**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.1...1.6.2
