1. Content and background
The order and names of the TF & TFLite model output predictions could be wrong and did not match the behaviour of the ONNX model used for conversion.
2. Summary of corrections
- Order of the output Tensors (the previous implementation iterated over dict keys, which might be nondeterministic),
- Names of the output Tensors (at least when running inference on models exported with TF signatures).
Together, these fixes provide a coherent interface for running inference regardless of the model format, as the sketch below illustrates.
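The following is only a minimal sketch of that interface, assuming the `model.onnx` and `model.tf/model_float32.tflite` artifacts produced by the reproduction script in the next section: with the fix, the output names reported by ONNX Runtime can be used directly as keys on the result of the TFLite signature runner.

```python
import onnxruntime
import tensorflow as tf

# Assumed artifacts from the reproduction script below.
onnx_session = onnxruntime.InferenceSession("model.onnx")
onnx_names = [o.name for o in onnx_session.get_outputs()]  # e.g. ['add', 'sub']

interpreter = tf.lite.Interpreter(model_path="model.tf/model_float32.tflite")
runner = interpreter.get_signature_runner()
tflite_output = runner(
    x=tf.constant((10,), dtype=tf.int64),
    y=tf.constant((2,), dtype=tf.int64),
)

# After the fix, the ONNX output names index the TFLite results directly,
# in the same order as the ONNX model declares them.
for name in onnx_names:
    print(name, tflite_output[name])
```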
3. Before/After (If there is an operating log that can be used as a reference)
Script used for reproduction:
```python
import torch
import onnxruntime
import numpy as np
import onnx2tf
import tensorflow as tf
from tensorflow.lite.python import interpreter as tflite_interpreter


class Model(torch.nn.Module):
    def forward(self, x, y):
        return {
            "add": x + y,
            "sub": x - y,
        }


# Let's double check what PyTorch gives us
model = Model()
pytorch_output = model.forward(10, 2)
print("[PyTorch] Model Predictions:", pytorch_output)

# First, export the above model to ONNX
torch.onnx.export(
    Model(),
    {"x": 10, "y": 2},
    "model.onnx",
    opset_version=16,
    input_names=["x", "y"],
    output_names=["add", "sub"],
)

# And check its output
session = onnxruntime.InferenceSession("model.onnx")
onnx_output = session.run(["add", "sub"], {"x": np.array(10), "y": np.array(2)})
print("[ONNX] Model Outputs:", [o.name for o in session.get_outputs()])
print("[ONNX] Model Predictions:", onnx_output)

# Now, let's convert the ONNX model to TF
onnx2tf.convert(
    input_onnx_file_path="model.onnx",
    output_folder_path="model.tf",
    output_signaturedefs=True,
    non_verbose=True,
)

# Let's check the TensorFlow model
tf_model = tf.saved_model.load("model.tf")
tf_output = tf_model.signatures["serving_default"](
    x=tf.constant((10,), dtype=tf.int64),
    y=tf.constant((2,), dtype=tf.int64),
)
print("[TF] Model Predictions:", tf_output)

# Rerun the TFLite conversion, but from the saved model
converter = tf.lite.TFLiteConverter.from_saved_model("model.tf")
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tf_lite_model = converter.convert()
with open("model.tf/model_float32.tflite", "wb") as f:
    f.write(tf_lite_model)

# Now, test the newer TFLite model
interpreter = tf.lite.Interpreter(model_path="model.tf/model_float32.tflite")
tf_lite_runner = interpreter.get_signature_runner()
tf_lite_output = tf_lite_runner(
    x=tf.constant((10,), dtype=tf.int64),
    y=tf.constant((2,), dtype=tf.int64),
)
print("[TFLite] Model Predictions:", tf_lite_output)
```
Before:
```
[PyTorch] Model Predictions: {'add': 12, 'sub': 8}
[ONNX] Model Outputs: ['add', 'sub']
[ONNX] Model Predictions: [array(12, dtype=int64), array(8, dtype=int64)]
WARNING:absl:Please consider providing the trackable_obj argument in the from_concrete_functions. Providing without the trackable_obj argument is deprecated and it will use the deprecated conversion path.
[TF] Model Predictions: {'tf.math.add': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([12])>, 'tf.math.subtract': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([8])>}
[TFLite] Model Predictions: {'tf.math.add': array([12]), 'tf.math.subtract': array([8])}
```
After:
```
[PyTorch] Model Predictions: {'add': 12, 'sub': 8}
[ONNX] Model Outputs: ['add', 'sub']
[ONNX] Model Predictions: [array(12, dtype=int64), array(8, dtype=int64)]
WARNING:absl:Please consider providing the trackable_obj argument in the from_concrete_functions. Providing without the trackable_obj argument is deprecated and it will use the deprecated conversion path.
[TF] Model Predictions: {'add': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([12])>, 'sub': <tf.Tensor: shape=(1,), dtype=int64, numpy=array([8])>}
[TFLite] Model Predictions: {'add': array([12]), 'sub': array([8])}
```
4. Issue number (only if there is a related issue)
https://github.com/PINTO0309/onnx2tf/issues/178
What's Changed
* Fix order and name of TF model outputs by jpowie01 in https://github.com/PINTO0309/onnx2tf/pull/185
**Full Changelog**: https://github.com/PINTO0309/onnx2tf/compare/1.6.4...1.6.5