OnnxTR

Latest version: v0.6.2


0.6.2


<p align="center">
<img src="https://github.com/felixdittrich92/OnnxTR/blob/main/docs/images/logo.jpg" width="50%">
</p>

What's Changed

**NOTE:** OnnxTR v0.6.2 requires `Python >=3.10`

Bug Fixes
* [Fix] pathlib issue on Windows with older onnxruntime versions by felixdittrich92 in https://github.com/felixdittrich92/OnnxTR/pull/62



**Publicly available pre-built Docker images:** [here](https://github.com/felixdittrich92/OnnxTR/pkgs/container/onnxtr)

**OnnxTR demo:** [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Felix92/OnnxTR-OCR)

**OnnxTR model collection:** [OnnxTR Hugging Face collection](https://huggingface.co/collections/Felix92/onnxtr-66bf213a9f88f7346c90e842)

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.6.1...v0.6.2

0.6.1


What's Changed

**NOTE:** OnnxTR v0.6.1 requires `Python >=3.10`

- Small fix for custom-loaded detection models where `assume_straight_pages=False` wasn't set correctly (see the sketch after this list)
- Maintenance updates
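A minimal sketch of the affected call pattern, assuming the custom-model loading API follows the same shape as the orientation-model example in the 0.5.0 notes further down (the `linknet_resnet18` loader, the placeholder path, and the `det_arch` keyword are assumptions here, not taken from these notes):

```python
from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor, linknet_resnet18  # assumed architecture loader

# Load a custom exported detection model (placeholder path)
det_model = linknet_resnet18("<PATH_TO_CUSTOM_EXPORTED_ONNX_MODEL>")

# Prior to this fix, `assume_straight_pages=False` was not applied
# correctly when a custom detection model was loaded
predictor = ocr_predictor(det_arch=det_model, assume_straight_pages=False)

doc = DocumentFile.from_images(["sample_page.jpg"])  # placeholder input image
result = predictor(doc)
```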



**Publicly available pre-built Docker images:** [here](https://github.com/felixdittrich92/OnnxTR/pkgs/container/onnxtr)

**OnnxTR demo:** [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Felix92/OnnxTR-OCR)

**OnnxTR model collection:** [OnnxTR Hugging Face collection](https://huggingface.co/collections/Felix92/onnxtr-66bf213a9f88f7346c90e842)

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.6.0...v0.6.1

0.6.0


What's Changed

**NOTE:** OnnxTR v0.6.0 requires `Python >=3.10`

New version specifiers

To further enhance `OnnxTR` as a go-to solution for production environments, two new installation options are introduced, tailored for OpenVINO-powered deployments:

```bash
pip install "onnxtr[openvino]"
pip install "onnxtr[openvino-headless]"  # same as "onnxtr[openvino]" but with opencv-headless
```


OpenVINO™ (Open Visual Inference and Neural Network Optimization) is an open-source toolkit developed by Intel to optimize and deploy AI inference across a variety of hardware. It is specifically designed for **Intel architectures** but supports multiple hardware targets, including CPUs, GPUs, NPUs, and so on. OpenVINO is particularly well-suited for applications requiring high-performance inference, such as computer vision, natural language processing, and edge AI scenarios.

As the benchmark below shows, this provides a significant performance boost:

![Screenshot from 2024-11-23 14-30-03](https://github.com/user-attachments/assets/403612e1-c174-4dc8-ad53-da25801fdca6)
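After installing the `onnxtr[openvino]` extra, the sketch below checks that the OpenVINO execution provider is visible to `onnxruntime` and runs the default predictor; the input file name is a placeholder, and the predictor call follows the same pattern as the snippets in the 0.5.0 notes further down:

```python
import onnxruntime as ort

from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor

# With "onnxtr[openvino]" installed, onnxruntime-openvino should expose this provider
print("OpenVINOExecutionProvider" in ort.get_available_providers())

doc = DocumentFile.from_images(["sample_page.jpg"])  # placeholder input image
predictor = ocr_predictor()  # default detection + recognition models
result = predictor(doc)
print(result.render())  # plain-text export of the OCR result
```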


Publicly available pre-built Docker images added

They can be found [here](https://github.com/felixdittrich92/OnnxTR/pkgs/container/onnxtr)

OnnxTR demo

[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Felix92/OnnxTR-OCR)

[OnnxTR Hugging Face collection](https://huggingface.co/collections/Felix92/onnxtr-66bf213a9f88f7346c90e842)

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.5.1...v0.6.0

0.5.1


What's Changed

- Improved `result.synthesize()`
- Updated Hugging Face demo
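For reference, `synthesize()` renders each page of the OCR result back into an image; a minimal usage sketch (the input file name is a placeholder, and matplotlib is only used here for display):

```python
import matplotlib.pyplot as plt

from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor

doc = DocumentFile.from_images(["sample_page.jpg"])  # placeholder input image
result = ocr_predictor()(doc)

synthesized_pages = result.synthesize()  # one rendered image (numpy array) per page
plt.imshow(synthesized_pages[0])
plt.axis("off")
plt.show()
```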

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.5.0...v0.5.1

0.5.0


What's Changed

New version specifiers

To move `OnnxTR` further toward being the go-to choice for production scenarios, two new installation options were added:


pip install "onnxtr[cpu-headless]" same as "onnxtr[cpu]" but with opencv-headless
pip install "onnxtr[gpu-headless]" same as "onnxtr[gpu]" but with opencv-headless


Disable page orientation classification

* If you deal with documents that contain only small rotations (~ -45 to 45 degrees), you can disable the page orientation classification to speed up inference.
* This will only have an effect with `assume_straight_pages=False` and/or `straighten_pages=True` and/or `detect_orientation=True`.

```python
from onnxtr.models import ocr_predictor

model = ocr_predictor(assume_straight_pages=False, disable_page_orientation=True)
```


Disable crop orientation classification

* If you deal with documents that contain only horizontal text, you can disable the crop orientation classification to speed up inference.
* This will only have an effect with `assume_straight_pages=False` and/or `straighten_pages=True`.

```python
from onnxtr.models import ocr_predictor

model = ocr_predictor(assume_straight_pages=False, disable_crop_orientation=True)
```


Loading custom exported orientation classification models

Synchronized with `docTR`:


```python
from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor, mobilenet_v3_small_page_orientation, mobilenet_v3_small_crop_orientation
from onnxtr.models.classification.zoo import crop_orientation_predictor, page_orientation_predictor

custom_page_orientation_model = mobilenet_v3_small_page_orientation("<PATH_TO_CUSTOM_EXPORTED_ONNX_MODEL>")
custom_crop_orientation_model = mobilenet_v3_small_crop_orientation("<PATH_TO_CUSTOM_EXPORTED_ONNX_MODEL>")

predictor = ocr_predictor(assume_straight_pages=False, detect_orientation=True)

# Overwrite the default orientation models
predictor.crop_orientation_predictor = crop_orientation_predictor(custom_crop_orientation_model)
predictor.page_orientation_predictor = page_orientation_predictor(custom_page_orientation_model)
```


FP16 Support

* GPU-only feature (OnnxTR needs to run on GPU)
* Added a script which converts the default FP32 models to FP16 (inputs/outputs remain FP32); this further speeds up inference on GPU and lowers the required VRAM (see the sketch after this list)
* The script is available at: https://github.com/felixdittrich92/OnnxTR/blob/main/scripts/convert_to_float16.py
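For illustration only, here is a minimal sketch of the general FP32-to-FP16 conversion technique using `onnx` and `onnxconverter-common`; this is not the linked script itself, and the file names are placeholders:

```python
import onnx
from onnxconverter_common import float16

# Load an FP32 ONNX model (placeholder path)
model_fp32 = onnx.load("model_fp32.onnx")

# Convert the weights to FP16 while keeping the model inputs/outputs in FP32,
# matching the behavior described above
model_fp16 = float16.convert_float_to_float16(model_fp32, keep_io_types=True)

onnx.save(model_fp16, "model_fp16.onnx")
```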

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.4.1...v0.5.0

0.4.1


What's Changed

- Fix: pages processed with `straighten_pages=True` are now also displayed correctly with `.show()` (see the sketch after this list)
- Added numpy 2.0 support
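A minimal sketch of the affected call pattern (the input file name is a placeholder):

```python
from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor

doc = DocumentFile.from_images(["sample_page.jpg"])  # placeholder input image
predictor = ocr_predictor(straighten_pages=True)
result = predictor(doc)

result.show()  # the straightened pages are now displayed correctly
```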


New Contributors
* dependabot made their first contribution in https://github.com/felixdittrich92/OnnxTR/pull/17

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.4.0...v0.4.1
