OnnxTR

Latest version: v0.5.0


0.5.0


<p align="center">
<img src="https://github.com/felixdittrich92/OnnxTR/blob/main/docs/images/logo.jpg" width="50%">
</p>

What's Changed

New version specifiers

To push `OnnxTR` further toward being the choice for production scenarios, two new installation options were added:


pip install "onnxtr[cpu-headless]" same as "onnxtr[cpu]" but with opencv-headless
pip install "onnxtr[gpu-headless]" same as "onnxtr[gpu]" but with opencv-headless


Disable page orientation classification

* If you deal with documents that contain only small rotations (~ -45 to 45 degrees), you can disable page orientation classification to speed up inference.
* This will only have an effect with `assume_straight_pages=False` and/or `straighten_pages=True` and/or `detect_orientation=True`.

```python
from onnxtr.models import ocr_predictor

model = ocr_predictor(assume_straight_pages=False, disable_page_orientation=True)
```


Disable crop orientation classification

* If you deal with documents that contain only horizontal text, you can disable crop orientation classification to speed up inference.
* This will only have an effect with `assume_straight_pages=False` and/or `straighten_pages=True`.

```python
from onnxtr.models import ocr_predictor

model = ocr_predictor(pretrained=True, assume_straight_pages=False, disable_crop_orientation=True)
```


Loading custom exported orientation classification models

Synchronized with `docTR`:


```python
from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor, mobilenet_v3_small_page_orientation, mobilenet_v3_small_crop_orientation
from onnxtr.models.classification.zoo import crop_orientation_predictor, page_orientation_predictor

custom_page_orientation_model = mobilenet_v3_small_page_orientation("<PATH_TO_CUSTOM_EXPORTED_ONNX_MODEL>")
custom_crop_orientation_model = mobilenet_v3_small_crop_orientation("<PATH_TO_CUSTOM_EXPORTED_ONNX_MODEL>")

predictor = ocr_predictor(assume_straight_pages=False, detect_orientation=True)

# Overwrite the default orientation models
predictor.crop_orientation_predictor = crop_orientation_predictor(custom_crop_orientation_model)
predictor.page_orientation_predictor = page_orientation_predictor(custom_page_orientation_model)
```


FP16 Support

* GPU-only feature (OnnxTR needs to run on GPU)
* Added a script to convert the default FP32 models to FP16 (inputs/outputs remain FP32); this further speeds up inference on GPU and lowers the required VRAM (see the sketch after this list)
* The script is available at: https://github.com/felixdittrich92/OnnxTR/blob/main/scripts/convert_to_float16.py
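
For illustration, here is a minimal sketch of what such an FP32-to-FP16 conversion looks like with `onnxconverter-common` (an assumption about the approach; the linked script is the authoritative version, and the model filename is a placeholder):

```python
import onnx
from onnxconverter_common import float16

# Load a default FP32 model exported by OnnxTR (placeholder filename)
model = onnx.load("crnn_vgg16_bn.onnx")

# keep_io_types=True leaves the input/output tensors in FP32,
# so existing pre-/post-processing code keeps working unchanged
model_fp16 = float16.convert_float_to_float16(model, keep_io_types=True)

onnx.save(model_fp16, "crnn_vgg16_bn_fp16.onnx")
```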

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.4.1...v0.5.0

0.4.1


What's Changed

- Fix: pages processed with `straighten_pages=True` are now also displayed correctly via `.show()` (see the sketch after this list)
- Added numpy 2.0 support
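
A minimal usage sketch (assuming the default architectures; the image path is a placeholder):

```python
from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor

predictor = ocr_predictor(straighten_pages=True)
doc = DocumentFile.from_images(["<image_path>"])
result = predictor(doc)
result.show()  # the straightened pages are now rendered correctly
```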


New Contributors
* dependabot made their first contribution in https://github.com/felixdittrich92/OnnxTR/pull/17

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.4.0...v0.4.1

0.4.0


What's Changed

- Sync with current docTR state
- HF Hub integration

HuggingFace Hub integration

Now you can load and/or push models to the hub directly.

Loading

```python
from onnxtr.io import DocumentFile
from onnxtr.models import ocr_predictor, from_hub

img = DocumentFile.from_images(['<image_path>'])

# Load your model from the hub
model = from_hub('onnxtr/my-model')

# Pass it to the predictor
# If your model is a recognition model:
predictor = ocr_predictor(
    det_arch='db_mobilenet_v3_large',
    reco_arch=model
)

# If your model is a detection model:
predictor = ocr_predictor(
    det_arch=model,
    reco_arch='crnn_mobilenet_v3_small'
)

# Get your predictions
res = predictor(img)
```


Push

```python
from onnxtr.models import parseq, linknet_resnet18, push_to_hf_hub, login_to_hub
from onnxtr.utils.vocabs import VOCABS

# Login to the hub
login_to_hub()

# Recognition model
model = parseq("~/onnxtr-parseq-multilingual-v1.onnx", vocab=VOCABS["multilingual"])
push_to_hf_hub(
    model,
    model_name="onnxtr-parseq-multilingual-v1",
    task="recognition",  # The task for which the model is intended [detection, recognition, classification]
    arch="parseq",  # The name of the model architecture
    override=False  # Set to `True` if you want to override an existing model / repository
)

# Detection model
model = linknet_resnet18("~/onnxtr-linknet-resnet18.onnx")
push_to_hf_hub(
    model,
    model_name="onnxtr-linknet-resnet18",
    task="detection",
    arch="linknet_resnet18",
    override=True
)
```

HF Hub search: [here](https://huggingface.co/models?search=onnxtr).

Collection: [here](https://huggingface.co/collections/Felix92/onnxtr-66bf213a9f88f7346c90e842)


**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.3.2...v0.4.0

0.3.2



What's Changed

- Fix: Resize transformation / interpolation adjusted to match docTR (#10, #22)

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.3.1...v0.3.2

0.3.1



What's Changed

- Minor configuration fix for CUDAExecutionProvider
- Adjusted default batch sizes
- Avoid initializing `EngineConfig` multiple times

**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.3.0...v0.3.1

0.3.0



What's Changed

- Sync with current docTR state
- Added advanced options to configure the underlying execution engine
- Added new `db_mobilenet_v3_large` converted models (FP32 & 8-bit); a loading sketch follows below
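
A short loading sketch, assuming the `load_in_8_bit` flag shown in the OnnxTR README selects the 8-bit quantized variants:

```python
from onnxtr.models import ocr_predictor

# load_in_8_bit switches from the FP32 to the 8-bit quantized weights
predictor = ocr_predictor(det_arch="db_mobilenet_v3_large", load_in_8_bit=True)
```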

Advanced engine configuration

```python
from onnxruntime import SessionOptions

from onnxtr.models import ocr_predictor, EngineConfig

# For configuration options see: https://onnxruntime.ai/docs/api/python/api_summary.html#sessionoptions
general_options = SessionOptions()
general_options.enable_cpu_mem_arena = False

# NOTE: The following forces execution on the GPU only; if no GPU is available, it will raise an error.
# Providers can be a list of strings, e.g. ["CUDAExecutionProvider", "CPUExecutionProvider"],
# or a list of tuples with the provider and its options, e.g.
# [("CUDAExecutionProvider", {"device_id": 0}), ("CPUExecutionProvider", {"arena_extend_strategy": "kSameAsRequested"})]
# For available providers see: https://onnxruntime.ai/docs/execution-providers/
providers = [("CUDAExecutionProvider", {"device_id": 0})]

engine_config = EngineConfig(
    session_options=general_options,
    providers=providers
)

# Use the default predictor with the custom engine configuration
# NOTE: You can define different engine configurations for detection, recognition and classification depending on your needs
predictor = ocr_predictor(
    det_engine_cfg=engine_config,
    reco_engine_cfg=engine_config,
    clf_engine_cfg=engine_config
)
```





**Full Changelog**: https://github.com/felixdittrich92/OnnxTR/compare/v0.2.0...v0.3.0
