Deepparse

Latest version: v0.9.13

0.9.7

- New models released with more metadata.
- Add a feature to use an AddressParser from a URI.
- Add a feature to upload the trained model to a URI.
- Add an example of how to parse from, and upload to, a URI (see the sketch after this list).
- Improve error handling of `path_to_retrain_model`.
- Bug-fix pre-processor error.
- Add verbose override and improve verbosity handling in retraining.
- Bug-fix the broken FastText installation by using `fasttext-wheel` instead of `fasttext`
(see [here](https://github.com/facebookresearch/fastText/issues/512#issuecomment-1534519551)
and [here](https://github.com/facebookresearch/fastText/pull/1292)).
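
As a rough illustration of the URI features above, here is a minimal sketch that loads a retrained checkpoint straight from an S3-style URI; the bucket path, the `fasttext` model type, and the `path_to_retrained_model` keyword are assumptions to check against your installed version.

```python
# Minimal sketch: load a retrained deepparse model from a URI (hypothetical
# bucket and key), then parse an address. Keyword names are assumptions.
from deepparse.parser import AddressParser

address_parser = AddressParser(
    model_type="fasttext",
    path_to_retrained_model="s3://my-bucket/deepparse/retrained_fasttext.ckpt",
)

parsed_address = address_parser("350 rue des Lilas Ouest Québec Québec G1L 1B6")
print(parsed_address)
```

The same release adds the reverse direction, uploading a trained model to a URI after retraining; the exact retrain arguments for that are not shown here.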

0.9.6

- Add Python 3.11 support.
- Add pre-processor when parsing addresses.
- Add `pin_memory=True` when using a CUDA device to increase performance as suggested
by [Torch documentation](https://pytorch.org/tutorials/recipes/recipes/tuning_guide.html).
- Add `torch.no_grad()` context manager in `__call__()` to increase performance.
- Reduce memory swap between CPU and GPU by instantiating Tensor directly on the GPU device.
- Improve some warnings' clarity (i.e., category and message).
- Bug-fix macOS multiprocessing. It was unusable in multiprocess mode since we were not checking whether Torch
multiprocessing was set properly. We now set it properly and raise a warning instead of an error.
- Drop Python 3.7 support since newer Python versions are faster
and [Torch 2.0 does not support Python 3.7](https://dev-discuss.pytorch.org/t/dropping-support-for-cuda-11-6-and-python-3-7-from-pytorch-2-0-release/1021).
- Improve error handling with wrong checkpoint loading in AddressParser retrain_path use.
- Add `torch.compile` integration to improve performance (Torch 1.x is still supported) with `mode="reduce-overhead"`, as
suggested in the [documentation](https://pytorch.org/tutorials//intermediate/torch_compile_tutorial.html). It
improves performance by roughly 1% (see the sketch after this list).
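
For context on the Torch-level changes above, the snippet below shows the generic PyTorch pattern they refer to: `pin_memory=True` on the data loader, inference wrapped in `torch.no_grad()`, and `torch.compile` with `mode="reduce-overhead"`. The model and data are placeholders, not deepparse internals.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder model and data standing in for deepparse's seq2seq internals.
model = nn.Linear(300, 9).to(device)
dataset = TensorDataset(torch.randn(1024, 300))

# pin_memory=True speeds up host-to-GPU copies when a CUDA device is used.
loader = DataLoader(dataset, batch_size=256, pin_memory=torch.cuda.is_available())

# torch.compile is a PyTorch >= 2.0 API; skip it on Torch 1.x.
if hasattr(torch, "compile"):
    model = torch.compile(model, mode="reduce-overhead")

# Inference under no_grad() avoids building the autograd graph.
with torch.no_grad():
    for (batch,) in loader:
        # non_blocking pairs with pinned memory for asynchronous transfers.
        predictions = model(batch.to(device, non_blocking=True))
```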

0.9.5

- Fixed tags converter bug with the data processor.

0.9.4

- Improve codebase.

0.9.3

- Improve error handling.
- Bug-fix an unhandled FastText error in the test API.
- Add a feature to allow `new_prediction_tags` in the retrain CLI (see the sketch after this list).
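
Since the new prediction tags feature mirrors the Python retraining API, here is a rough sketch of that API path; the `prediction_tags` keyword, the tag names, the dataset path, and the retrain arguments are assumptions to adapt to your data (the CLI flag itself is not shown).

```python
# Minimal retraining sketch with custom prediction tags; names and paths are
# placeholders, and deepparse expects an "EOS" (end-of-sequence) tag.
from deepparse.dataset_container import PickleDatasetContainer
from deepparse.parser import AddressParser

new_tags = {"StreetNumber": 0, "StreetName": 1, "Municipality": 2, "EOS": 3}

training_container = PickleDatasetContainer("./my_training_dataset.p")

address_parser = AddressParser(model_type="fasttext", device=0)
address_parser.retrain(
    training_container,
    train_ratio=0.8,
    epochs=5,
    batch_size=32,
    prediction_tags=new_tags,
)
```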

0.9.2

- Improve Deepparse server error handling and error output.
- Remove deprecated argument `saving_dir` in `download_fasttext_magnitude_embeddings`
and `download_fasttext_embeddings` functions.
- Add an `offline` argument to skip verification of the latest version.
- Bug-fix cache handling in the model download.
- Add the `download_models` CLI function (see the sketch after this list).
- [Temporary hot-fix BPEmb SSL certificate error](https://github.com/GRAAL-Research/deepparse/issues/156).
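
To show how the `offline` argument and the `download_models` CLI fit together, here is a minimal offline-usage sketch; the `offline` keyword and the idea of pre-fetching with `download_models` come from the items above, but the exact invocation should be checked against your installed version.

```python
# Offline-usage sketch: models are assumed to have been pre-fetched once
# (e.g. with the `download_models` CLI added in this release) while a network
# connection was available.
from deepparse.parser import AddressParser

# offline=True skips the latest-version verification, so no network call is made.
address_parser = AddressParser(model_type="bpemb", offline=True)

print(address_parser("350 rue des Lilas Ouest Québec Québec G1L 1B6"))
```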
