Inference

0.9.12rc2

Fixed hashing of text embeddings

0.9.12rc1

Release candidate with a fix to YOLO-World pre-processing

0.9.11

🚀 Added

YOLO-World in `inference`
Have you heard about the YOLO-World model? 🤔 If not, you will probably be interested to learn about it! Our [blog post 📰](https://blog.roboflow.com/what-is-yolo-world/) is a good starting point❗

The great news is that YOLO-World is already integrated with `inference`. The model performs zero-shot detection of the classes specified as an inference parameter, so you can start making videos like this right away 🚀

<div align="center">
<video src="https://github.com/roboflow/inference/assets/146137186/09ac38a9-091a-4bb3-a152-c2ba53dcf00b" />
</div>

Simply install the dependencies:
```bash
pip install inference-sdk inference-cli
```

Start the server:
```bash
inference server start
```

And run inference against our HTTP server:
```python
from inference_sdk import InferenceHTTPClient
```
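
A fuller client call might look like the following minimal sketch, assuming the server runs locally on the default port and using the `infer_from_yolo_world` helper of `inference-sdk`; the image path and class names are illustrative:

```python
from inference_sdk import InferenceHTTPClient

# assumes `inference server start` is running locally on the default port
client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")

# zero-shot detection: the class names are passed as an inference parameter
results = client.infer_from_yolo_world(
    inference_input="path/to/image.jpg",  # illustrative local image path
    class_names=["person", "dog", "car"],
)
print(results)
```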

0.9.10

🚀 Added

`inference` Benchmarking 🏃‍♂️

A new command has been added to `inference-cli` for benchmarking performance. Now you can test `inference` in different environments with different configurations and measure its performance. Watch us test the speed and scalability of hosted inference on the Roboflow platform 🤯

<div align="center">
<video src="https://github.com/roboflow/inference/assets/146137186/d6cc04e3-0590-42fa-9a79-e184711db2b2" />
</div>

Run your own benchmark with a simple command:
```bash
inference benchmark python-package-speed -m coco/3
```


[See the docs](https://inference.roboflow.com/inference_helpers/inference_cli/#inference-benchmark) for more details.

🌱 Changed
* Improved serialisation logic for requests and responses, which helps the Roboflow platform improve model monitoring

🔨 Fixed
* a bug (https://github.com/roboflow/inference/issues/260) causing `inference` API instability in multi-worker setups and when shuffling a large number of models; from now on, the API container should not raise spurious HTTP 5xx errors due to model management
* faulty logic for getting the `request_id`, which caused errors in the parallel-http container

πŸ† Contributors
paulguerrie (Paul Guerrie), SolomonLake (Solomon Lake ), robiscoding (Rob Miller) PawelPeczek-Roboflow (PaweΕ‚ PΔ™czek)

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.9...v0.9.10

0.9.10rc3

This is a pre-release version that mainly addresses some instabilities in the model manager.

What's Changed
* Add source to cache serializer by SolomonLake in https://github.com/roboflow/inference/pull/242
* Parse request/response before caching by robiscoding in https://github.com/roboflow/inference/pull/227
* Inference benchmarking by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/250


**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.9...v0.9.10rc3

0.9.9

🚀 Added

Roboflow `workflows` 🤖
A new way to create ML pipelines without writing code. Declare a sequence of models and intermediate processing steps in a JSON config and execute it using the `inference` container (or the hosted Roboflow platform). No Python code needed! 🤯 Just watch our feature preview:

<div align="center">
<video src="https://github.com/roboflow/inference/assets/146137186/66c3936b-980c-4e68-a845-fa8a20819f71" />
</div>
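
To give a flavour of the declarative format, here is an illustrative sketch of a workflow definition. The exact schema, step types, and selector syntax are described in the documentation linked below; the step and field names used here are assumptions:

```json
{
  "specification": {
    "version": "1.0",
    "inputs": [
      { "type": "InferenceImage", "name": "image" }
    ],
    "steps": [
      {
        "type": "ObjectDetectionModel",
        "name": "detector",
        "image": "$inputs.image",
        "model_id": "coco/3"
      }
    ],
    "outputs": [
      { "type": "JsonField", "name": "predictions", "selector": "$steps.detector.predictions" }
    ]
  }
}
```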

**Want to experiment more?**
```bash
pip install inference-cli

inference server start --dev
```

Hit http://127.0.0.1:9001 in your browser, then click the **`Jump Into an Inference Enabled Notebook →`** button and open the notebook named `workflows.ipynb`:

<p align="center">
<img src="https://github.com/roboflow/inference/assets/146137186/c297e751-fc11-4f62-832c-97531142dcbe" />
</p>

We encourage you to dig into our [documentation](https://github.com/roboflow/inference/tree/main/inference/enterprise/workflows) 📖 to discover the full potential of Roboflow `workflows`.

This feature is still under heavy development. **Your feedback is needed to make it better!**

Take `inference` to the cloud with one command 🚀
Yes, you got it right. The `inference-cli` package now provides a set of `inference cloud` commands to deploy the required infrastructure without effort.

Just:
```bash
pip install inference-cli
```

And depending on your needs, use:
```bash
inference cloud deploy --provider aws --compute-type gpu
```
or
```bash
inference cloud deploy --provider gcp --compute-type cpu
```

With the examples posted here, we are just scratching the surface; visit our [docs](https://inference.roboflow.com/inference_helpers/inference_cli/) 📖 where more examples are presented.


🔥 **YOLO-NAS** is coming!
* We plan to onboard YOLO-NAS to the Roboflow platform. In this release we are introducing foundation work to make that happen. Stay tuned!

[`supervision`](https://github.com/roboflow/supervision) 🤝 `inference`

We've extended the capabilities of the `inference infer` command in the `inference-cli` package. It can now run inference against images, directories of images, and videos; visualise predictions using `supervision`; and save them to a location of your choice.

<p align="center">
<img src="https://github.com/roboflow/inference/assets/146137186/90b8aa72-44cb-4cd1-9c41-42952b7bf5e9" />
</p>

**What does it take to get your predictions?**
```bash
pip install inference-cli

# start the server
inference server start

# run inference
inference infer -i {PATH_TO_VIDEO} -m coco/3 -c bounding_boxes_tracing -o {OUTPUT_DIRECTORY} -D
```

There are plenty of configuration options that can alter the visualisation. You can use predefined configs (for example, `-c bounding_boxes_tracing`) or create your own. See our [docs](https://inference.roboflow.com/inference_helpers/inference_cli/#inference-infer) 📖 to discover all the options.

🌱 Changed
* ❗ **`breaking`**: Pydantic 2: `inference` now depends on `pydantic>=2`.
* ❗ **`breaking`**: Default values of parameters (like `confidence`, `iou_threshold`, etc.) set for newer parts of `inference` (including the inference HTTP container endpoints) were aligned with the more reasonable defaults used by the hosted Roboflow platform. This makes the experience of using `inference` consistent with the Roboflow platform; it will, however, alter the behaviour of the package for clients that **do not specify** their own parameter values when making predictions. In summary: `confidence` now defaults to `0.4` and `iou_threshold` to `0.3`. We encourage clients using self-hosted containers to evaluate results on their end. The changes can be inspected [here](https://github.com/roboflow/inference/pull/234/files).
* API calls to HTTP endpoints with Roboflow models now accept a `disable_active_learning` flag that prevents Active Learning from being triggered for a specific request (see the sketch below)
* [Documentation](https://inference.roboflow.com/) 📖 was refreshed. The redesign should make the content easier to comprehend. We would love your feedback 🙏
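
A minimal sketch of passing the `disable_active_learning` flag in a direct HTTP call, assuming a self-hosted server on the default port and the `/infer/object_detection` endpoint; the model id and image URL are illustrative, so check the endpoint schema before relying on it:

```python
# a hedged sketch, not a definitive request schema
import requests

response = requests.post(
    "http://127.0.0.1:9001/infer/object_detection",
    json={
        "model_id": "coco/3",  # illustrative model id
        "image": {"type": "url", "value": "https://example.com/image.jpg"},
        "disable_active_learning": True,  # opt this single request out of Active Learning
    },
)
print(response.json())
```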


🔨 Fixed
* ❗ **`breaking`**: Fixed issue https://github.com/roboflow/inference/issues/260, a bug introduced in version [v0.9.3](https://github.com/roboflow/inference/releases/tag/v0.9.3) that caused classification models with 10 or more classes to assign the wrong `class` name to predictions (despite maintaining correct class ids); clients relying on the `class` name instead of the `class_id` of predictions were affected.
* ❗ **`breaking`**: Fixed the typo `coglvm -> cogvlm` in the `inference-sdk` HTTP client method name `prompt_cogvlm(...)`


**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.8...v0.9.9
