Inference

Latest version: v0.29.1


0.9.22

Not secure
What's Changed
* Add new endpoints for workflows and prepare for future deprecation by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/336
* Update description for workflows steps by grzegorz-roboflow in https://github.com/roboflow/inference/pull/345
* Add error status code to benchmark output by grzegorz-roboflow in https://github.com/roboflow/inference/pull/351
* Add more test cases to cover tests/inference/unit_tests/core/utils/test_postprocess.py::post_process_polygons by grzegorz-roboflow in https://github.com/roboflow/inference/pull/352
* Inference TensorRT execution provider container revival by probicheaux in https://github.com/roboflow/inference/pull/347
* Bugfix for gaze detection (batch request) by PacificDou in https://github.com/roboflow/inference/pull/358
* Allow alternate video sources by sberan in https://github.com/roboflow/inference/pull/348
* Skip encode image as jpeg if no-resize is specified by PacificDou in https://github.com/roboflow/inference/pull/359

New Contributors
* grzegorz-roboflow made their first contribution in https://github.com/roboflow/inference/pull/345

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.20...v0.9.22

0.9.20

Not secure
What's Changed
* Bump version for pypi wheels

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.19...v0.9.20

0.9.19

GroundingDINO bugfixes and enhancements!

* Allows users to pass custom `box_threshold` and `text_threshold` params to the Grounding DINO core model.
* Updates docs to reflect the `box_threshold` and `text_threshold` params.
* Fixes an error by filtering out detections where text similarity is lower than `text_threshold` and Grounding DINO returns `None` for the class ID.
* Fixes images passed to the Grounding DINO model being loaded as RGB instead of BGR.
* Adds NMS to Grounding DINO, optionally using class-agnostic NMS via the `CLASS_AGNOSTIC_NMS` env var.
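For context on that last point: class-agnostic NMS suppresses overlapping boxes regardless of their class label, while per-class NMS only compares boxes that share a class. A minimal, self-contained sketch of the difference (plain Python for illustration, not `inference`'s actual implementation):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


def nms(detections, iou_threshold=0.5, class_agnostic=False):
    """Greedy NMS over dicts with 'box', 'score' and 'class_id' keys."""
    kept = []
    for det in sorted(detections, key=lambda d: d["score"], reverse=True):
        suppressed = any(
            (class_agnostic or k["class_id"] == det["class_id"])
            and iou(k["box"], det["box"]) > iou_threshold
            for k in kept
        )
        if not suppressed:
            kept.append(det)
    return kept
```

With `class_agnostic=True`, two heavily overlapping boxes of different classes collapse to one; with `class_agnostic=False`, both survive.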

Try it out:

```python
from inference.models.grounding_dino import GroundingDINO

model = GroundingDINO(api_key="")

results = model.infer(
    {
        "image": {
            "type": "url",
            "value": "https://media.roboflow.com/fruit.png",
        },
        "text": ["apple"],
        # optional params
        "box_threshold": 0.5,
        "text_threshold": 0.5,
    }
)

print(results.json())
```


**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.18...v0.9.19

0.9.18

Not secure
🚀 Added

🎥 Multiple video sources 🤝 `InferencePipeline`
Previous versions of the `InferencePipeline` could only support a single video source. However, from now on, you can pass multiple videos into a single pipeline and have all of them processed! Here is a demo:

<video src="https://github.com/roboflow/inference/assets/146137186/0cf8338a-7fe4-4e07-83c4-600abbeb7c10"></video>

Here's how to achieve the result:

```python
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    video_reference=["your_video.mp4", "your_other_video.mp4"],
    model_id="yolov8n-640",
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()
```


There were a lot of internal changes made, but the majority of users should not experience any breaking changes. Please visit our [📖 documentation](https://inference.roboflow.com/using_inference/inference_pipeline/) to discover all the differences. If you are affected by the changes we needed to introduce, here is the [🔧 migration guide](https://inference.roboflow.com/using_inference/inference_pipeline/#migrate-to-changes-introduced-in-v0918).

Barcode detector in `workflows`
Thanks to chandlersupple, we now have the ability to detect and read barcodes in `workflows`.
<p align="center">
<img src="https://github.com/roboflow/inference/assets/146137186/5b9d2374-f90b-4c08-9b03-8b7b4f0b4ff4" width=480 />
</p>

Visit our [📖 documentation](https://inference.roboflow.com/workflows/detect_barcodes/) to see how to bring this step into your workflow.

🌱 Changed

Easier data collection in `inference` 🔥

We've introduced a new parameter handled by the `inference` server (including hosted `inference` at Roboflow platform). This parameter, called `active_learning_target_dataset`, can now be added to requests to specify the Roboflow project where collected data should be stored.

Thanks to this change, you can now collect datasets while using [Universe](https://universe.roboflow.com/) models. We've also updated the [Active Learning 📖 docs](https://inference.roboflow.com/enterprise/active-learning/active_learning/).

```python
from inference_sdk import InferenceHTTPClient, InferenceConfiguration

# prepare and set configuration
configuration = InferenceConfiguration(
    active_learning_target_dataset="my_dataset",
)
client = InferenceHTTPClient(
    api_url="https://detect.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
).configure(configuration)

# run a normal request and have your data sampled 🤯
client.infer(
    "./path_to/your_image.jpg",
    model_id="yolov8n-640",
)
```


Other changes
* Add `inference_id` to batches created by AL by robiscoding in https://github.com/roboflow/inference/pull/319
* Improvements in 📖 documentation regarding `RGB vs BGR` topic by probicheaux in https://github.com/roboflow/inference/pull/330


🔨 Fixed
Thanks to contributions from hvaria 🏅, two problems were solved:
* Ensure Graceful Interruption of Benchmark Process - Fixing for Bug 313: in https://github.com/roboflow/inference/pull/325
* Better error handling in inference CLI: in https://github.com/roboflow/inference/pull/328

New Contributors
* chandlersupple made their first contribution in https://github.com/roboflow/inference/pull/311

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.17...v0.9.18

0.9.17

Not secure
🚀 Added

YOLOWorld - new versions and Roboflow hosted inference 🤯
The `inference` package now supports 5 new versions of the YOLOWorld model. We've added versions `x`, `v2-s`, `v2-m`, `v2-l`, and `v2-x`. Versions with the `v2` prefix have better performance than the previously published ones.

To use YOLOWorld in `inference`, use the following `model_id`: `yolo_world/<version>`, substituting `<version>` with one of `[s, m, l, x, v2-s, v2-m, v2-l, v2-x]`.
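The naming scheme above can be captured in a small helper. This is purely illustrative convenience code (the function and constant names are hypothetical, not part of `inference`):

```python
# valid YOLOWorld version suffixes as listed above
VALID_YOLO_WORLD_VERSIONS = {"s", "m", "l", "x", "v2-s", "v2-m", "v2-l", "v2-x"}


def yolo_world_model_id(version: str) -> str:
    """Build a YOLOWorld model_id, validating the version suffix."""
    if version not in VALID_YOLO_WORLD_VERSIONS:
        raise ValueError(f"Unknown YOLOWorld version: {version!r}")
    return f"yolo_world/{version}"
```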

You can use the models in different contexts:

Roboflow-hosted `inference` - the easiest way to get your predictions 💥

<details><summary> 💡 Please make sure you have inference-sdk installed </summary>

If you do not have the whole `inference` package installed, you will need to install at least `inference-sdk`:

```bash
pip install inference-sdk
```


</details>

<details><summary> 💡 You need Roboflow account to use our hosted platform </summary>

* [Create account](https://roboflow.com/)
* [Get your API key](https://docs.roboflow.com/api-reference/authentication)

</details>

```python
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="https://infer.roboflow.com", api_key="<YOUR_ROBOFLOW_API_KEY>")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # you do not need to provide the `yolo_world/` prefix here
)
```


Self-hosted `inference` server
<details><summary> 💡 Please remember to clean up old version of docker image </summary>

If you ever used the `inference` server before, please run:

```bash
docker rmi roboflow/roboflow-inference-server-cpu:latest
```

or, if you have a GPU on the machine:

```bash
docker rmi roboflow/roboflow-inference-server-gpu:latest
```

to make sure the newest version of the image is pulled.

</details>

<details><summary> 💡 Please make sure you run the server and have sdk installed </summary>

If you do not have the whole `inference` package installed, you will need to install at least `inference-cli` and `inference-sdk`:

```bash
pip install inference-sdk inference-cli
```

Make sure you start a local instance of the `inference` server before running the code:

```bash
inference server start
```

</details>

```python
import cv2
from inference_sdk import InferenceHTTPClient

# point the client at your local inference server (default port 9001)
client = InferenceHTTPClient(api_url="http://127.0.0.1:9001", api_key="<YOUR_ROBOFLOW_API_KEY>")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",
)
```

0.9.16

Not secure
🚀 Added

🎬 `InferencePipeline` can now process the video using your custom logic

Prior to `v0.9.16`, `InferencePipeline` could only run inference against Roboflow models. Now you can inject arbitrary logic of your choice and process videos (files and streams) using a custom function you create. Just look at the example:

```python
import os
import json

from inference.core.interfaces.camera.entities import VideoFrame
from inference import InferencePipeline

TARGET_DIR = "./my_predictions"


class MyModel:

    def __init__(self, weights_path: str):
        self._model = your_model_loader(weights_path)

    def infer(self, video_frame: VideoFrame) -> dict:
        return self._model(video_frame.image)


def save_prediction(prediction: dict, video_frame: VideoFrame) -> None:
    with open(os.path.join(TARGET_DIR, f"{video_frame.frame_id}.json"), "w") as f:
        json.dump(prediction, f)


my_model = MyModel("./my_model.pt")

pipeline = InferencePipeline.init_with_custom_logic(
    video_reference="./my_video.mp4",
    on_video_frame=my_model.infer,  # your custom video frame processing function
    on_prediction=save_prediction,  # your custom sink for predictions
)

# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()
```

That's not everything! Remember our `workflows` feature? We've just added `workflows` into `InferencePipeline` (in experimental mode). Check `InferencePipeline.init_with_workflow(...)` to test the feature.

❗ Breaking change: we've reverted changes introduced in `v0.9.15` to `InferencePipeline.init(...)` making it compatible with `YOLOWorld` model. Now, you would need to use `InferencePipeline.init_with_yolo_world(...)` as shown [here](https://github.com/roboflow/inference/blob/main/development/stream_interface/yolo_world_demo.py):
```python
pipeline = InferencePipeline.init_with_yolo_world(
    video_reference="YOUR-VIDEO",
    on_prediction=...,
    classes=["person", "dog", "car", "truck"],
)
```


**We've updated 📖 [docs](https://inference.roboflow.com/using_inference/inference_pipeline/)** to make it easy to use new feature.

Thanks to paulguerrie for the great contribution!

🌱 Changed
* Huge changes in 📖 [docs](https://inference.roboflow.com/) - thanks capjamesg, SkalskiP, SolomonLake for contribution
* Improved contributor experience by adding contributor guide and separating GHA CI, such that most important tests could work against repository fork
* `OpenVINO` as the default ONNX Execution Provider for x86-based docker images, improving inference speed (probicheaux)
* Camera properties in `InferencePipeline` can be set now by caller (sberan)

🔨 Fixed
* added missing `structlog` dependency to package (paulguerrie)
* clarified models licence (yeldarby)
* bugs in lambda HTTP inference
* fixed portion of security vulnerabilities
* ❗ **breaking**: Two exceptions (`WorkspaceLoadError`, `MalformedWorkflowResponseError`), when raised, will now result in an HTTP 502 error instead of HTTP 500 as previously
* bug in `workflows` where class filters at the level of detection-based model blocks were not applied
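Mapping specific exception types to HTTP status codes, as in the breaking change above, is typically done with a small lookup table. A minimal sketch (illustrative only, not `inference`'s actual implementation; the exception classes here are stand-in stubs):

```python
# stand-in stubs for the exception classes named above
class WorkspaceLoadError(Exception):
    pass


class MalformedWorkflowResponseError(Exception):
    pass


# exceptions indicating a failed upstream dependency map to 502 (Bad Gateway)
STATUS_BY_EXCEPTION = {
    WorkspaceLoadError: 502,
    MalformedWorkflowResponseError: 502,
}


def status_code_for(exc: Exception) -> int:
    """Resolve an HTTP status for an exception, defaulting to 500."""
    return STATUS_BY_EXCEPTION.get(type(exc), 500)
```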


New Contributors
* hansent made their first contribution in https://github.com/roboflow/inference/pull/293
* hvaria made their first contribution in https://github.com/roboflow/inference/pull/302

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.15...v0.9.16


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.