Inference

0.9.7.rc1

0.9.6

What's Changed
* Automated Build for Parallel Interface by paulguerrie in https://github.com/roboflow/inference/pull/168
* Deprecate TRT Support by paulguerrie in https://github.com/roboflow/inference/pull/169
* Better API Key Docs and Error Handling by paulguerrie in https://github.com/roboflow/inference/pull/171
* Add true implementation for AL configuration getter by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/173
* Bug Fix for Numpy Inputs by paulguerrie in https://github.com/roboflow/inference/pull/172
* features/sv-from-roboflow-no-need-class-list-args by ShingoMatsuura in https://github.com/roboflow/inference/pull/149
* Add development documentation of Active Learning by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/167
* Refactor inference methods to use make_response directly by SkalskiP in https://github.com/roboflow/inference/pull/147
* Updated HTTP Quickstart by paulguerrie in https://github.com/roboflow/inference/pull/176
* Peter/cogvlm by probicheaux in https://github.com/roboflow/inference/pull/175
* Error Handling for Onnx Session Creation by paulguerrie in https://github.com/roboflow/inference/pull/177
* Slim Docker Images by paulguerrie in https://github.com/roboflow/inference/pull/178
* Rename cog to cogvlm by paulguerrie in https://github.com/roboflow/inference/pull/182
* Wheel and Setuptools Upgrade by paulguerrie in https://github.com/roboflow/inference/pull/184
* Finalize keypoint detection by SolomonLake in https://github.com/roboflow/inference/pull/174
* Parallel Entrypoint Cleanup by probicheaux in https://github.com/roboflow/inference/pull/179
* Peter/orjson by probicheaux in https://github.com/roboflow/inference/pull/166
* Remove Legacy Cache Path by paulguerrie in https://github.com/roboflow/inference/pull/185
* Multi-Stage Builds by paulguerrie in https://github.com/roboflow/inference/pull/186
* Revert "Peter/orjson" by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/190
* Accept numpy image in batch as base64 encoded string by sberan in https://github.com/roboflow/inference/pull/187
* Improve missing api key error handling by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/188

Highlights
CogVLM
Inference server users can now run CogVLM as a fully self-hosted, multimodal LLM. [See the example here](https://github.com/roboflow/inference/blob/main/examples/cogvlm/cogvlm_client.py).
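
A minimal client sketch, assuming a locally running inference server on port 9001 and the `/llm/cogvlm` route used in the linked example (adjust to your deployment):

```python
import base64

import requests

SERVER_URL = "http://localhost:9001"  # assumed local deployment
API_KEY = "YOUR_ROBOFLOW_API_KEY"

with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "api_key": API_KEY,
    "model_id": "cogvlm",
    "image": {"type": "base64", "value": image_b64},
    "prompt": "Describe this image.",
}

# The route mirrors the linked cogvlm_client.py example; verify against your server version.
response = requests.post(f"{SERVER_URL}/llm/cogvlm", json=payload)
print(response.json())
```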

Slim Docker Images
For use cases that do not need Core Model functionality (e.g. CLIP), there are `-slim` Docker images available that include fewer dependencies and are much smaller.
* roboflow/roboflow-inference-server-cpu-slim
* roboflow/roboflow-inference-server-gpu-slim

Breaking Changes
Infer API Update
The `infer()` method of Roboflow models now returns an `InferenceResponse` object instead of raw model output. This means that using models in application logic should feel similar to using models via the HTTP interface. In practice, programs that used the following pattern

```python
...
model = get_roboflow_model(...)
results = model.infer(...)
results = model.make_response(...)
...
```


should be updated to

```python
...
model = get_roboflow_model(...)
results = model.infer(...)
...
```

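Because `infer()` now returns the same response objects the HTTP interface serializes, downstream code can read structured fields instead of raw model output. A minimal sketch, assuming an object detection model (field names follow the serialized HTTP response format; for batched inputs the return value may be a list):

```python
# Sketch only: assumes `model` is an object detection model loaded as above,
# so `results` behaves like the HTTP object detection response.
results = model.infer(image)

for prediction in results.predictions:
    # Field names mirror the serialized HTTP response.
    print(prediction.class_name, prediction.confidence)
```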

New Contributors
* ShingoMatsuura made their first contribution in https://github.com/roboflow/inference/pull/149

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.5...v0.9.6

0.9.5

Features, Fixes, and Improvements

* Fixed the automated pypi deploys by paulguerrie in https://github.com/roboflow/inference/pull/126
* Fixed broken docs links for entities by paulguerrie in https://github.com/roboflow/inference/pull/127
* revert accidental change to makefile by sberan in https://github.com/roboflow/inference/pull/128
* Update compatability_matrix.md by capjamesg in https://github.com/roboflow/inference/pull/129
* Model Validation On Load by paulguerrie in https://github.com/roboflow/inference/pull/131
* Use Simple Docker Commands in Tests by paulguerrie in https://github.com/roboflow/inference/pull/132
* No Exception Raised By Model Manager Remove Model by paulguerrie in https://github.com/roboflow/inference/pull/134
* Noted that inference stream only supports object detection by stellasphere in https://github.com/roboflow/inference/pull/136
* Fix URL in docs image by capjamesg in https://github.com/roboflow/inference/pull/138
* Deduce API keys from logs by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/140
* Fix problem with BGR->RGB and RGB->BGR conversions by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/137
* Update default API key parameter for get_roboflow_model function by SkalskiP in https://github.com/roboflow/inference/pull/142
* Documentation improvements by capjamesg in https://github.com/roboflow/inference/pull/133
* Hosted Inference Bug Fixes by paulguerrie in https://github.com/roboflow/inference/pull/143
* Introduce Active Learning by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/130
* Update HTTP inference docs by capjamesg in https://github.com/roboflow/inference/pull/145
* Speed Regression Fix - Remove Numpy Range Validation by paulguerrie in https://github.com/roboflow/inference/pull/146
* Introduce additional active learning sampling strategies by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/148
* Add stub endpoints to allow data collection without model by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/141
* Fix CLIP example by capjamesg in https://github.com/roboflow/inference/pull/150
* Fix outdated warning with 'inference' upgrade suggestion by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/154
* Allow setting cv2 camera capture props from .env file by sberan in https://github.com/roboflow/inference/pull/152
* Wrap pingback url by robiscoding in https://github.com/roboflow/inference/pull/151
* Introduce new stream interface by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/156
* Clarify Enterprise License by yeldarby in https://github.com/roboflow/inference/pull/158
* Async Model Manager by probicheaux in https://github.com/roboflow/inference/pull/111
* Peter/async model manager by probicheaux in https://github.com/roboflow/inference/pull/159
* Fix Critical and High Vulnerabilities in Docker Images by paulguerrie in https://github.com/roboflow/inference/pull/157
* Split Requirements For Unit vs. Integration Tests by paulguerrie in https://github.com/roboflow/inference/pull/160

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.3...v0.9.5.rc2

New `inference.Stream` interface

We are excited to introduce the upgraded version of our stream interface: `InferencePipeline`. Additionally, the `WebcamStream` class has evolved into a more versatile `VideoSource`.

This new abstraction is not only faster and more stable but also provides more granular control over the entire inference process.

Can I still use `inference.Stream`?

Absolutely! The old components remain unchanged for now. However, be aware that this abstraction is slated for deprecation over time. We encourage you to explore the new `InferencePipeline` interface and take advantage of its benefits.

What has been improved?

- **Performance:** Experience a significant boost in throughput (up to 5x) and improved latency for online inference on video streams using the YOLOv8n model.
- **Stability:** `InferencePipeline` can now automatically re-establish a connection for online video streams if a connection is lost.
- **[Prediction Sinks](https://github.com/roboflow/inference/blob/main/inference/core/interfaces/stream/sinks.py):** Prediction sinks make it easy to consume predictions without writing custom code.
- **Control Over Inference Process:** `InferencePipeline` intelligently adapts to the type of video source, whether a file or stream. Video files are processed frame by frame, while online streams prioritize real-time processing, dropping non-real-time frames.
- **Observability:** Gain insights into the processing state through events exposed by `InferencePipeline`. Reference implementations letting you monitor processing are also [available](https://github.com/roboflow/inference/blob/main/inference/core/interfaces/stream/watchdog.py).

How do I migrate to the new stream interface?

You only need to change a few lines of code to migrate to the new stream interface.

Below is an example that shows the old interface:

```python
import inference

def on_prediction(predictions, image):
    pass

inference.Stream(
    source="webcam",  # or "rtsp://0.0.0.0:8000/password" for an RTSP stream, or "file.mp4" for a video file
    model="rock-paper-scissors-sxsw/11",  # from Universe
    output_channel_order="BGR",
    use_main_thread=True,  # for OpenCV display
    on_prediction=on_prediction,
)
```


Here is the same code expressed in the new interface:

```python
from inference.core.interfaces.stream.inference_pipeline import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="rock-paper-scissors-sxsw/11",
    video_reference=0,
    on_prediction=render_boxes,
)
pipeline.start()
pipeline.join()
```



Note the slight change in the `on_prediction` handler, from:

```python
import numpy as np

def on_prediction(predictions: dict, image: np.ndarray) -> None:
    pass
```


to:

```python
from inference.core.interfaces.camera.entities import VideoFrame

def on_prediction(predictions: dict, video_frame: VideoFrame) -> None:
    pass
```

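Since a sink is just a callable with this signature, writing a custom one is straightforward. A minimal sketch (it assumes the `predictions` dict mirrors the HTTP response payload and that `VideoFrame` exposes `frame_id`; the model ID is the one from the example above):

```python
from inference.core.interfaces.camera.entities import VideoFrame
from inference.core.interfaces.stream.inference_pipeline import InferencePipeline

def count_detections(predictions: dict, video_frame: VideoFrame) -> None:
    # `predictions` mirrors the HTTP response payload for the model type.
    detections = predictions.get("predictions", [])
    print(f"frame {video_frame.frame_id}: {len(detections)} detections")

pipeline = InferencePipeline.init(
    model_id="rock-paper-scissors-sxsw/11",
    video_reference=0,  # webcam index; a file path or RTSP URL also works
    on_prediction=count_detections,
)
pipeline.start()
pipeline.join()
```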

Want to know more?

Here are useful references:
* [Stream API documentation](https://inference.roboflow.com/quickstart/run_model_on_rtsp_webcam/#new-stream-interface)
* [InferencePipeline documentation](https://github.com/roboflow/inference/blob/main/inference/core/interfaces/stream/inference_pipeline.py)
* [VideoSource documentation](https://github.com/roboflow/inference/blob/main/inference/core/interfaces/camera/video_source.py)

Parallel Roboflow Inference Server

The Roboflow Inference Server now supports concurrent processing. This version of the server accepts and processes requests asynchronously, running the web server, preprocessing, auto-batching, inference, and post-processing all in separate threads to increase server FPS throughput. Separate requests to the same model are batched on the fly as allowed by `$MAX_BATCH_SIZE`, and response handling then occurs independently. Images are passed via Python's `SharedMemory` module to maximize throughput.

These changes result in as much as a *76% speedup* on one measured workload.

> [!NOTE]
> Currently, only Object Detection, Instance Segmentation, and Classification models are supported by this module. Core models are not enabled.

> [!IMPORTANT]
> We require a Roboflow Enterprise License to use this in production. See inference/enterprise/LICENSE.txt for details.

How To Use Concurrent Processing
You can build the server using `./inference/enterprise/parallel/build.sh` and run it using `./inference/enterprise/parallel/run.sh`.

We provide a container at Docker Hub that you can pull using `docker pull roboflow/roboflow-inference-server-gpu-parallel:latest`. If you are pulling a pinned tag, be sure to change the `$TAG` variable in `run.sh`.

This is a drop-in replacement for the old server, so you can send requests using the [same API calls](https://inference.roboflow.com/quickstart/http_inference/#step-2-run-inference) you were using previously.

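For instance, here is a hedged sketch of driving the server with many concurrent requests from Python (the port, model ID, and legacy route format are assumptions based on the HTTP inference docs):

```python
import base64
from concurrent.futures import ThreadPoolExecutor

import requests

SERVER_URL = "http://localhost:9001"  # assumed local deployment
MODEL_ID = "your-model/1"             # placeholder Roboflow model ID
API_KEY = "YOUR_ROBOFLOW_API_KEY"

with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

def send_request(_: int) -> requests.Response:
    # Same request format as the standard server; batching happens server-side.
    return requests.post(
        f"{SERVER_URL}/{MODEL_ID}",
        params={"api_key": API_KEY},
        data=image_b64,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

with ThreadPoolExecutor(max_workers=32) as pool:
    responses = list(pool.map(send_request, range(100)))

print(sum(r.ok for r in responses), "of", len(responses), "requests succeeded")
```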

Performance
We measure and report performance across a variety of different task types by selecting random models found on Roboflow Universe.

Methodology

The following metrics were taken on a machine with eight cores and one GPU. The FPS metrics reflect the best of three trials. The column labeled 0.9.5.parallel reflects the latest concurrent FPS metrics. Instance segmentation metrics are calculated using `"mask_decode_mode": "fast"` in the request body. Requests are posted concurrently with a parallelism of 1000.

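For reference, a sketch of the request body used for the instance segmentation rows (the `/infer/instance_segmentation` route and payload shape are assumptions based on the server's HTTP API):

```python
import base64

import requests

with open("image.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model_id": "water-08xpr/1",  # one of the benchmarked models
    "api_key": "YOUR_ROBOFLOW_API_KEY",
    "image": {"type": "base64", "value": image_b64},
    "mask_decode_mode": "fast",   # the setting used for the segmentation benchmarks
}

response = requests.post("http://localhost:9001/infer/instance_segmentation", json=payload)
print(response.json())
```
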
Results
| Workspace | Model | Model Type | Split | 0.9.5.rc FPS | 0.9.5.parallel FPS |
|-----------|-------|------------|-------|--------------|--------------------|
| senior-design-project-j9gpp | nbafootage/3| object-detection | train | 30.2 fps | 44.03 fps |
| niklas-bommersbach-jyjff | dart-scorer/8| object-detection | train | 26.6 fps | 47.0 fps |
| geonu | water-08xpr/1 | instance-segmentation | valid | 4.7 fps | 6.1 fps |
| university-of-bradford | detecting-drusen_1/2 | instance-segmentation | train | 6.2 fps | 7.2 fps |
| fy-project-y9ecd | cataract-detection-viwsu/2 | classification | train | 48.5 fps | 65.4 fps |
| hesunyu | playing-cards-ir0wr/1 | classification | train | 44.6 fps | 57.7 fps |

0.9.4

**Summary**

This release includes new logic to validate models on load. This mitigates an issue seen when the model artifacts are corrupted during download.

0.9.3

**Summary**
This release includes:

- [DocTR](https://github.com/roboflow/inference/blob/main/inference/models/doctr/doctr_model.py) for detecting and recognizing text (see the sketch below)
- Updates to our stream interface
- Some bug fixes and other maintenance
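
A hedged sketch of calling DocTR through a running inference server (the `/doctr/ocr` route and payload shape are assumptions; check the linked model file and your server's API docs):

```python
import base64

import requests

with open("document.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "api_key": "YOUR_ROBOFLOW_API_KEY",
    "image": {"type": "base64", "value": image_b64},
}

# Route name is an assumption; verify against your server version.
response = requests.post("http://localhost:9001/doctr/ocr", json=payload)
print(response.json())
```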

0.9.2

**Summary**

- Fixed parsing of base64 image strings sent from a browser (an unnecessary prefix was being added)
- Validate that no more than `MAX_BATCH_SIZE` images are passed to object detection inference
- Default `MAX_BATCH_SIZE` to infinity
- Add batch regression tests
- Add CLI to readme
- Add generic stream object
- Add preprocess/predict/postprocess to Clip to match base interface
- Readme updates
- Landing page
