Inference

Latest version: v0.29.1


Page 11 of 14

0.9.9rc23

Not secure
This is a draft release of `v0.9.9`.

0.9.8

Not secure
What's Changed
* Add changes that eliminate mistakes spotted while initial e2e tests by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/204
* Add ZoomInfo integration by capjamesg in https://github.com/roboflow/inference/pull/205
* Added Kubernetes helm chart by bigbitbus in https://github.com/roboflow/inference/pull/206
* Wrap lambda deployment with AL model manager by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/207
* Emable SSL on Redis connection based on env config (to enable AWS lambda connectivity) by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/209
* Add Grounding DINO to Inference by capjamesg in https://github.com/roboflow/inference/pull/107
* Extend inference SDK with client for (almost all) core models by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/212
* API Key Not Required by Methods by paulguerrie in https://github.com/roboflow/inference/pull/211
* Expose InferencePipeline at the top level by yeldarby in https://github.com/roboflow/inference/pull/210
* Built In Jupyter Notebook by paulguerrie in https://github.com/roboflow/inference/pull/213
* Fix problem with keyless access and Active Learning by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/214

Highlights

Grounding DINO
Support for a new core model, [Grounding DINO](https://inference.roboflow.com/foundation/grounding_dino/), has been added. Grounding DINO is a zero-shot object detection model that you can use to identify objects in images and videos using arbitrary text prompts.
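As a rough sketch of how a prompt-driven request against a locally running inference server could look (the endpoint path and field names below are assumptions, not the documented schema — the linked docs have the authoritative request format):

```python
import base64


def build_grounding_dino_request(image_path, text_prompts, api_key="YOUR_API_KEY"):
    """Build a zero-shot detection request body with free-text prompts.

    Field names here are illustrative assumptions; check the Grounding DINO
    docs linked above for the exact schema of your server version.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return {
        "api_key": api_key,
        "image": {"type": "base64", "value": encoded},
        "text": text_prompts,  # e.g. ["forklift", "person wearing a helmet"]
    }


# The payload would then be POSTed to a running server, e.g. (hypothetical path):
# requests.post("http://localhost:9001/grounding_dino/infer", json=payload)
```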

Inference SDK For Core Models
You can now use the Inference SDK with core models (like CLIP). No more complicated request and payload formatting. [See the docs here](https://inference.roboflow.com/inference_sdk/http_client/#client-for-core-models).
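A minimal sketch of what that workflow could look like for CLIP, written against a client object so the wiring is explicit (the method names follow the SDK docs linked above and should be verified against your installed `inference_sdk` version):

```python
def embed_and_compare(client, image_path, prompts):
    """Embed an image and score it against text prompts via a core-model client.

    Method names are taken from the SDK docs for core models; verify them
    against your installed inference_sdk version.
    """
    embedding = client.get_clip_image_embeddings(inference_input=image_path)
    comparison = client.clip_compare(subject=image_path, prompt=prompts)
    return embedding, comparison


# Against a running inference server this might be wired up as:
# from inference_sdk import InferenceHTTPClient
# client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="YOUR_KEY")
# embedding, scores = embed_and_compare(client, "image.jpg", ["a cat", "a dog"])
```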

Built In Jupyter Notebook
Roboflow Inference Server containers now include a built-in Jupyter notebook for development and testing. This notebook can be accessed via the inference server landing page. To use it, go to `localhost:9001` in your browser after starting an inference server, then select "Jump Into An Inference Enabled Notebook". This opens a new tab with a JupyterLab session, preloaded with example notebooks and all of the `inference` dependencies.

New Contributors
* bigbitbus made their first contribution in https://github.com/roboflow/inference/pull/206

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.7...v0.9.8

0.9.7

Not secure
What's Changed
* Bump cuda version for parallel by probicheaux in https://github.com/roboflow/inference/pull/191
* Add stream management HTTP api by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/180
* Peter/fix orjson by probicheaux in https://github.com/roboflow/inference/pull/192
* Introduce model aliases by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/193
* Fix problem with device request not being list but tuple by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/197
* Add inference server stop command by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/194
* Inference server start takes env file by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/195
* Add pull image progress display by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/198
* Improve Inference documentation by capjamesg in https://github.com/roboflow/inference/pull/183
* Catch CLI Error When Docker Is Not Running by paulguerrie in https://github.com/roboflow/inference/pull/203
* Introduce unified batching by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/199
* Change the default value for 'only_top_classes' option of close-to-threshold sampling strategy of AL by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/200
* updated API_KEY to ROBOFLOW_API_KEY for clarity by josephofiowa in https://github.com/roboflow/inference/pull/202

Highlights
Stream Management API (Enterprise)
The stream management API is designed for users who need to run inference against online video streams with Roboflow object-detection models. It builds on the familiar `inference.Stream()` and `InferencePipeline()` interfaces from the open-source version of the library, adding a management layer that lets users remotely control the state of inference pipelines through the HTTP management interface integrated into this package. [More info](https://github.com/roboflow/inference/tree/main/inference/enterprise/stream_management).
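The remote-control idea can be sketched as building command URLs against the management interface. The endpoint names and port below are assumptions based on the linked docs, not a documented contract — verify them against your deployment:

```python
# Assumed management-interface address; adjust for your deployment.
BASE_URL = "http://localhost:8080"


def pipeline_url(command, pipeline_id=None):
    """Build a management-interface URL for a pipeline command.

    Command names (list_pipelines, pause, resume, terminate, ...) are
    assumptions drawn from the linked docs; confirm them before use.
    """
    if pipeline_id is None:
        return f"{BASE_URL}/{command}"            # e.g. list_pipelines
    return f"{BASE_URL}/{command}/{pipeline_id}"  # e.g. pause a specific pipeline


# With a running management server, requests could then be issued like:
# requests.get(pipeline_url("list_pipelines"))
# requests.post(pipeline_url("pause", "my-pipeline-id"))
```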

Model Aliases
Some common public models now have convenient aliases! With this release, the COCO base weights for YOLOv8 models can be accessed with user-friendly model IDs like `yolov8n-640`. See all available model aliases [here](https://github.com/roboflow/inference/blob/main/inference/models/aliases.py).
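The alias mechanism is essentially a lookup from friendly names to full Roboflow model IDs. A minimal sketch of the idea (the mapping entry below is illustrative — the authoritative table lives in the linked `aliases.py`):

```python
# Illustrative excerpt; the real table is in inference/models/aliases.py.
MODEL_ALIASES = {
    "yolov8n-640": "coco/3",  # assumed mapping, shown for illustration only
}


def resolve_model_id(model_id):
    """Return the underlying model ID for an alias, or the ID unchanged."""
    return MODEL_ALIASES.get(model_id, model_id)


# An alias can then be used anywhere a model ID is accepted, e.g.:
# model = get_roboflow_model(model_id="yolov8n-640", api_key="YOUR_KEY")
```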

Other Improvements
- Improved inference CLI commands
- Unified batching APIs so that all model types can accept batch requests
- Speed improvements for HTTP interface

New Contributors
* josephofiowa made their first contribution in https://github.com/roboflow/inference/pull/202

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.6...v0.9.7

0.9.7.rc2

0.9.7.rc1

0.9.6

Not secure
What's Changed
* Automated Build for Parallel Interface by paulguerrie in https://github.com/roboflow/inference/pull/168
* Deprecate TRT Support by paulguerrie in https://github.com/roboflow/inference/pull/169
* Better API Key Docs and Error Handling by paulguerrie in https://github.com/roboflow/inference/pull/171
* Add true implementation for AL configuration getter by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/173
* Bug Fix for Numpy Inputs by paulguerrie in https://github.com/roboflow/inference/pull/172
* features/sv-from-roboflow-no-need-class-list-args by ShingoMatsuura in https://github.com/roboflow/inference/pull/149
* Add development documentation of Active Learning by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/167
* Refactor inference methods to use make_response directly by SkalskiP in https://github.com/roboflow/inference/pull/147
* Updated HTTP Quickstart by paulguerrie in https://github.com/roboflow/inference/pull/176
* Peter/cogvlm by probicheaux in https://github.com/roboflow/inference/pull/175
* Error Handling for Onnx Session Creation by paulguerrie in https://github.com/roboflow/inference/pull/177
* Slim Docker Images by paulguerrie in https://github.com/roboflow/inference/pull/178
* Rename cog to cogvlm by paulguerrie in https://github.com/roboflow/inference/pull/182
* Wheel and Setuptools Upgrade by paulguerrie in https://github.com/roboflow/inference/pull/184
* Finalize keypoint detection by SolomonLake in https://github.com/roboflow/inference/pull/174
* Parallel Entrypoint Cleanup by probicheaux in https://github.com/roboflow/inference/pull/179
* Peter/orjson by probicheaux in https://github.com/roboflow/inference/pull/166
* Remove Legacy Cache Path by paulguerrie in https://github.com/roboflow/inference/pull/185
* Multi-Stage Builds by paulguerrie in https://github.com/roboflow/inference/pull/186
* Revert "Peter/orjson" by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/190
* Accept numpy image in batch as base64 encoded string by sberan in https://github.com/roboflow/inference/pull/187
* Improve missing api key error handling by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/188

Highlights
CogVLM
Inference server users can now run CogVLM for a fully self-hosted, multimodal LLM. [See the example here](https://github.com/roboflow/inference/blob/main/examples/cogvlm/cogvlm_client.py).
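A hedged sketch of what a request to the server's CogVLM route could look like (the field names and route are assumptions modeled on the linked client example — consult `cogvlm_client.py` for the authoritative format):

```python
def build_cogvlm_request(prompt, image_base64, api_key="YOUR_API_KEY"):
    """Build a request body for the server's CogVLM route.

    Field names are assumptions based on the linked client example;
    verify them against cogvlm_client.py for your server version.
    """
    return {
        "api_key": api_key,
        "model_id": "cogvlm",
        "prompt": prompt,
        "image": {"type": "base64", "value": image_base64},
    }


# The body would then be POSTed to a running server, e.g. (assumed route):
# requests.post("http://localhost:9001/llm/cogvlm", json=payload)
```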

Slim Docker Images
For use cases that do not need Core Model functionality (e.g. CLIP), there are `-slim` docker images available which include fewer dependencies and are much smaller.
* roboflow/roboflow-inference-server-cpu-slim
* roboflow/roboflow-inference-server-gpu-slim

Breaking Changes
Infer API Update
The `infer()` method of Roboflow models now returns an `InferenceResponse` object instead of raw model output. This means that using models in application logic should feel similar to using models via the HTTP interface. In practice, programs that used the following pattern

```python
...
model = get_roboflow_model(...)
results = model.infer(...)
results = model.make_response(...)
...
```


should be updated to

```python
...
model = get_roboflow_model(...)
results = model.infer(...)
...
```


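Because `infer()` now returns a structured response rather than raw model output, downstream code reads predictions from the response object. A hedged sketch of consuming an object-detection result (attribute names follow the HTTP response schema and should be verified against the `InferenceResponse` type in your version):

```python
def summarize_detections(response):
    """Collect (class_name, confidence) pairs from an object-detection response.

    Attribute names are assumptions based on the HTTP response schema;
    verify them against the InferenceResponse type in your version.
    """
    return [(p.class_name, p.confidence) for p in response.predictions]
```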
New Contributors
* ShingoMatsuura made their first contribution in https://github.com/roboflow/inference/pull/149

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.5...v0.9.6


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.