Inference

Latest version: v0.29.1


```python
result = client.infer_from_yolo_world(
    inference_input=YOUR_IMAGE,
    class_names=["dog", "cat"],
)
```


Active Learning 🀝 `workflows`
Active Learning data collection made simple with `workflows` πŸ”₯ Now, with just a little bit of configuration, you can start collecting data to improve your model over time. Just take a look at how easy it is:

<div align="center">
<video src="https://github.com/roboflow/inference/assets/146137186/06e0b355-51f3-486d-8a5b-07123284b0e9" />
</div>

Key features:
* works for all models supported on the Roboflow platform, including the ones from Roboflow Universe - making it trivial to use an off-the-shelf model during the project kick-off stage to collect a dataset while serving meaningful predictions
* combines well with multiple `workflows` blocks - including `DetectionsConsensus` - making it possible to sample based on the predictions of an ensemble of models πŸ’₯
* the Active Learning block may use the project-level Active Learning config or define the Active Learning strategy directly in the block definition (refer to the [Active Learning documentation πŸ“– ](https://github.com/roboflow/inference/blob/main/inference/core/active_learning/README.md) for details on how to configure data collection)

See the [documentation πŸ“– ](https://github.com/roboflow/inference/tree/main/inference/enterprise/workflows#activelearningdatacollector) of the new `ActiveLearningDataCollector` for detailed info.
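For illustration, a step using this block inside a workflow definition could look roughly like the sketch below - the field names are assumptions made for this example, so verify them against the documentation linked above:

```python
# Hypothetical fragment of a workflow definition - the field names below are
# assumed for illustration; consult the ActiveLearningDataCollector docs for
# the exact manifest.
active_learning_step = {
    "type": "ActiveLearningDataCollector",
    "name": "data_collection",
    "image": "$inputs.image",
    "predictions": "$steps.detection.predictions",
    # either rely on the project-level Active Learning config of the target
    # dataset, or define the sampling strategy directly in the block
    "target_dataset": "my-project",
}
```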

🌱 Changed
`InferencePipeline` now works with all models supported on the Roboflow platform πŸŽ†
For a long time, `InferencePipeline` worked only with object-detection models. This is no longer the case - from now on, other types of models supported on the Roboflow platform (including stubs, like `my-project/0`) work with `InferencePipeline`. No changes are required in existing code - just pass the `model_id` of your model and the pipeline should work. Sinks suited for detection-only models were adjusted to ignore non-compliant prediction formats and produce warnings notifying about the incompatibility.
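A minimal sketch of that flow (the `model_id` below is a placeholder - any model from your workspace or Roboflow Universe should work):

```python
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

# Any model supported on the Roboflow platform should now work here -
# not only object-detection models.
pipeline = InferencePipeline.init(
    model_id="my-project/1",     # placeholder - put your own model_id here
    video_reference=0,           # webcam; a file path or stream URL also works
    on_prediction=render_boxes,  # detection-oriented sink; warns on other model types
)
pipeline.start()
pipeline.join()
```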

πŸ”¨ Fixed
* Bug in `yolact` model in https://github.com/roboflow/inference/pull/266

πŸ† Contributors
paulguerrie (Paul Guerrie), probicheaux (Peter Robicheaux), PawelPeczek-Roboflow (PaweΕ‚ PΔ™czek)


**Full Changelog**: https://github.com/roboflow/inference/compare/v0.9.10...v0.9.11

Execution Engine 1.4.0

* **New Kind**: A [secret](https://inference.roboflow.com/workflows/kinds/secret/) kind for credentials is now available. No action is needed for existing blocks, but future blocks should use it for secret parameters (see the sketch below).

* **Serialization Fix**: Fixed a bug where non-batch outputs weren't being serialized in v1.3.0.

* **Execution Engine Fix**: Resolved an issue with empty inputs being passed to downstream blocks. This makes workflow execution smoother and may fix previously observed issues without any changes on your side.

See [full changelog](https://inference.roboflow.com/workflows/execution_engine_changelog/#execution-engine-v140-inference-v0290) for more details.
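A sketch of what declaring such a parameter could look like in a block manifest - note that the `SECRET_KIND` constant name and import paths are assumptions here, so verify them against the kinds documentation linked above:

```python
from typing import Literal

from inference.core.workflows.execution_engine.entities.types import (
    SECRET_KIND,  # assumed constant name for the new "secret" kind
    Selector,
)
from inference.core.workflows.prototypes.block import WorkflowBlockManifest


class NotifierManifest(WorkflowBlockManifest):
    type: Literal["my_plugin/notifier@v1"]
    # credentials flow through the "secret" kind instead of a plain string
    api_token: Selector(kind=[SECRET_KIND])
```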

🚧 Changed

Open Workflows on Roboflow Platform

We are moving towards shareable Workflow Definitions on the Roboflow Platform. To reflect that, yeldarby made the `api_key` optional in Workflows Run requests in https://github.com/roboflow/inference/pull/843

⛑️ Maintenance
* Update Docker Tag Logic by alexnorell in https://github.com/roboflow/inference/pull/840
* Make check_if_branch_is_mergeable.yml to succeed if merging to main by grzegorz-roboflow in https://github.com/roboflow/inference/pull/848
* Add workflow to check mergeable state executed on pull request by grzegorz-roboflow in https://github.com/roboflow/inference/pull/847

**Full Changelog**: https://github.com/roboflow/inference/compare/v0.28.2...v0.29.0

Execution Engine 1.3.0

> [!TIP]
> Changes introduced in Execution Engine `v1.3.0` are non-breaking, but we shipped a couple of nice extensions and we **encourage** contributors to adopt them.
>
> Full details of the changes and migration guides are available [here](https://inference.roboflow.com/workflows/execution_engine_changelog/#execution-engine-v130-inference-v0270).


βš™οΈ [Kinds](https://inference.roboflow.com/workflows/kinds/) with dynamic serializers and deserializers
* Added serializers/deserializers for each kind, enabling integration with external systems.
* Updated the Blocks Bundling page to reflect these changes.
* Enhanced `roboflow_core` kinds with suitable serializers/deserializers.

See our [updated blocks bundling guide](https://inference.roboflow.com/workflows/blocks_bundling/#serializers-and-deserializers-for-kinds) for more details.
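For bundle authors, a minimal sketch of what this could look like in a plugin module - the `KINDS_SERIALIZERS` / `KINDS_DESERIALIZERS` convention follows the bundling guide linked above, but treat the exact signatures as assumptions and verify them against the guide:

```python
from typing import Any


def serialize_my_kind(value: Any) -> Any:
    # convert the in-memory representation into a JSON-friendly payload
    return {"payload": str(value)}


def deserialize_my_kind(parameter_name: str, value: Any) -> Any:
    # rebuild the in-memory representation from the serialized payload
    return value["payload"]


# exposed at plugin-module level so the Execution Engine can pick them up
KINDS_SERIALIZERS = {"my_kind": serialize_my_kind}
KINDS_DESERIALIZERS = {"my_kind": deserialize_my_kind}
```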


πŸ†“ Any data can be now a Workflow input

We've added a new Workflows input type, `WorkflowBatchInput`, which is capable of accepting any `kind`, unlike previous inputs like `WorkflowImage`. What's even nicer - you can also specify the dimensionality level for `WorkflowBatchInput`, basically **making it possible to break each workflow down into single steps executed in *debug* mode**.

Take a look at the [πŸ“– docs](https://inference.roboflow.com/workflows/definitions/#generic-batch-oriented-inputs) to learn more.
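As a sketch, such an input could be declared in a workflow definition roughly like this (the kind name and `dimensionality` field are assumed - the docs linked above have the exact syntax):

```python
# Hypothetical inputs section of a workflow definition.
workflow_inputs = [
    {
        "type": "WorkflowBatchInput",
        "name": "predictions",
        # any kind is accepted - here, object-detection predictions
        "kind": ["object_detection_prediction"],
        # explicit dimensionality level (assumed field name)
        "dimensionality": 1,
    },
]
```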

πŸ‹οΈ Easier blocks development

We got tired of wondering whether a specific field in a block manifest should be marked with the `StepOutputSelector`, `WorkflowImageSelector`,
`StepOutputImageSelector` or `WorkflowParameterSelector` type annotation. That was **very confusing** and effectively increased the difficulty of contributions.

Since the selector type annotations are **required** - they tell the Execution Engine which fields of a block manifest define *placeholders* for data of a specific *kind* - we could not eliminate those annotations, but we are making them easier to understand by introducing a generic annotation called `Selector(...)`.

`Selector(...)` alone no longer tells the Execution Engine that the block accepts batch-oriented data - so we replaced the old `block_manifest.accepts_batch_input()` method with two new ones:
* `block_manifest.get_parameters_accepting_batches()` - returns the list of params that the `WorkflowBlock.run(...)` method accepts wrapped in a `Batch[X]` container
* `block_manifest.get_parameters_accepting_batches_and_scalars()` - returns the list of params that the `WorkflowBlock.run(...)` method accepts either wrapped in a `Batch[X]` container or provided as stand-alone scalar values
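A sketch of a manifest written against the new API (import paths assumed from `inference` `v0.27.0` - the guides linked below have the authoritative version):

```python
from typing import List, Literal

from inference.core.workflows.execution_engine.entities.base import OutputDefinition
from inference.core.workflows.execution_engine.entities.types import (
    IMAGE_KIND,
    Selector,
)
from inference.core.workflows.prototypes.block import WorkflowBlockManifest


class MyBlockManifest(WorkflowBlockManifest):
    type: Literal["my_plugin/my_block@v1"]
    # one generic Selector(...) replaces the four specialised annotations
    image: Selector(kind=[IMAGE_KIND])

    @classmethod
    def get_parameters_accepting_batches(cls) -> List[str]:
        # run(...) receives "image" wrapped in a Batch[...] container
        return ["image"]

    @classmethod
    def describe_outputs(cls) -> List[OutputDefinition]:
        return [OutputDefinition(name="result")]
```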

> [!TIP]
> To adopt changes while creating new block - visit our updated [blocks creation](https://inference.roboflow.com/workflows/create_workflow_block/) guide.
>
> To migrate existing blocks - take a look at [migration guide](https://inference.roboflow.com/workflows/execution_engine_changelog/#execution-engine-v130-inference-v0270).

πŸ–ŒοΈ Increased JPEG compression quality
`WorkflowImageData` has a property called `base64_image`, which is auto-generated from the `numpy_image` associated with the object. In previous versions of `inference`, the default compression level was `90%` - we increased it to `95%`. We expect this change to generally improve the quality of images passed between steps, yet there is no guarantee of better results from the models (that depends on how the models were trained). Details of the change: https://github.com/roboflow/inference/pull/798

> [!CAUTION]
> Small changes in model predictions are expected due to this change - as it may happen that we are passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.

🧠 Change in Roboflow models blocks
We've changed the way Roboflow models blocks work on the Roboflow hosted platform. Previously, they used the `numpy_image` property of `WorkflowImageData` as the input to `InferenceHTTPClient` while executing remote calls - which usually meant serializing a numpy image to JPEG and then to `base64`, whereas on the Roboflow hosted platform we usually already had a `base64` representation of the image. Effectively, we were:
* slowing down the processing
* artificially decreasing the quality of images

This is no longer the case - now we only transform the image representation (and apply lossy compression) when needed. Details of the change: https://github.com/roboflow/inference/pull/798.

> [!CAUTION]
> Small changes in model predictions are expected due to this change - as it may happen that we are passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.

πŸ—’οΈ New kind `inference_id`

We've diagnosed the need to give semantic meaning to the inference identifiers that external systems use as correlation IDs.
That's why we are introducing a new kind - [`inference_id`](https://inference.roboflow.com/workflows/kinds/inference_id/).
We encourage block developers to use the new kind.

πŸ—’οΈ New field available in `video_metadata` and `image` kinds

We've added a new optional field to video metadata - `measured_fps` - take a look at the [πŸ“– docs](https://inference.roboflow.com/workflows/internal_data_types/#videometadata).


πŸ—οΈ Changed
* Disable telemetry when running YOLO world by grzegorz-roboflow in https://github.com/roboflow/inference/pull/800
* Pass webrtc TURN config as request parameter when calling POST /inference_pipelines/initialise_webrtc by grzegorz-roboflow in https://github.com/roboflow/inference/pull/801
* Remove reset from YOLO settings by grzegorz-roboflow in https://github.com/roboflow/inference/pull/802
* Pin all dependencies and update to new versions of libs by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/803
* bumping owlv2 version and putting cache size in env by isaacrob-roboflow in https://github.com/roboflow/inference/pull/813

πŸ”§ Fixed
* Florence 2 - fixing model caching by probicheaux in https://github.com/roboflow/inference/pull/808
* Use measured fps when fetching frames from live stream by grzegorz-roboflow in https://github.com/roboflow/inference/pull/805
* Fix issue with label visualisation by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/811 and PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/814


**Full Changelog**: https://github.com/roboflow/inference/compare/v0.26.1...v0.27.0

0.29.1

πŸ› οΈ Fixed

`python-multipart` security issue fixed

> [!CAUTION]
> We are **addressing** the following vulnerability, recently detected in the `python-multipart` library.
>
> **Issue summary**
> When parsing form data, python-multipart skips line breaks (CR `\r` or LF `\n`) in front of the first boundary and any tailing bytes after the last boundary. This happens one byte at a time and emits a log event each time, which may cause excessive logging for certain inputs.
>
> An attacker could abuse this by sending a malicious request with lots of data before the first or after the last boundary, causing high CPU load and stalling the processing thread for a significant amount of time. In case of ASGI application, this could stall the event loop and prevent other requests from being processed, resulting in a denial of service (DoS).
>
> **Impact**
> Applications that use python-multipart to parse form data (or use frameworks that do so) are affected.
>
> **Next steps**
> We advise all `inference` clients to migrate to version `0.29.1`, especially when the `inference` docker image is in use. Clients using
> older versions of the Python package may also upgrade the vulnerable dependency in their environment:
> ```bash
> pip install "python-multipart==0.0.19"
> ```
>
> **Details of the change:** https://github.com/roboflow/inference/pull/855

Remaining fixes
* Fix problem with docs rendering by PawelPeczek-Roboflow in https://github.com/roboflow/inference/pull/854
* Remove piexif dependency by iurisilvio in https://github.com/roboflow/inference/pull/851


**Full Changelog**: https://github.com/roboflow/inference/compare/v0.29.0...v0.29.1

0.29.0

πŸš€ Added

πŸ“§ Slack and Twilio notifications in Workflows

We've just added two notification blocks to the Workflows ecosystem - [Slack](https://inference.roboflow.com/workflows/blocks/slack_notification/) and [Twilio](https://inference.roboflow.com/workflows/blocks/twilio_sms_notification/). Now, there is nothing stopping you from sending notifications from your Workflows!

https://github.com/user-attachments/assets/52ac8a94-69e4-4304-a0b8-8c77695e688f
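As a rough sketch, wiring the Slack block into a definition could look like this - the field names are assumed for illustration, and the block docs linked above have the exact manifest:

```python
# Hypothetical workflow step - verify the field names against the
# Slack notification block documentation.
slack_step = {
    "type": "roboflow_core/slack_notification@v1",
    "name": "slack_alert",
    "slack_token": "$inputs.slack_token",  # keep tokens out of the definition
    "channel": "detections-alerts",
    "message": "New detections arrived!",
}
```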

`inference-cli` 🀝 Workflows

We are happy to share that `inference-cli` now has a new command - `inference workflows` - that makes it possible to process data with Workflows without any additional Python scripts needed πŸ˜„

πŸŽ₯ Video files processing
* Input a video path, specify an output directory, and run any workflow.
* Frame-by-frame results saved as CSV or JSONL.
* Your Workflow outputs images? Get an output video built from them if you want.

πŸ–ΌοΈ Process images and directories of images πŸ“‚
* Outputs stored in subdirectories with JSONL/CSV aggregation available.
* Fault-tolerant processing:
  * βœ… Resume after failure (tracked in logs).
  * πŸ”„ Option to force re-processing.

Review our [πŸ“– docs](https://inference.roboflow.com/inference_helpers/cli_commands/workflows/) to discover all options!

<details>
<summary>πŸ‘‰ <b>Try the command</b></summary>

To try the command, simply run:
```bash
pip install inference

inference workflows process-images-directory \
    -i {your_input_directory} \
    -o {your_output_directory} \
    --workspace_name {your-roboflow-workspace-url} \
    --workflow_id {your-workflow-id} \
    --api-key {your_roboflow_api_key}
```

</details>


https://github.com/user-attachments/assets/383e5300-da44-4526-b99f-9a301d944557

πŸ”‘ Secrets provider block in Workflows

Many Workflows blocks require credentials to work correctly but, so far, the ecosystem only provided one secure option for passing those credentials - workflow parameters - forcing client applications to manipulate secret values.

Since this is not a handy solution, we decided to create the [Environment Secrets Store block](https://inference.roboflow.com/workflows/blocks/environment_secrets_store/), which is capable of fetching credentials from the environment variables of the `inference` server. Thanks to that, admins can now set up the server, and client code does not need to handle secrets ✨
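A rough sketch of the idea - the field names are assumed for illustration, so check the block docs linked above for the exact manifest:

```python
# Hypothetical step definition - verify the field names against the
# Environment Secrets Store block documentation.
secrets_step = {
    "type": "roboflow_core/environment_secrets_store@v1",
    "name": "secrets",
    # names of environment variables (set on the inference server)
    # that hold the credentials
    "variables_storing_secrets": ["SLACK_TOKEN"],
}
# downstream blocks then reference the fetched secrets via this step's outputs
```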

⚠️ Security Notice:
For enhanced security, always use secret providers or Workflow parameters to handle credentials. Hardcoding secrets into your Workflows is strongly discouraged.

πŸ”’ Limitations:
This block is designed for self-hosted inference servers only. Due to security concerns, exporting environment variables is not supported on the hosted Roboflow Platform.

🌐 OPC Workflow block πŸ“‘

The OPC Writer block provides a versatile set of integration options that enable enterprises to seamlessly connect with OPC-compliant systems and incorporate real-time data transfer into their workflows. Here’s how you can leverage the block’s flexibility for various integration scenarios that industry-class solutions require.

✨ Key features
* **Seamless OPC Integration:** Easily send data to OPC servers, whether on local networks or cloud environments, ensuring your workflows can interface with industrial control systems, IoT devices, and SCADA systems.
* **Cross-Platform Connectivity**: Built with [asyncua](https://github.com/FreeOpcUa/opcua-asyncio), the block enables smooth communication across multiple platforms, enabling integration with existing infrastructure and ensuring compatibility with a wide range of OPC standards.

> [!IMPORTANT]
> This Workflow block is released under [Roboflow Enterprise License](https://github.com/roboflow/inference/blob/main/inference/enterprise/LICENSE.txt) and is not available by default on Roboflow Hosted Platform.
> Anyone interested in integrating Workflows with industry systems through OPC - please [contact Roboflow Sales](https://roboflow.com/sales)

See grzegorz-roboflow's change in https://github.com/roboflow/inference/pull/842

πŸ› οΈ Fixed

0.28.2

πŸ”§ Fixed issue with `inference` package installation

On 26.11.2024, release `0.20.4` of the `tokenizers` library - a transitive dependency of `inference` - introduced a breaking change for `inference` clients using Python 3.8, causing the following errors during installation of recent (and older) versions of `inference`:

<details>
<summary>πŸ‘‰ MacOS</summary>


```
Downloading tokenizers-0.20.4.tar.gz (343 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error

Γ— Preparing metadata (pyproject.toml) did not run successfully.
β”‚ exit code: 1
╰─> [6 lines of output]

Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/

Checking for Rust toolchain....
[end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

Γ— Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details
```


</details>


<details>
<summary>πŸ‘‰ Linux</summary>

After installation, the following error was presented:

```
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1778: in _get_module
    return importlib.import_module("." + module_name, self.__name__)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
    ???
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:961: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
    ???
<frozen importlib._bootstrap>:1014: in _gcd_import
    ???
<frozen importlib._bootstrap>:991: in _find_and_load
    ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:671: in _load_unlocked
    ???
<frozen importlib._bootstrap_external>:843: in exec_module
    ???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
    ???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/__init__.py:15: in <module>
    from . import (
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/mt5/__init__.py:36: in <module>
    from ..t5.tokenization_t5_fast import T5TokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py:23: in <module>
    from ...tokenization_utils_fast import PreTrainedTokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py:26: in <module>
    import tokenizers.pre_tokenizers as pre_tokenizers_fast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/__init__.py:78: in <module>
    from .tokenizers import (
E   ImportError: /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get

The above exception was the direct cause of the following exception:
tests/inference/models_predictions_tests/test_owlv2.py:4: in <module>
    from inference.models.owlv2.owlv2 import OwlV2
inference/models/owlv2/owlv2.py:11: in <module>
    from transformers import Owlv2ForObjectDetection, Owlv2Processor
<frozen importlib._bootstrap>:1039: in _handle_fromlist
    ???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1766: in __getattr__
    module = self._get_module(self._class_to_module[name])
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1780: in _get_module
    raise RuntimeError(
E   RuntimeError: Failed to import transformers.models.owlv2 because of the following error (look up to see its traceback):
E   /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get
```


</details>

> [!CAUTION]
> **We are fixing the problem in `inference` 0.28.2**, but it is not possible to fix older releases - those who need to fix it
> in their environments should modify their build so that installing `inference` also installs `tokenizers<=0.20.3`:
> ```bash
> pip install inference "tokenizers<=0.20.3"
> ```

πŸ”§ Fixed issue with CUDA and stream management API

While running the `inference` server and using the [stream management API](https://inference.roboflow.com/workflows/video_processing/overview/) to run Workflows against video inside a docker container, it was not possible to use CUDA due to a bug present from the very start of the feature. We are fixing it now.



**Full Changelog**: https://github.com/roboflow/inference/compare/v0.28.1...v0.28.2
