Safety vulnerability ID: 72684
The information on this page was manually curated by our Cybersecurity Intelligence Team.
Inference 0.16.0 updates its dependency 'setuptools' to include a security fix.
Latest version: 0.29.1
With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.
❗ In release `0.16.0` we introduced a bug impacting `workflows` and `inference_sdk`
The mistake was introduced in https://github.com/roboflow/inference/pull/565 and fixed in https://github.com/roboflow/inference/pull/585 (both by PawelPeczek-Roboflow 😢) and caused results to be returned in the wrong order for specific `workflows` blocks:
* blocks with Roboflow models, whenever used with batch input (for instance when a workflow was run against multiple images, or Dynamic Crop was used), returned predictions in an order that did not match the order of the input images
* the same was true for the OpenAI block and the GPT-4V block
* the problem also affected `inference_sdk`, so whenever the client was called with multiple images, results may have been mismatched (see the sketch below this list)
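To illustrate the `inference_sdk` case, here is a minimal sketch of a batch call whose result order was affected by the bug and is guaranteed again after the fix (the image paths and model ID are placeholders):

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    "https://detect.roboflow.com",
    api_key="XXX",
)

# two images submitted in one call; with the fix in place,
# results[0] corresponds to "first.jpg" and results[1] to "second.jpg"
results = client.infer(["first.jpg", "second.jpg"], model_id="some-project/1")
```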
🚀 Added
Next bunch of updates for `workflows` 🥳
⚓ Versioning
From now on, both the Execution Engine and `workflows` blocks are versioned to ensure greater stability across the changes we make to improve the ecosystem. Each workflow definition now declares a `version`, forcing the app to run against a specific version of the Execution Engine. If the declared version is `1.1.0`, the workflow requires Execution Engine `>=1.1.0,<2.0.0`, and we gain the ability to expose multiple major versions of the EE concurrently in the library (doing our best to ensure that within a major version we only add features and keep supporting everything released earlier within the same major). On top of that:
* the block manifest metadata field `name` is now understood as the name of a block family, with an optional additional tag called `version`; we propose the following naming convention for block names: `namespace/family_name@v1`. Thanks to those changes, anyone can maintain multiple versions of the same block (appending new implementations to their plugin), ensuring backwards compatibility across breaking changes
* each block manifest class may optionally expose the class method `get_execution_engine_compatibility(...)`, which is used at load time to ensure that the selected Execution Engine is capable of running the specific block
<details>
<summary> ✋ Example block manifest </summary>
```python
from typing import Literal, Optional

from pydantic import ConfigDict


class BlockManifest(WorkflowBlockManifest):
    model_config = ConfigDict(
        json_schema_extra={
            "name": "My Block",
            "version": "v1",
            ...
        }
    )
    type: Literal["my_namespace/my_block@v1"]
    ...

    @classmethod
    def get_execution_engine_compatibility(cls) -> Optional[str]:
        return ">=1.0.0,<2.0.0"
```
</details>
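For reference, a minimal sketch of where the new `version` field sits in a workflow definition; the `inputs`/`steps`/`outputs` entries are the usual definition sections, elided here:

```python
WORKFLOW_DEFINITION = {
    # pins this definition to Execution Engine >=1.1.0,<2.0.0
    "version": "1.1.0",
    "inputs": [...],
    "steps": [...],
    "outputs": [...],
}
```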
🚨 ⚠️ BREAKING ⚠️ 🚨 Got rid of asyncio in Execution Engine
If you were tired of coroutines performing compute-heavy tasks in `workflows`:

```python
class MyBlock(WorkflowBlock):

    async def run(self):
        pass
```
we have great news: we got rid of asyncio in favour of standard functions and methods, which are much more intuitive in our setup. This is obviously a breaking change for existing blocks, but worry not. Here is an example of what needs to be changed - usually you just need to remove the `async` markers, but sometimes, unfortunately, pieces of asyncio code will need to be recreated.
```python
class MyBlock(WorkflowBlock):

    def run(self):
        pass
```
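Where a block body actually relied on asyncio primitives, an equivalent synchronous construction is needed. Here is a minimal sketch of one way to recreate `asyncio.gather`-style fan-out with a thread pool; the `fetch` helper and the block's input are hypothetical, and `WorkflowBlock` is referenced as in the snippets above:

```python
from concurrent.futures import ThreadPoolExecutor

import requests


def fetch(url: str) -> dict:
    # hypothetical I/O-bound helper that used to be awaited as a coroutine
    return requests.get(url).json()


class MyBlock(WorkflowBlock):

    def run(self, urls: list):
        # previously: results = await asyncio.gather(*(fetch(u) for u in urls))
        with ThreadPoolExecutor() as pool:
            results = list(pool.map(fetch, urls))
        return {"results": results}
```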
Endpoint to expose workflow definition schema
Thanks to EmilyGavrilenko (https://github.com/roboflow/inference/pull/550), the UI is now able to automatically verify syntax errors in workflow definitions.
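A minimal sketch of fetching that schema from a locally running server; the route is an assumption based on the PR above, and 9001 is the default server port:

```python
import requests

# assumed route for the new endpoint on a local `inference` server
response = requests.get("http://localhost:9001/workflows/definition/schema")
schema = response.json()  # JSON schema to validate workflow definitions against
```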
Roboflow Dedicated Deployment is closer and closer 😃
Thanks to PacificDou, the `inference` server is getting ready to support new functionality nicknamed Dedicated Deployment. Stay tuned to learn more details - we can tell you that this is something worth waiting for. You may find some hints [in the PR](https://github.com/roboflow/inference/pull/570).
🔨 Fixed
🚨 ⚠️ BREAKING ⚠️ 🚨 HTTP client of `inference` server changes default behaviour
The default value of the `client_downsizing_disabled` flag was changed from `False` to `True` in release 0.16.0! For clients using models with an input size above 1024x1024 on the hosted platform, this should improve prediction quality (the previous default caused input images to be downsized client-side and then artificially upsized on the server side, with worse image quality). Some clients may prefer the previous setting for speed - for instance when the internet connection is a bottleneck and large images are submitted despite a small model input size.
If you liked the previous behaviour more - simply:
```python
from inference_sdk import InferenceHTTPClient, InferenceConfiguration

client = InferenceHTTPClient(
    "https://detect.roboflow.com",
    api_key="XXX",
).configure(InferenceConfiguration(
    client_downsizing_disabled=False,
))
```
`setuptools` was migrated to a version above `70.0.0` to mitigate a security issue
We've updated the `rf-clip` package to support `setuptools>70.0.0` and bumped the version on the `inference` side.
🌱 Changed
* 📖 Add documentation for ONNXRUNTIME_EXECUTION_PROVIDERS by grzegorz-roboflow in https://github.com/roboflow/inference/pull/562 - see [here](https://inference.roboflow.com/server_configuration/environmental_variables/); a usage sketch follows this list
* 📖 Update docs for easier quickstart by komyg in https://github.com/roboflow/inference/pull/544
* 📖 Add Inference Windows CUDA documentation by capjamesg in https://github.com/roboflow/inference/pull/502
* Add capjamesg to CODEOWNERS by capjamesg in https://github.com/roboflow/inference/pull/564
* Add persistent queue to usage collector by grzegorz-roboflow in https://github.com/roboflow/inference/pull/568
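Regarding the ONNXRUNTIME_EXECUTION_PROVIDERS documentation above, a minimal sketch of overriding the execution providers via the environment; the bracketed list format follows the linked docs page, and the variable must be set before `inference` creates any ONNX session:

```python
import os

# set before importing `inference`, so the ONNX sessions pick it up
os.environ["ONNXRUNTIME_EXECUTION_PROVIDERS"] = (
    "[CUDAExecutionProvider,CPUExecutionProvider]"
)
```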
🏅 New Contributors
* komyg made their first contribution in https://github.com/roboflow/inference/pull/544
**Full Changelog**: https://github.com/roboflow/inference/compare/v0.15.2...v0.16.0