🚀 Added
YOLOWorld - new versions and Roboflow hosted inference 🤯
The `inference` package now supports 5 new versions of the YOLOWorld model. We've added versions `x`, `v2-s`, `v2-m`, `v2-l`, and `v2-x`. Versions with the `v2` prefix perform better than the previously published ones.
To use YOLOWorld in `inference`, use the following `model_id`: `yolo_world/<version>`, substituting `<version>` with one of `[s, m, l, x, v2-s, v2-m, v2-l, v2-x]`.
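If you have the full `inference` package installed, the same `model_id` convention applies when loading the model in-process. A minimal sketch (the import path and the `infer(...)` signature are assumptions based on the package's documented API, not this release note):

```python
import cv2

from inference.models.yolo_world import YOLOWorld  # assumed import path

# `model_id` follows the `yolo_world/<version>` convention described above
model = YOLOWorld(model_id="yolo_world/v2-s")

image = cv2.imread("<path_to_your_image>")
# `text` sets the open-vocabulary class list; the parameter name is an assumption
results = model.infer(image, text=["person", "backpack", "dog"], confidence=0.03)
```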
You can use the models in different contexts:
Roboflow hosted `inference` - the easiest way to get your predictions :boom:
<details><summary> 💡 Please make sure you have `inference-sdk` installed </summary>
If you do not have the whole `inference` package installed, you will need to install at least `inference-sdk`:
```bash
pip install inference-sdk
```
</details>
<details><summary> 💡 You need a Roboflow account to use our hosted platform </summary>
* [Create account](https://roboflow.com/)
* [Get your API key](https://docs.roboflow.com/api-reference/authentication)
</details>
```python
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="https://infer.roboflow.com",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # you do not need to provide the `yolo_world/` prefix here
)
```
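The response follows the standard Roboflow object-detection schema. If you post-process with `supervision` (an optional dependency, not part of this release), converting the payload might look like this sketch:

```python
import supervision as sv

# parse the response into a Detections object; `from_inference` expects
# the standard Roboflow object-detection response schema
detections = sv.Detections.from_inference(results)
print(detections.xyxy, detections.confidence)
```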
Self-hosted `inference` server
<details><summary> 💡 Please remember to clean up the old version of the Docker image </summary>
If you have ever used the `inference` server before, please run:
```bash
docker rmi roboflow/roboflow-inference-server-cpu:latest
```
or, if you have a GPU on the machine:
```bash
docker rmi roboflow/roboflow-inference-server-gpu:latest
```
to make sure the newest version of the image is pulled.
</details>
<details><summary> 💡 Please make sure you run the server and have the SDK installed </summary>
If you do not have the whole `inference` package installed, you will need to install at least `inference-cli` and `inference-sdk`:
```bash
pip install inference-sdk inference-cli
```
Make sure you start a local instance of the `inference` server before running the code:
```bash
inference server start
```
</details>
```python
import cv2
from inference_sdk import InferenceHTTPClient

# the call mirrors the hosted example above; http://127.0.0.1:9001 is the
# default address of a server started with `inference server start`
client = InferenceHTTPClient(
    api_url="http://127.0.0.1:9001",
    api_key="<YOUR_ROBOFLOW_API_KEY>",
)

image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # you do not need to provide the `yolo_world/` prefix here
)
```