v0.9.17

@PawelPeczek-Roboflow PawelPeczek-Roboflow released this 15 Mar 13:50
a5dc38a

🚀 Added

YOLOWorld - new versions and Roboflow hosted inference 🤯

The inference package now supports 5 new versions of the YOLOWorld model: x, v2-s, v2-m, v2-l, and v2-x. Versions with the v2 prefix offer better performance than the previously published ones.

To use YOLOWorld in inference, use the following model_id: yolo_world/<version>, substituting <version> with one of [s, m, l, x, v2-s, v2-m, v2-l, v2-x].
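The model_id convention above can be sketched as a small helper. This is a hypothetical helper for illustration only; just the `yolo_world/<version>` format and the version list come from the release notes:

```python
# Versions listed in the release notes.
SUPPORTED_VERSIONS = ["s", "m", "l", "x", "v2-s", "v2-m", "v2-l", "v2-x"]


def yolo_world_model_id(version: str) -> str:
    """Build a model_id like 'yolo_world/v2-l' from a bare version string."""
    if version not in SUPPORTED_VERSIONS:
        raise ValueError(f"Unsupported YOLOWorld version: {version!r}")
    return f"yolo_world/{version}"


print(yolo_world_model_id("v2-l"))  # yolo_world/v2-l
```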

You can use the models in different contexts:

Roboflow hosted inference - easiest way to get your predictions 💥

💡 Please make sure you have inference-sdk installed

If you do not have the whole inference package installed, you will need to install at least inference-sdk:

pip install inference-sdk
💡 You need a Roboflow account to use our hosted platform
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="https://infer.roboflow.com", api_key="<YOUR_ROBOFLOW_API_KEY>")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)
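As a sketch of what you can do with the returned predictions: the exact response keys below (a `predictions` list with center-based box coordinates, a confidence score, and a class name) are an assumption for illustration, not taken from the release notes.

```python
# Hypothetical sample mirroring the assumed response shape of the client call.
sample_results = {
    "predictions": [
        {"x": 320.0, "y": 240.0, "width": 100.0, "height": 200.0,
         "confidence": 0.91, "class": "person"},
        {"x": 80.0, "y": 60.0, "width": 40.0, "height": 30.0,
         "confidence": 0.42, "class": "dog"},
    ]
}


def to_xyxy(pred):
    """Convert a center-based box to (x_min, y_min, x_max, y_max)."""
    return (
        pred["x"] - pred["width"] / 2,
        pred["y"] - pred["height"] / 2,
        pred["x"] + pred["width"] / 2,
        pred["y"] + pred["height"] / 2,
    )


boxes = [(p["class"], to_xyxy(p)) for p in sample_results["predictions"]]
print(boxes)
```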

Self-hosted inference server

💡 Please remember to clean up the old version of the Docker image

If you have used the inference server before, please run:

docker rmi roboflow/roboflow-inference-server-cpu:latest

# or, if you have GPU on the machine
docker rmi roboflow/roboflow-inference-server-gpu:latest

to make sure the newest version of the image is pulled.

💡 Please make sure you run the server and have the SDK installed

If you do not have the whole inference package installed, you will need to install at least inference-cli and inference-sdk:

pip install inference-sdk inference-cli

Make sure you start a local instance of the inference server before running the code:

inference server start
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://127.0.0.1:9001")
image = cv2.imread("<path_to_your_image>")
results = client.infer_from_yolo_world(
    image,
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    model_version="s",  # <-- you do not need to provide `yolo_world/` prefix here
)

In inference Python package

💡 Please remember to install inference with the yolo-world extras
pip install "inference[yolo-world]"
import cv2
from inference.models import YOLOWorld

image = cv2.imread("<path_to_your_image>")
model = YOLOWorld(model_id="yolo_world/s")
results = model.infer(
    image, 
    ["person", "backpack", "dog", "eye", "nose", "ear", "tongue"]
)
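A common next step after open-vocabulary inference is thresholding the detections for the prompts listed above. This is a simplified sketch over a hypothetical list of (class, confidence) pairs; the real object returned by model.infer() has a richer structure:

```python
from collections import Counter

# Hypothetical detections for the prompt list used above; values are made up
# purely to illustrate confidence thresholding.
detections = [("person", 0.88), ("backpack", 0.35), ("dog", 0.72), ("nose", 0.15)]


def filter_by_confidence(dets, threshold=0.5):
    """Drop low-confidence detections and count survivors per class."""
    kept = [(name, score) for name, score in dets if score >= threshold]
    return kept, Counter(name for name, _ in kept)


kept, counts = filter_by_confidence(detections)
print(counts)  # Counter({'person': 1, 'dog': 1})
```

Open-vocabulary models can emit low-confidence hits for unusual prompts (e.g. "tongue"), so tuning this threshold per prompt set is often worthwhile.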

🌱 Changed

New Contributors

Full Changelog: v0.9.16...v0.9.17