Releases: roboflow/inference

v0.31.1

13 Dec 18:52
be906b2

🔧 Fixed

Full Changelog: v0.31.0...v0.31.1

v0.31.0

13 Dec 18:27
ac4787d

🚀 Added

📏 Easily create embeddings and compare them in Workflows

Thanks to @yeldarby, we now have CLIP Embedding and Cosine Similarity Workflows blocks. Just take a look at what's now possible.

💡 Application ideas

  • Visual Search: Match text queries (e.g., "red shoes") to the most relevant images without training a custom model.
  • Image Deduplication: Identify similar or duplicate images by calculating embeddings and measuring cosine similarity.
  • Zero-Shot Classification: Classify images into categories by comparing their embeddings to pre-defined text labels (e.g., "cat," "dog").
  • Outlier Detection: Identify images that do not match the general trend.
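All of the application ideas above reduce to one operation: comparing embedding vectors with cosine similarity. A minimal plain-Python sketch of that operation (the toy 3-dimensional vectors and the names `query` and `images` are illustrative stand-ins for real CLIP embeddings, which have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    # cos(theta) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" standing in for CLIP outputs.
query = [1.0, 0.0, 1.0]
images = {"img_a": [0.9, 0.1, 0.8], "img_b": [-1.0, 0.5, 0.0]}

# Visual search: pick the image whose embedding best matches the query.
best = max(images, key=lambda k: cosine_similarity(query, images[k]))
print(best)  # img_a
```

The same similarity score drives deduplication (flag pairs above a threshold), zero-shot classification (compare against text-label embeddings), and outlier detection (flag images far from the mean embedding).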

gemini-2.0-flash 🤝 Workflows

Check out the model card and start using the new model - simply point at the new model type in the Google Gemini Workflow block 😄 All thanks to @EmilyGavrilenko

🔥 Recent supervision versions are now supported

For a long time we had an issue with not supporting up-to-date supervision releases. This is no longer the case thanks to @LinasKo and his contribution #881 🙏

🐕‍🦺 React on changes in Workflows

We have a new Delta Filter block that optimizes Workflows by triggering downstream steps only when input values change, reducing redundant processing.

📊 Key Features:

  • Value Change Detection: Triggers actions only when a value changes.
  • Flexibility: Hooks up to changes in numbers, strings, and more.
  • Per-Video Caching: Changes for each video stream or batch element are tracked separately.

💡 Use Case:

  • Detect changes (e.g., people count) in video analysis and trigger downstream actions efficiently.
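The core idea behind the Delta Filter can be sketched in a few lines of plain Python (a hypothetical simplification, not the actual block implementation): cache the last seen value per stream key and fire only on change.

```python
# Hypothetical sketch of the delta-filter idea: remember the last value
# per video-stream key and only "fire" when it changes.
class DeltaFilter:
    def __init__(self):
        self._last = {}  # per-video / per-batch-element cache

    def should_trigger(self, key, value):
        if key in self._last and self._last[key] == value:
            return False  # unchanged -> skip downstream steps
        self._last[key] = value
        return True

f = DeltaFilter()
# e.g. people counts arriving frame by frame from two cameras
events = [("cam-1", 3), ("cam-1", 3), ("cam-1", 4), ("cam-2", 3)]
fired = [(k, v) for k, v in events if f.should_trigger(k, v)]
print(fired)  # [('cam-1', 3), ('cam-1', 4), ('cam-2', 3)]
```

Note how the repeated `("cam-1", 3)` event is suppressed, while the same count on `cam-2` still fires because each key is cached separately.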

🔧 Fixed

  • The confidence threshold was not applied for multi-label classification models. @grzegorz-roboflow fixed the problem in #873
  • Active Learning data collection finally works for multi-label classification models - see @grzegorz-roboflow's work in #874
  • Fixed model_id bug with InferenceAggregator block by @robiscoding in #876
  • Security issue: bumped nanoid from 3.3.7 to 3.3.8 - see #878
  • Fix measurement logic for segmentations in measurement block by @NickHerrig in #872

🚧 Changed

New Contributors

Full Changelog: v0.30.0...v0.31.0

v0.30.0

11 Dec 13:06
25aa233

🚀 Added

✨ Paligemma2 support!

Enhanced model support: We’re excited to introduce Paligemma2 integration, a next-generation model designed for more flexible and efficient inference. This upgrade facilitates smoother handling of multi-modal inputs like images and captions, offering better versatility in machine learning applications. Check out the implementation details and examples in this script to see how to get started.

Change added by @probicheaux in #864

Remaining changes

Full Changelog: v0.29.2...v0.30.0

v0.29.2

05 Dec 18:00
6abcb3c

ultralytics security issue fixed

Caution

Ultralytics maintainers notified the community that the code in the ultralytics 8.3.41 wheel is not what's in GitHub and appears to invoke crypto mining. Users of ultralytics who install 8.3.41 will unknowingly execute an xmrig miner.
Please see this issue for more details

Remaining fixes

Full Changelog: v0.29.1...v0.29.2

v0.29.1

03 Dec 14:25
53f84a1

🛠️ Fixed

python-multipart security issue fixed

Caution

We are removing the following vulnerability detected recently in python-multipart library.

Issue summary
When parsing form data, python-multipart skips line breaks (CR \r or LF \n) in front of the first boundary and any trailing bytes after the last boundary. This happens one byte at a time and emits a log event each time, which may cause excessive logging for certain inputs.

An attacker could abuse this by sending a malicious request with lots of data before the first or after the last boundary, causing high CPU load and stalling the processing thread for a significant amount of time. In case of ASGI application, this could stall the event loop and prevent other requests from being processed, resulting in a denial of service (DoS).

Impact
Applications that use python-multipart to parse form data (or use frameworks that do so) are affected.

Next steps
We advise all inference clients to migrate to version 0.29.1, especially when the inference docker image is in use. Clients using
older versions of the Python package may also upgrade the vulnerable dependency in their environment:

pip install "python-multipart==0.0.19"

Details of the change: #855

Remaining fixes

Full Changelog: v0.29.0...v0.29.1

v0.29.0

29 Nov 17:15
48a8c05

🚀 Added

📧 Slack and Twilio notifications in Workflows

We've just added two notification blocks to the Workflows ecosystem - Slack and Twilio. Now there is nothing stopping you from sending notifications from your Workflows!

slack_notification.mp4

inference-cli 🤝 Workflows

We are happy to share that inference-cli now has a new command - inference workflows - which makes it possible to process data with Workflows without any additional Python scripts needed 😄

🎥 Video files processing

  • Input a video path, specify an output directory, and run any workflow.
  • Frame-by-frame results saved as CSV or JSONL.
  • Your Workflow outputs images? Optionally assemble them into an output video.

🖼️ Process images and directories of images 📂

  • Outputs stored in subdirectories with JSONL/CSV aggregation available.
  • Fault-tolerant processing:
    • ✅ Resume after failure (tracked in logs).
    • 🔄 Option to force re-processing.
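The resume-after-failure idea can be sketched in plain Python (a hypothetical illustration of the pattern, not the CLI's actual implementation; `processed.log`, `already_done` and `process` are made-up names): record each finished file in a log, and skip files already listed on re-run.

```python
import os
import tempfile

# Hypothetical sketch: a log file tracks which inputs finished processing.
log_path = os.path.join(tempfile.mkdtemp(), "processed.log")

def already_done():
    if not os.path.exists(log_path):
        return set()
    with open(log_path) as f:
        return {line.strip() for line in f}

def process(files, force=False):
    done = set() if force else already_done()
    handled = []
    for name in files:
        if name in done:
            continue  # resume: skip work finished before a failure
        handled.append(name)  # ... real processing would happen here ...
        with open(log_path, "a") as log:
            log.write(name + "\n")
    return handled

print(process(["a.jpg", "b.jpg"]))           # first run: both processed
print(process(["a.jpg", "b.jpg", "c.jpg"]))  # re-run: only c.jpg processed
```

Passing `force=True` corresponds to the "force re-processing" option: the log is ignored and everything runs again.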

Review our 📖 docs to discover all options!

👉 Try the command

To try the command, simply run:

pip install inference

inference workflows process-images-directory \
    -i {your_input_directory} \
    -o {your_output_directory} \
    --workspace_name {your-roboflow-workspace-url} \
    --workflow_id {your-workflow-id} \
    --api-key {your_roboflow_api_key}
Screen.Recording.2024-11-26.at.18.19.23.mov

🔑 Secrets provider block in Workflows

Many Workflows blocks require credentials to work correctly, but so far the ecosystem only provided one secure option for passing those credentials - workflow parameters - forcing client applications to manipulate secret values.

Since this is not a handy solution, we decided to create the Environment Secrets Store block, which is capable of fetching credentials from the environment variables of the inference server. Thanks to that, admins can now set up the server and client code does not need to handle secrets ✨

⚠️ Security Notice:

For enhanced security, always use secret providers or Workflow parameters to handle credentials. Hardcoding secrets into your Workflows is strongly discouraged.

🔒 Limitations:

This block is designed for self-hosted inference servers only. Due to security concerns, exporting environment variables is not supported on the hosted Roboflow Platform.
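The underlying pattern is simply reading credentials from the server's environment rather than from the Workflow definition. A hypothetical sketch (the variable name `SLACK_TOKEN` and the helper `fetch_secret` are illustrative, not the block's actual API):

```python
import os

# Illustrative only: a server admin would export SLACK_TOKEN before
# starting the inference server; we set a fake value here for the demo.
os.environ.setdefault("SLACK_TOKEN", "xoxb-demo-not-a-real-token")

def fetch_secret(name):
    # Fetch a credential from the server's environment; fail loudly if
    # the admin has not exported it.
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"Secret {name!r} not exported on this server")
    return value

token = fetch_secret("SLACK_TOKEN")
print(token.startswith("xoxb-"))  # True
```

The point of the design is that the secret value never appears in the Workflow definition or in client code - only the environment variable's name does.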

🌐 OPC Workflow block 📡

The OPC Writer block provides a versatile set of integration options that enable enterprises to seamlessly connect with OPC-compliant systems and incorporate real-time data transfer into their workflows. Here’s how you can leverage the block’s flexibility for various integration scenarios that industry-class solutions require.

✨ Key features

  • Seamless OPC Integration: Easily send data to OPC servers, whether on local networks or cloud environments, ensuring your workflows can interface with industrial control systems, IoT devices, and SCADA systems.
  • Cross-Platform Connectivity: Built with asyncua, the block enables smooth communication across multiple platforms, enabling integration with existing infrastructure and ensuring compatibility with a wide range of OPC standards.

Important

This Workflow block is released under Roboflow Enterprise License and is not available by default on Roboflow Hosted Platform.
Anyone interested in integrating Workflows with industry systems through OPC - please contact Roboflow Sales

See @grzegorz-roboflow's change in #842

🛠️ Fixed

Workflows Execution Engine v1.4.0

  • New Kind: A secret kind for credentials is now available. No action needed for existing blocks, but future blocks should use it for secret parameters.

  • Serialization Fix: Fixed a bug where non-batch outputs weren't being serialized in v1.3.0.

  • Execution Engine Fix: Resolved an issue with empty inputs being passed to downstream blocks. This update ensures smoother workflow execution and may fix previous issues without any changes needed.

See full changelog for more details.

🚧 Changed

Open Workflows on Roboflow Platform

We are moving towards shareable Workflow Definitions on Roboflow Platform - to reflect that @yeldarby made the api_key optional in Workflows Run requests in #843

⛑️ Maintenance

Full Changelog: v0.28.2...v0.29.0

v0.28.2

27 Nov 14:02
3d15bcb

🔧 Fixed issue with inference package installation

On 26.11.2024, version 0.20.4 of the tokenizers library (a transitive dependency of inference) was released, introducing a breaking change for inference clients who use Python 3.8 - causing the following errors while installing recent (and older) versions of inference:

👉 MacOS
Downloading tokenizers-0.20.4.tar.gz (343 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error

× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]

    Cargo, the Rust package manager, is not installed or is not on PATH.
    This package requires Rust and Cargo to compile extensions. Install it through
    the system's package manager or via https://rustup.rs/

    Checking for Rust toolchain....
    [end of output]

note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details
👉 Linux

After installation, the following error was presented

/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1778: in _get_module
  return importlib.import_module("." + module_name, self.__name__)
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/importlib/__init__.py:127: in import_module
  return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
  ???
<frozen importlib._bootstrap>:991: in _find_and_load
  ???
<frozen importlib._bootstrap>:961: in _find_and_load_unlocked
  ???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
  ???
<frozen importlib._bootstrap>:1014: in _gcd_import
  ???
<frozen importlib._bootstrap>:991: in _find_and_load
  ???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
  ???
<frozen importlib._bootstrap>:671: in _load_unlocked
  ???
<frozen importlib._bootstrap_external>:843: in exec_module
  ???
<frozen importlib._bootstrap>:219: in _call_with_frames_removed
  ???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/__init__.py:15: in <module>
  from . import (
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/mt5/__init__.py:36: in <module>
  from ..t5.tokenization_t5_fast import T5TokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py:23: in <module>
  from ...tokenization_utils_fast import PreTrainedTokenizerFast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py:26: in <module>
  import tokenizers.pre_tokenizers as pre_tokenizers_fast
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/__init__.py:78: in <module>
  from .tokenizers import (
E   ImportError: /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get

The above exception was the direct cause of the following exception:
tests/inference/models_predictions_tests/test_owlv2.py:4: in <module>
  from inference.models.owlv2.owlv2 import OwlV2
inference/models/owlv2/owlv2.py:11: in <module>
  from transformers import Owlv2ForObjectDetection, Owlv2Processor
<frozen importlib._bootstrap>:1039: in _handle_fromlist
  ???
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1766: in __getattr__
  module = self._get_module(self._class_to_module[name])
/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1780: in _get_module
  raise RuntimeError(
E   RuntimeError: Failed to import transformers.models.owlv2 because of the following error (look up to see its traceback):
E   /opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/tokenizers/tokenizers.abi3.so: undefined symbol: PyInterpreterState_Get

Caution

We are fixing the problem in inference 0.28.2, but it is not possible to fix older releases - if you need to fix this
in your environment, please modify the build so that installing inference also installs tokenizers<=0.20.3.

pip install inference "tokenizers<=0.20.3"

🔧 Fixed issue with CUDA and stream management API

While running the inference server and using the stream management API to run Workflows against video inside a docker container, it was not possible to use CUDA due to a bug present from the very start of the feature. We are fixing it now.

Full Changelog: v0.28.1...v0.28.2

v0.28.1

25 Nov 11:53
1ef4594

🔧 Fixed broken Workflows loader

Caution

In 0.28.0 we had a bug causing this error:

ModuleNotFoundError: No module named 'inference.core.workflows.core_steps.sinks.roboflow.model_monitoring_inference_aggregator'

We've yanked version 0.28.0 of inference, inference-core, inference-cpu and inference-gpu, and we recommend that clients upgrade.

What's Changed

Full Changelog: v0.28.0...v0.28.1

v0.28.0

22 Nov 13:44
33b17f2

🚀 Added

🎥 New Video Processing Cookbook! 💪

We're excited to introduce a new cookbook showcasing a custom video-processing use case: Creating a Video-Based Fitness Trainer! 🚀 This is not only a really nice example of how to use Roboflow tools, but also a great open-source community contribution from @Matvezy 🥹. Just take a look at the notebook.

gpt_coach_demo.mp4

🎯 Purpose

This cookbook demonstrates how inference enhances foundational models like GPT-4o by adding powerful vision capabilities for accurate, data-driven insights. Perfect for exploring fitness applications or custom video processing workflows.

🔍 What’s inside?

  • 🏃 Body Keypoint Tracking: Use inference to detect and track body keypoints in real time.
  • 📐 Joint Angle Calculation: Automatically compute and annotate joint angles on video frames.
  • 🤖 AI-Powered Fitness Advice: Integrates GPT to analyze movements and provide personalized fitness tips based on video data.
  • 🛠️ Built with supervision: for efficient annotation and processing.
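The joint angle calculation above can be sketched with basic trigonometry (a simplified illustration of the technique, not the notebook's exact code; the keypoint coordinates are toy values):

```python
import math

def joint_angle(a, b, c):
    # Angle (in degrees) at joint b, formed by the two limb segments
    # b->a and b->c, computed from the difference of their atan2 headings.
    ang = math.degrees(
        math.atan2(c[1] - b[1], c[0] - b[0])
        - math.atan2(a[1] - b[1], a[0] - b[0])
    )
    ang = abs(ang)
    return 360 - ang if ang > 180 else ang  # fold reflex angles

# Toy 2D keypoints: hip, knee, ankle for a leg bent at a right angle.
hip, knee, ankle = (0, 0), (0, 1), (1, 1)
print(round(joint_angle(hip, knee, ankle)))  # 90
```

In a real pipeline, `a`, `b`, `c` would come from the keypoint detection model's per-frame output (e.g. hip/knee/ankle coordinates), and the resulting angle can be annotated on the frame and fed to GPT for movement analysis.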

✨ New Workflows Block for Model Monitoring! 📊

We’re thrilled to announce a new block that takes inference data reporting to the next level by integrating seamlessly with Roboflow Model Monitoring - all thanks to @robiscoding 🚀

Take a look at the 📖 documentation to learn more.

🏋️ Why use it?

  • 🏭 Monitor your model as it processes video
  • ⏱️ Track and validate model performance effortlessly over time
  • 🔧 Gain understanding of how to improve your models over time

🔧 Fixed

🏗️ Changed

🏅 New Contributors

Full Changelog: v0.27.0...v0.28.0

v0.27.0

15 Nov 12:44
bcff389

🚀 Added

🧠 Your own fine-tuned Florence 2 in Workflows 🔥

Have you been itching to dive into the world of Vision-Language Models (VLMs)? Maybe you've explored @SkalskiP's incredible tutorial on training your own VLM. Well, now you can take it a step further—train your own VLM directly on the Roboflow platform!

But that’s not all: thanks to @probicheaux, you can seamlessly integrate your VLM into Workflows for real-world applications.

Check out the 📖 docs and try it yourself!

Note

This Workflow block is not available on the Roboflow platform - you need to run an inference server on your own machine (preferably with a GPU).

pip install inference-cli
inference server start

🎨 Classification results visualisation in Workflows

The Workflows ecosystem offers a variety of blocks to visualize model predictions, but we’ve been missing a dedicated option for classification—until now! 🎉

Thanks to the incredible work of @reiffd7, we’re excited to introduce the Classification Label Visualization block to the ecosystem.

Dive in and bring your classification results to life! 🚀

🚧 Changes in ecosystem - Execution Engine v1.3.0 🚧

Tip

Changes introduced in Execution Engine v1.3.0 are non-breaking, but we shipped a couple of nice extensions and we encourage contributors to adopt them.

Full details of the changes and migration guides available here.

⚙️ Kinds with dynamic serializers and deserializers

  • Added serializers/deserializers for each kind, enabling integration with external systems.
  • Updated the Blocks Bundling page to reflect these changes.
  • Enhanced roboflow_core kinds with suitable serializers/deserializers.

See our updated blocks bundling guide for more details.

🆓 Any data can be now a Workflow input

We've added a new Workflows input type - WorkflowBatchInput - which is capable of accepting any kind, unlike previous inputs like WorkflowImage. What's even nicer - you can also specify the dimensionality level for WorkflowBatchInput, basically making it possible to break each workflow down into single steps executed in debug mode.

Take a look at 📖 docs to learn more

🏋️ Easier blocks development

We got tired of wondering whether a specific field in a block manifest should be marked with the StepOutputSelector, WorkflowImageSelector,
StepOutputImageSelector or WorkflowParameterSelector type annotation. That was very confusing and effectively raised the barrier to contributing.

Since selector type annotations are required for the Execution Engine to know that a block defines placeholders for data of a specific kind, we could not eliminate them, but we are making them easier to understand - introducing a generic annotation called Selector(...).

Selector(...) no longer tells the Execution Engine that the block accepts batch-oriented data, so we replaced the old block_manifest.accepts_batch_input() method with two new ones:

  • block_manifest.get_parameters_accepting_batches() - returns the list of params that the WorkflowBlock.run(...) method
    accepts wrapped in a Batch[X] container
  • block_manifest.get_parameters_accepting_batches_and_scalars() - returns the list of params that the WorkflowBlock.run(...) method
    accepts either wrapped in a Batch[X] container or provided as stand-alone scalar values

Tip

To adopt changes while creating new block - visit our updated blocks creation guide.

To migrate existing blocks - take a look at migration guide.

🖌️ Increased JPEG compression quality

WorkflowImageData has a property called base64_image, which is auto-generated from the numpy_image associated with the object. In previous versions of inference the default compression level was 90%; we increased it to 95%. We expect this change to generally improve the quality of images passed between steps, yet there is no guarantee of better results from the models (that depends on how the models were trained). Details of the change: #798

Caution

Small changes in model predictions are expected due to this change - as it may happen that we are passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.

🧠 Change in Roboflow models blocks

We've changed the way Roboflow model blocks work on the Roboflow hosted platform. Previously they used the numpy_image property of WorkflowImageData as the input to InferenceHTTPClient when executing remote calls - which usually meant serializing the numpy image to JPEG and then to base64, even though on the Roboflow hosted platform we usually already had a base64 representation of the image. Effectively we were:

  • slowing down the processing
  • artificially decreasing the quality of images

This is no longer the case: we now only transform the image representation (and apply lossy compression) when needed. Details of the change: #798.

Caution

Small changes in model predictions are expected due to this change - as it may happen that we are passing slightly different JPEG images into the models. If you are negatively affected, please let us know via GH Issues.

🗒️ New kind inference_id

We've identified the need to give semantic meaning to the inference identifiers that external systems use as correlation IDs.
That's why we are introducing a new kind - inference_id.
We encourage block developers to use the new kind.

🗒️ New field available in video_metadata and image kinds

We've added a new optional field to video metadata - measured_fps - take a look at the 📖 docs

🏗️ Changed

🔧 Fixed

Full Changelog: v0.26.1...v0.27.0