supervision-0.14.0
Added
- Support for SAHI inference technique with sv.InferenceSlicer. (#282)
>>> import cv2
>>> import supervision as sv
>>> import numpy as np
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)
>>> def callback(image_slice: np.ndarray) -> sv.Detections:
...     # run the model on a single slice and convert the result for supervision
...     result = model(image_slice)[0]
...     return sv.Detections.from_ultralytics(result)
>>> slicer = sv.InferenceSlicer(callback=callback)
>>> detections = slicer(image)
[video: inference-slicer.mov]
- Detections.from_deepsparse to enable seamless integration with the DeepSparse framework. (#297)
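A minimal sketch of how the DeepSparse connector can be wired up; the model zoo stub and image path below are illustrative placeholders, not values prescribed by this release:
>>> import supervision as sv
>>> from deepsparse import Pipeline
>>> # build a DeepSparse YOLO pipeline (model stub is a placeholder)
>>> yolo_pipeline = Pipeline.create(
...     task="yolo",
...     model_path="zoo:cv/detection/yolov5-l/pytorch/ultralytics/coco/pruned80_quant-none"
... )
>>> # run inference on an image and convert the output for supervision
>>> pipeline_outputs = yolo_pipeline(images=["image.png"])
>>> detections = sv.Detections.from_deepsparse(pipeline_outputs)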
- sv.Classifications.from_ultralytics to enable seamless integration with the Ultralytics framework, letting you use supervision with all models that Ultralytics supports. (#281)
Warning: sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 are now deprecated and will be removed with the supervision-0.16.0 release.
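A minimal sketch of the classification connector; the image path and the yolov8n-cls.pt checkpoint are assumptions used only for illustration:
>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread("image.png")       # placeholder image path
>>> model = YOLO("yolov8n-cls.pt")        # any Ultralytics classification model
>>> result = model(image)[0]
>>> classifications = sv.Classifications.from_ultralytics(result)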
- First supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision. (#341)
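A rough sketch of what such a detect-and-track pipeline can look like; the paths, weights, and use of Ultralytics' built-in tracker are assumptions rather than the exact contents of the shipped script:
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> model = YOLO("yolov8n.pt")                 # placeholder weights
>>> box_annotator = sv.BoxAnnotator()
>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
...     # track objects across frames with Ultralytics' built-in tracker
...     result = model.track(frame, persist=True)[0]
...     detections = sv.Detections.from_ultralytics(result)
...     # draw boxes on a copy of the frame and return it for the output video
...     return box_annotator.annotate(scene=frame.copy(), detections=detections)
>>> sv.process_video(
...     source_path="input.mp4",               # placeholder paths
...     target_path="output.mp4",
...     callback=callback
... )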
[video: detect-and-track-objects-on-video.mov]
Changed
- sv.ClassificationDataset and sv.DetectionDataset now use image paths (not image names) as dataset keys. (#296)
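For illustration, a minimal sketch assuming a YOLO-format dataset on disk; the directory and file names are placeholders:
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_yolo(
...     images_directory_path="dataset/images",
...     annotations_directory_path="dataset/labels",
...     data_yaml_path="dataset/data.yaml"
... )
>>> next(iter(ds.images))  # keys are now image paths, e.g. 'dataset/images/0001.jpg'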
Fixed
- Detections.from_roboflow to filter out polygons with fewer than 3 points. (#300)
Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)