Releases: roboflow/supervision
supervision-0.16.0
🚀 Added
supervision-0.16.0-annotators.mp4
- `sv.BoxMaskAnnotator` allowing to annotate images and videos with box masks. (#422)
- `sv.HaloAnnotator` allowing to annotate images and videos with a halo effect. (#433)
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- `sv.HeatMapAnnotator` allowing to annotate videos with heat maps. (#466)
- `sv.DotAnnotator` allowing to annotate images and videos with dots. (#492)
- `sv.draw_image` allowing to draw an image onto a given scene with specified opacity and dimensions. (#449)
- `sv.FPSMonitor` for monitoring frames per second (FPS) to benchmark latency (see the sketch after this list). (#280)
- 🤗 Hugging Face Annotators space. (#454)
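A minimal sketch of `sv.FPSMonitor` usage, assuming the 0.16.0-era API where `tick()` registers each processed frame and calling the monitor instance returns the averaged FPS:
>>> import supervision as sv
>>> fps_monitor = sv.FPSMonitor()
>>> for frame in sv.get_video_frames_generator(source_path='source_video.mp4'):
...     fps_monitor.tick()  # register one processed frame
>>> fps = fps_monitor()  # averaged frames per second over the tracked window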
🌱 Changed
- `sv.LineZone.trigger` now returns `Tuple[np.ndarray, np.ndarray]`. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside (see the sketch after this list). (#482)
- Annotator argument name from `color_map: str` to `color_lookup: ColorLookup` enum to increase type safety. (#465)
- `sv.MaskAnnotator` allowing 2x faster annotation. (#426)
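A minimal sketch of unpacking the new return value, assuming a horizontal counting line; the variable names are illustrative:
>>> import supervision as sv
>>> line_zone = sv.LineZone(start=sv.Point(0, 300), end=sv.Point(640, 300))
>>> detections = sv.Detections(...)
>>> crossed_in, crossed_out = line_zone.trigger(detections)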
🛠️ Fixed
- Poetry env definition allowing proper local installation. (#477)
- `sv.ByteTrack` to return `np.array([], dtype=int)` when `sv.Detections` is empty. (#430)
- YOLO-NAS detection missing prediction part added & fixed. (#416)
- SAM detection in the demo notebook fixed with `MaskAnnotator(color_map="index")`, i.e. `color_map` set to `index`. (#416)
🗑️ Deleted
Warning
Deleted `sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` as those are now replaced by `sv.Detections.from_ultralytics` and `sv.Classifications.from_ultralytics`. (#438)
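Migration is a one-line change; a sketch assuming an Ultralytics YOLOv8 checkpoint:
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = ...
>>> model = YOLO('yolov8s.pt')
>>> result = model(image)[0]
>>> detections = sv.Detections.from_ultralytics(result)  # previously: sv.Detections.from_yolov8(result)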
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @kapter, @keshav278 (Keshav Subramanian), @akashpambhar (Akash Pambhar), @AntonioConsiglio (Antonio Consiglio), @ashishdatta, @mario-dg (Mario da Graca), @jayaBalaR (JAYABALAMBIKA.R), @abhishek7kalra (Abhishek Kalra), @PankajKrana (Pankaj Kumar Rana), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.15.0
🚀 Added
supervision-0.15.0.mp4
- `sv.LabelAnnotator` allowing to annotate images and videos with text. (#170)
- `sv.BoundingBoxAnnotator` allowing to annotate images and videos with bounding boxes. (#170)
- `sv.BoxCornerAnnotator` allowing to annotate images and videos with just bounding box corners. (#170)
- `sv.MaskAnnotator` allowing to annotate images and videos with segmentation masks. (#170)
- `sv.EllipseAnnotator` allowing to annotate images and videos with ellipses (sports game style). (#170)
- `sv.CircleAnnotator` allowing to annotate images and videos with circles. (#386)
- `sv.TraceAnnotator` allowing to draw the path of moving objects on videos. (#354)
- `sv.BlurAnnotator` allowing to blur objects on images and videos. (#405)
>>> import supervision as sv
>>> image = ...
>>> detections = sv.Detections(...)
>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
... scene=image.copy(),
... detections=detections
... )
- Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision. (#354)
traffic_analysis_result.mov
🌱 Changed
- `sv.Detections.from_roboflow` no longer requires `class_list` to be specified. The `class_id` value can be extracted directly from the inference response. (#399)
- `sv.VideoSink` now allows customizing the output codec (see the sketch after this list). (#381)
- `sv.InferenceSlicer` can now operate in multithreading mode. (#361)
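A minimal sketch of codec customization; the `'avc1'` value is illustrative, and codec availability depends on your OpenCV build:
>>> import supervision as sv
>>> video_info = sv.VideoInfo.from_video_path(video_path='source_video.mp4')
>>> with sv.VideoSink(target_path='output.mp4', video_info=video_info, codec='avc1') as sink:
...     for frame in sv.get_video_frames_generator(source_path='source_video.mp4'):
...         sink.write_frame(frame=frame)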
🛠️ Fixed
- `sv.Detections.from_deepsparse` to allow processing an empty DeepSparse result object. (#348)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @Killua7362 (Akshay Bhat), @fcakyon (Fatih C. Akyon), @akashAD98 (Akash A Desai), @Rajarshi-Misra (Rajarshi Misra), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.14.0
🚀 Added
- Support for the SAHI inference technique with `sv.InferenceSlicer`. (#282)
>>> import cv2
>>> import supervision as sv
>>> import numpy as np
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)
>>> def callback(image_slice: np.ndarray) -> sv.Detections:
... result = model(image_slice)[0]
... return sv.Detections.from_ultralytics(result)
>>> slicer = sv.InferenceSlicer(callback = callback)
>>> detections = slicer(image)
inference-slicer.mov
- `Detections.from_deepsparse` to enable seamless integration with the DeepSparse framework. (#297)
- `sv.Classifications.from_ultralytics` to enable seamless integration with the Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports (see the sketch after this list). (#281)

Warning
`sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` are now deprecated and will be removed with the `supervision-0.16.0` release.

- First supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision. (#341)
detect-and-track-objects-on-video.mov
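A minimal sketch of the new classifications connector, assuming a YOLOv8 classification checkpoint such as `yolov8n-cls.pt`:
>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO('yolov8n-cls.pt')
>>> result = model(image)[0]
>>> classifications = sv.Classifications.from_ultralytics(result)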
🌱 Changed
- `sv.ClassificationDataset` and `sv.DetectionDataset` now use image path (not image name) as dataset keys. (#296)
🛠️ Fixed
- `Detections.from_roboflow` to filter out polygons with fewer than 3 points. (#300)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.13.0
🚀 Added
- Support for mean average precision (mAP) for object detection models with `sv.MeanAveragePrecision`. (#236)
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> mean_average_precision.map50_95
0.433
- Support for ByteTrack for object tracking with `sv.ByteTrack`. (#256)
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> annotator = sv.BoxAnnotator()
>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
... results = model(frame)[0]
... detections = sv.Detections.from_yolov8(results)
... detections = byte_tracker.update_from_detections(detections=detections)
... labels = [
... f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
... for _, _, confidence, class_id, tracker_id
... in detections
... ]
... return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)
>>> sv.process_video(
... source_path='...',
... target_path='...',
... callback=callback
... )
byte_track_result_small.mp4
- `sv.Detections.from_ultralytics` to enable seamless integration with the Ultralytics framework. This will enable you to use `supervision` with all models that Ultralytics supports. (#222)

Warning
`sv.Detections.from_yolov8` is now deprecated and will be removed with the `supervision-0.15.0` release.

- `sv.Detections.from_paddledet` to enable seamless integration with the PaddleDetection framework. (#191)
- Support for loading PASCAL VOC segmentation datasets with `sv.DetectionDataset`. (#245)
🏆 Contributors
@hardikdava (Hardik Dava), @kirilllzaitsev (Kirill Zaitsev), @onuralpszr (Onuralp SEZER), @dbroboflow, @mayankagarwals (Mayank Agarwal), @danigarciaoca (Daniel M. GarcΓa-OcaΓ±a), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
supervision-0.12.0
Warning
With the `supervision-0.12.0` release, we are terminating official support for Python 3.7. (#179)
🚀 Added
- Initial support for object detection model benchmarking with `sv.ConfusionMatrix`. (#177)
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO
>>> dataset = sv.DetectionDataset.from_yolo(...)
>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
... result = model(image)[0]
... return sv.Detections.from_yolov8(result)
>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
... dataset = dataset,
... callback = callback
... )
>>> confusion_matrix.matrix
array([
[0., 0., 0., 0.],
[0., 1., 0., 1.],
[0., 1., 1., 0.],
[1., 1., 0., 0.]
])
- `Detections.from_mmdetection` to enable seamless integration with the MMDetection framework. (#173)
- Ability to install the package in `headless` or `desktop` mode. (#130)
🌱 Changed
- Packaging method from `setup.py` to `pyproject.toml`. (#180)
🛠️ Fixed
- `sv.DetectionDataset.from_coco` can't be loaded when there are images without annotations. (#188)
- `sv.DetectionDataset.from_yolo` can't load background instances. (#226)
🏆 Contributors
@kirilllzaitsev @hardikdava @onuralpszr @Ucag @SkalskiP @capjamesg
supervision-0.11.1
🛠️ Fixed
- `as_folder_structure` fails to save `sv.ClassificationDataset` when it is the result of inference. (#165)
🏆 Contributors
supervision-0.11.0
🚀 Added
- Ability to load and save `sv.DetectionDataset` in COCO format using `as_coco` and `from_coco` methods. (#150)
>>> import supervision as sv
>>> ds = sv.DetectionDataset.from_coco(
... images_directory_path='...',
... annotations_path='...'
... )
>>> ds.as_coco(
... images_directory_path='...',
... annotations_path='...'
... )
- Ability to merge multiple `sv.DetectionDataset` together using the `merge` method. (#158)
>>> import supervision as sv
>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']
>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']
>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']
- Additional `start` and `end` arguments to `sv.get_video_frames_generator` allowing to generate frames only for a selected part of the video. (#162)
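A minimal sketch of the new arguments; the frame indices are illustrative:
>>> import supervision as sv
>>> for frame in sv.get_video_frames_generator(
...     source_path='source_video.mp4', start=60, end=120
... ):
...     print(frame.shape)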
🛠️ Fixed
- Incorrect loading of YOLO dataset class names from `data.yaml`. (#157)
🏆 Contributors
supervision-0.10.0
🚀 Added
- Ability to load and save `sv.ClassificationDataset` in a folder structure format. (#125)
>>> import supervision as sv
>>> cs = sv.ClassificationDataset.from_folder_structure(
... root_directory_path='...'
... )
>>> cs.as_folder_structure(
... root_directory_path='...'
... )
- Support for `sv.ClassificationDataset.split` allowing to divide `sv.ClassificationDataset` into two parts. (#125)
>>> import supervision as sv
>>> cs = sv.ClassificationDataset(...)
>>> train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_cs), len(test_cs)
(700, 300)
- Ability to extract masks from Roboflow API results using `sv.Detections.from_roboflow` (see the sketch after this list). (#110)
- Supervision Quickstart notebook where you can learn more about Detection, Dataset and Video APIs.
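A sketch of mask extraction, assuming `roboflow_result` holds a segmentation inference response from the Roboflow API and that this release still requires an explicit `class_list`; both names and values are illustrative:
>>> import supervision as sv
>>> detections = sv.Detections.from_roboflow(
...     roboflow_result=roboflow_result,
...     class_list=['person', 'car']
... )
>>> detections.mask  # boolean masks, one per detection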
🌱 Changed
- `sv.get_video_frames_generator` documentation to better describe actual behavior. (#135)
🏆 Contributors
supervision-0.9.0
🚀 Added
- Ability to select `sv.Detections` by index, list of indexes or slice. Here is an example illustrating the new selection methods. (#118)
>>> import supervision as sv
>>> detections = sv.Detections(...)
>>> len(detections[0])
1
>>> len(detections[[0, 1]])
2
>>> len(detections[0:2])
2
- Ability to extract masks from YOLOv8 results using `sv.Detections.from_yolov8`. Here is an example illustrating how to extract boolean masks from the result of the YOLOv8 model inference. (#101)
>>> import cv2
>>> from ultralytics import YOLO
>>> import supervision as sv
>>> image = cv2.imread(...)
>>> image.shape
(640, 640, 3)
>>> model = YOLO('yolov8s-seg.pt')
>>> result = model(image)[0]
>>> detections = sv.Detections.from_yolov8(result)
>>> detections.mask.shape
(2, 640, 640)
- Ability to crop the image using `sv.crop`. Here is an example showing how to get a separate crop for each detection in `sv.Detections`. (#122)
>>> import cv2
>>> import supervision as sv
>>> image = cv2.imread(...)
>>> detections = sv.Detections(...)
>>> len(detections)
2
>>> crops = [
... sv.crop(image=image, xyxy=xyxy)
... for xyxy
... in detections.xyxy
... ]
>>> len(crops)
2
- Ability to conveniently save multiple images into a directory using `sv.ImageSink`. An example shows how to save every tenth video frame as a separate image. (#120)
>>> import supervision as sv
>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
... for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
... sink.save_image(image=image)
🛠️ Fixed
- Inconvenient handling of `sv.PolygonZone` coordinates. Now `sv.PolygonZone` accepts coordinates in the form of `[[x1, y1], [x2, y2], ...]` that can be both integers and floats (see the sketch below). (#106)
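A minimal sketch of the accepted coordinate format, assuming the zone still takes a `frame_resolution_wh` argument as in this release line; the polygon values and resolution are illustrative:
>>> import numpy as np
>>> import supervision as sv
>>> polygon = np.array([[100, 100], [540, 100], [540, 380], [100, 380]])
>>> zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=(640, 480))
>>> detections = sv.Detections(...)
>>> in_zone = zone.trigger(detections=detections)  # boolean flag per detection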
🏆 Contributors
supervision-0.8.0
🚀 Added
- Support for dataset inheritance. The current `Dataset` got renamed to `DetectionDataset`. Now `DetectionDataset` inherits from `BaseDataset`. This change was made to enforce the future consistency of APIs of different types of computer vision datasets. (#100)
- Ability to save datasets in YOLO format using `DetectionDataset.as_yolo`. (#100)
>>> import supervision as sv
>>> ds = sv.DetectionDataset(...)
>>> ds.as_yolo(
... images_directory_path='...',
... annotations_directory_path='...',
... data_yaml_path='...'
... )
- Support for `DetectionDataset.split` allowing to divide `DetectionDataset` into two parts. (#102)
>>> import supervision as sv
>>> ds = sv.DetectionDataset(...)
>>> train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)
>>> len(train_ds), len(test_ds)
(700, 300)
🌱 Changed
- Default value of `approximation_percentage` parameter from `0.75` to `0.0` in `DetectionDataset.as_yolo` and `DetectionDataset.as_pascal_voc`. (#100)