
Releases: roboflow/supervision

supervision-0.16.0

19 Oct 08:26
f34993c

πŸš€ Added

supervision-0.16.0-annotators.mp4
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> halo_annotator = sv.HaloAnnotator()
>>> annotated_frame = halo_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )

🌱 Changed

  • sv.LineZone.trigger now returns Tuple[np.ndarray, np.ndarray]. The first array indicates which detections have crossed the line from outside to inside; the second indicates which have crossed from inside to outside (see the sketch after this list). (#482)
  • Annotator argument name from color_map: str to color_lookup: ColorLookup enum to increase type safety. (#465)
  • sv.MaskAnnotator allowing 2x faster annotation. (#426)
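
Below is a minimal sketch of how the trigger change and the new color_lookup argument can be used; the line coordinates and the choice of annotator are illustrative placeholders, not taken from the release notes.

>>> import supervision as sv

>>> detections = sv.Detections(...)

>>> # hypothetical line position; trigger now returns two arrays marking
>>> # detections that crossed outside-to-inside and inside-to-outside
>>> line_zone = sv.LineZone(
...     start=sv.Point(x=0, y=500),
...     end=sv.Point(x=1280, y=500)
... )
>>> crossed_in, crossed_out = line_zone.trigger(detections)

>>> # color_lookup replaces the old color_map argument
>>> annotator = sv.BoundingBoxAnnotator(color_lookup=sv.ColorLookup.INDEX)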

πŸ› οΈ Fixed

  • Poetry env definition allowing proper local installation. (#477)
  • sv.ByteTrack to return np.array([], dtype=int) when sv.Detections is empty. (#430)
  • Missing prediction handling in the YOLO-NAS detection example added and fixed. (#416)
  • SAM detection demo notebook: MaskAnnotator color_map set to "index". (#416)

πŸ—‘οΈ Deleted

Warning
Deleted sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 as they have been replaced by sv.Detections.from_ultralytics and sv.Classifications.from_ultralytics. (#438)
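
A minimal migration sketch, assuming Ultralytics YOLOv8 weights (the weights path is illustrative):

>>> import cv2
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> image = cv2.imread(...)
>>> model = YOLO('yolov8s.pt')
>>> result = model(image)[0]

>>> # previously: detections = sv.Detections.from_yolov8(result)
>>> detections = sv.Detections.from_ultralytics(result)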

πŸ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @kapter, @keshav278 (Keshav Subramanian), @akashpambhar (Akash Pambhar), @AntonioConsiglio (Antonio Consiglio), @ashishdatta, @mario-dg (Mario da Graca), @jayaBalaR (JAYABALAMBIKA.R), @abhishek7kalra (Abhishek Kalra), @PankajKrana (Pankaj Kumar Rana), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.15.0

05 Oct 07:54
1bddf26

πŸš€ Added

supervision-0.15.0.mp4
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> bounding_box_annotator = sv.BoundingBoxAnnotator()
>>> annotated_frame = bounding_box_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
  • Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision. (#354)
traffic_analysis_result.mov

🌱 Changed

πŸ› οΈ Fixed

πŸ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @Killua7362 (Akshay Bhat), @fcakyon (Fatih C. Akyon), @akashAD98 (Akash A Desai), @Rajarshi-Misra (Rajarshi Misra), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.14.0

31 Aug 13:23
f82f0fa

πŸš€ Added

>>> import cv2
>>> import supervision as sv
>>> import numpy as np
>>> from ultralytics import YOLO

>>> image = cv2.imread(SOURCE_IMAGE_PATH)
>>> model = YOLO(...)

>>> def callback(image_slice: np.ndarray) -> sv.Detections:
...     result = model(image_slice)[0]
...     return sv.Detections.from_ultralytics(result)

>>> slicer = sv.InferenceSlicer(callback = callback)

>>> detections = slicer(image)
inference-slicer.mov
detect-and-track-objects-on-video.mov

🌱 Changed

πŸ› οΈ Fixed

πŸ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.13.0

08 Aug 09:17
4f79d29

πŸš€ Added

>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> dataset = sv.DetectionDataset.from_yolo(...)

>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
...     result = model(image)[0]
...     return sv.Detections.from_yolov8(result)

>>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
...     dataset = dataset,
...     callback = callback
... )

>>> mean_average_precision.map50_95
0.433
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> annotator = sv.BoxAnnotator()

>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
...     results = model(frame)[0]
...     detections = sv.Detections.from_yolov8(results)
...     detections = byte_tracker.update_with_detections(detections=detections)
...     labels = [
...         f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
...         for _, _, confidence, class_id, tracker_id
...         in detections
...     ]
...     return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)

>>> sv.process_video(
...     source_path='...',
...     target_path='...',
...     callback=callback
... )
byte_track_result_small.mp4

πŸ† Contributors

@hardikdava (Hardik Dava), @kirilllzaitsev (Kirill Zaitsev), @onuralpszr (Onuralp SEZER), @dbroboflow, @mayankagarwals (Mayank Agarwal), @danigarciaoca (Daniel M. GarcΓ­a-OcaΓ±a), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

supervision-0.12.0

24 Jul 07:59
800e39d

Warning
With the supervision-0.12.0 release, we are terminating official support for Python 3.7. (#179)

πŸš€ Added

>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> dataset = sv.DetectionDataset.from_yolo(...)

>>> model = YOLO(...)
>>> def callback(image: np.ndarray) -> sv.Detections:
...     result = model(image)[0]
...     return sv.Detections.from_yolov8(result)

>>> confusion_matrix = sv.ConfusionMatrix.benchmark(
...     dataset = dataset,
...     callback = callback
... )

>>> confusion_matrix.matrix
array([
    [0., 0., 0., 0.],
    [0., 1., 0., 1.],
    [0., 1., 1., 0.],
    [1., 1., 0., 0.]
])


🌱 Changed

  • Packaging method from setup.py to pyproject.toml. (#180)

πŸ› οΈ Fixed

πŸ† Contributors

@kirilllzaitsev @hardikdava @onuralpszr @Ucag @SkalskiP @capjamesg

supervision-0.11.1

29 Jun 13:10

πŸ› οΈ Fixed

πŸ† Contributors

@capjamesg @SkalskiP

supervision-0.11.0

28 Jun 21:03

πŸš€ Added

>>> import supervision as sv

>>> ds = sv.DetectionDataset.from_coco(
...     images_directory_path='...',
...     annotations_path='...'
... )

>>> ds.as_coco(
...     images_directory_path='...',
...     annotations_path='...'
... )
>>> import supervision as sv

>>> ds_1 = sv.DetectionDataset(...)
>>> len(ds_1)
100
>>> ds_1.classes
['dog', 'person']

>>> ds_2 = sv.DetectionDataset(...)
>>> len(ds_2)
200
>>> ds_2.classes
['cat']

>>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
>>> len(ds_merged)
300
>>> ds_merged.classes
['cat', 'dog', 'person']


πŸ› οΈ Fixed

  • Incorrect loading of YOLO dataset class names from data.yaml. (#157)
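
For reference, a sketch of loading a YOLO-format dataset whose class names come from data.yaml; the paths and class names below are illustrative placeholders.

>>> import supervision as sv

>>> ds = sv.DetectionDataset.from_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... )
>>> ds.classes
['dog', 'person']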

πŸ† Contributors

@SkalskiP @hardikdava

supervision-0.10.0

14 Jun 14:52
9cff624

πŸš€ Added

>>> import supervision as sv

>>> cs = sv.ClassificationDataset.from_folder_structure(
...     root_directory_path='...'
... )

>>> cs.as_folder_structure(
...     root_directory_path='...'
... )
>>> import supervision as sv

>>> cs = sv.ClassificationDataset(...)
>>> train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)

>>> len(train_cs), len(test_cs)
(700, 300)


🌱 Changed

  • sv.get_video_frames_generator documentation to better describe actual behavior. (#135)


πŸ† Contributors

@capjamesg @dankresio @SkalskiP

supervision-0.9.0

07 Jun 11:17

πŸš€ Added

  • Ability to select sv.Detections by index, list of indexes or slice. Here is an example illustrating the new selection methods. (#118)
>>> import supervision as sv

>>> detections = sv.Detections(...)
>>> len(detections[0])
1
>>> len(detections[[0, 1]])
2
>>> len(detections[0:2])
2


  • Ability to extract masks from YOLOv8 results using sv.Detections.from_yolov8. Here is an example illustrating how to extract boolean masks from the result of the YOLOv8 model inference. (#101)
>>> import cv2
>>> from ultralytics import YOLO
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> image.shape
(640, 640, 3)

>>> model = YOLO('yolov8s-seg.pt')
>>> result = model(image)[0]
>>> detections = sv.Detections.from_yolov8(result)
>>> detections.mask.shape
(2, 640, 640)
  • Ability to crop the image using sv.crop. Here is an example showing how to get a separate crop for each detection in sv.Detections. (#122)
>>> import cv2
>>> import supervision as sv

>>> image = cv2.imread(...)
>>> detections = sv.Detections(...)
>>> len(detections)
2
>>> crops = [
...     sv.crop(image=image, xyxy=xyxy) 
...     for xyxy 
...     in detections.xyxy
... ]
>>> len(crops)
2
  • Ability to conveniently save multiple images into a directory using sv.ImageSink. Here is an example showing how to save every tenth video frame as a separate image. (#120)
>>> import supervision as sv

>>> with sv.ImageSink(target_dir_path='target/directory/path') as sink:
...     for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
...         sink.save_image(image=image)

πŸ› οΈ Fixed

  • Inconvenient handling of sv.PolygonZone coordinates. Now sv.PolygonZone accepts coordinates in the form of [[x1, y1], [x2, y2], ...] that can be both integers and floats. (#106)
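
A minimal sketch of the accepted coordinate format; the polygon vertices, frame resolution, and the frame_resolution_wh argument are assumptions for this release, not taken from the notes above.

>>> import numpy as np
>>> import supervision as sv

>>> # vertices may mix integers and floats
>>> polygon = np.array([[100, 100], [300, 100], [300, 300.5], [100.5, 300]])
>>> zone = sv.PolygonZone(
...     polygon=polygon,
...     frame_resolution_wh=(640, 480)
... )

>>> detections = sv.Detections(...)
>>> in_zone = zone.trigger(detections=detections)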

πŸ† Contributors

@SkalskiP @lomnes-atlast-food @hardikdava

supervision-0.8.0

17 May 19:28

πŸš€ Added

  • Support for dataset inheritance. The current Dataset got renamed to DetectionDataset. Now DetectionDataset inherits from BaseDataset. This change was made to enforce the future consistency of APIs of different types of computer vision datasets. (#100)
  • Ability to save datasets in YOLO format using DetectionDataset.as_yolo. (#100)
>>> import supervision as sv

>>> ds = sv.DetectionDataset(...)
>>> ds.as_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...'
... )
>>> import supervision as sv

>>> ds = sv.DetectionDataset(...)
>>> train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)

>>> len(train_ds), len(test_ds)
(700, 300)

🌱 Changed

  • Default value of approximation_percentage parameter from 0.75 to 0.0 in DetectionDataset.as_yolo and DetectionDataset.as_pascal_voc. (#100)
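
A sketch of restoring the previous behaviour explicitly, assuming approximation_percentage is passed directly to the export call (paths are placeholders):

>>> import supervision as sv

>>> ds = sv.DetectionDataset(...)
>>> ds.as_yolo(
...     images_directory_path='...',
...     annotations_directory_path='...',
...     data_yaml_path='...',
...     approximation_percentage=0.75
... )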


πŸ† Contributors