### 🧑‍🍳 Cookbooks
[Supervision Cookbooks](https://supervision.roboflow.com/develop/cookbooks/) - A curated open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for leveraging Supervision alongside diverse Computer Vision models. ([#860](https://github.com/roboflow/supervision/pull/860))
### 🚀 Added
- [`sv.CSVSink`](https://supervision.roboflow.com/develop/detection/tools/save_detections/#supervision.detection.tools.csv_sink.CSVSink) allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file. ([#818](https://github.com/roboflow/supervision/pull/818))
```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```
https://github.com/roboflow/supervision/assets/26109316/621588f9-69a0-44fe-8aab-ab4b0ef2ea1b
- [`sv.JSONSink`](https://supervision.roboflow.com/develop/detection/tools/save_detections/#supervision.detection.tools.json_sink.JSONSink) allowing for the straightforward saving of image, video, or stream inference results in a `.json` file. ([#819](https://github.com/roboflow/supervision/pull/819))
```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```
- [`sv.mask_iou_batch`](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.mask_iou_batch) allowing users to compute Intersection over Union (IoU) for two sets of masks. ([#847](https://github.com/roboflow/supervision/pull/847))
- [`sv.mask_non_max_suppression`](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.mask_non_max_suppression) allowing users to perform Non-Maximum Suppression (NMS) on segmentation predictions. ([#847](https://github.com/roboflow/supervision/pull/847))
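Both utilities operate on sets of boolean masks. As a rough illustration of the underlying computation (a plain NumPy sketch for a single pair of masks, not the library implementation), IoU is the ratio of the masks' intersection to their union:

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """IoU of two boolean masks of identical shape (H, W)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / union if union > 0 else 0.0

# two overlapping 3x3 squares on a 4x4 canvas
a = np.zeros((4, 4), dtype=bool)
b = np.zeros((4, 4), dtype=bool)
a[0:3, 0:3] = True  # 9 pixels
b[1:4, 1:4] = True  # 9 pixels, 4 of which overlap with a
print(mask_iou(a, b))  # 4 / (9 + 9 - 4) ≈ 0.2857
```

The batch version generalizes this to an N×M matrix of pairwise IoUs, which is what the mask-based NMS then thresholds to suppress overlapping predictions.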
- [`sv.CropAnnotator`](https://supervision.roboflow.com/develop/annotators/#supervision.annotators.core.CropAnnotator) allowing users to annotate the scene with scaled-up crops of detections. ([#888](https://github.com/roboflow/supervision/pull/888))
```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")
result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
https://github.com/roboflow/supervision/assets/26109316/0a5b67ce-55e7-4e26-9495-a68f9ad97ec7
### 🌱 Changed
- [`sv.ByteTrack.reset`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack.reset) allowing users to clear the tracker's state, enabling the processing of multiple video files in sequence. ([#827](https://github.com/roboflow/supervision/pull/827))
- [`sv.LineZoneAnnotator`](https://supervision.roboflow.com/develop/detection/tools/line_zone/#supervision.detection.line_zone.LineZone) allowing users to hide the in/out counts using the `display_in_count` and `display_out_count` properties. ([#802](https://github.com/roboflow/supervision/pull/802))
- [`sv.ByteTrack`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack) input arguments and docstrings updated to improve readability and ease of use. ([#787](https://github.com/roboflow/supervision/pull/787))
> [!WARNING]
> The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `sv.ByteTrack` are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
- [`sv.PolygonZone`](https://supervision.roboflow.com/develop/detection/tools/polygon_zone/#supervision.detection.tools.polygon_zone.PolygonZone) now accepting a list of specific box anchors that must be inside the zone for a detection to be counted. ([#910](https://github.com/roboflow/supervision/pull/910))
> [!WARNING]
> The `triggering_position` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.
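The new behaviour can be pictured as requiring every listed anchor point of a box to fall inside the polygon before the detection counts as in-zone. A hypothetical pure-Python sketch of that test (`point_in_polygon` and `box_triggers` are illustrative helpers, not library code; the point test is standard ray casting):

```python
def point_in_polygon(point, polygon):
    """Ray-casting point-in-polygon test for a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def box_triggers(box, polygon, anchors=("bottom_center",)):
    """True only if ALL requested anchors of box (x1, y1, x2, y2) lie in the zone."""
    x1, y1, x2, y2 = box
    anchor_points = {
        "center": ((x1 + x2) / 2, (y1 + y2) / 2),
        "bottom_center": ((x1 + x2) / 2, y2),
        "top_center": ((x1 + x2) / 2, y1),
    }
    return all(point_in_polygon(anchor_points[a], polygon) for a in anchors)

zone = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(box_triggers((2, 2, 4, 4), zone, anchors=("center", "bottom_center")))    # True
print(box_triggers((8, 8, 12, 12), zone, anchors=("center", "bottom_center")))  # False
```

Requiring multiple anchors (rather than a single triggering position) makes zone counts stricter for boxes that straddle the polygon boundary.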
- Annotators now support Pillow images. Every supervision annotator accepts an image as either a NumPy array or a Pillow `Image`, automatically detects its type, draws annotations, and returns the output in the same format as the input. ([#875](https://github.com/roboflow/supervision/pull/875))
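The pattern behind this can be sketched as a small dispatch wrapper (an illustrative example assuming NumPy and Pillow, not the library's internals): convert a Pillow input to an array, draw on the array, then convert back so the caller gets the same type out.

```python
import numpy as np
from PIL import Image

def annotate(scene):
    """Accept a NumPy array or Pillow Image; return the same type annotated."""
    is_pil = isinstance(scene, Image.Image)
    array = np.asarray(scene).copy() if is_pil else scene
    # stand-in for real drawing: paint a 10x10 white square in the corner
    array[:10, :10] = 255
    return Image.fromarray(array) if is_pil else array

np_out = annotate(np.zeros((64, 64, 3), dtype=np.uint8))
pil_out = annotate(Image.new("RGB", (64, 64)))
print(type(np_out).__name__, type(pil_out).__name__)  # ndarray Image
```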
### 🛠️ Fixed
- [`sv.DetectionsSmoother`](https://supervision.roboflow.com/develop/detection/tools/smoother/#supervision.detection.tools.smoother.DetectionsSmoother) removing `tracker_id` from `sv.Detections`. ([#944](https://github.com/roboflow/supervision/pull/944))
- [`sv.DetectionDataset`](https://supervision.roboflow.com/develop/datasets/#supervision.dataset.core.DetectionDataset) which, after changes introduced in `supervision-0.18.0`, failed to load datasets in YOLO, PASCAL VOC, and COCO formats.
### 🏆 Contributors
onuralpszr (Onuralp SEZER), LinasKo (Linas Kondrackis), LeviVasconcelos (Levi Vasconcelos), AdonaiVera (Adonai Vera), xaristeidou (Christoforos Aristeidou), Kadermiyanyedi (Kader Miyanyedi), NickHerrig (Nick Herrig), PacificDou (Shuyang Dou), iamhatesz (Tomasz Wrona), capjamesg (James Gallagher), sansyo, SkalskiP (Piotr Skalski)