Supervision

Latest version: v0.20.0

0.14.0

- Support for object tracking with [`sv.ByteTrack`](https://roboflow.github.io/supervision/tracker/core/#bytetrack). ([256](https://github.com/roboflow/supervision/pull/256))

```python
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> annotator = sv.BoxAnnotator()

>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
...     results = model(frame)[0]
...     detections = sv.Detections.from_yolov8(results)
...     detections = byte_tracker.update_with_detections(detections=detections)
...     labels = [
...         f"{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
...         for _, _, confidence, class_id, tracker_id
...         in detections
...     ]
...     return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)

>>> sv.process_video(
...     source_path='...',
...     target_path='...',
...     callback=callback
... )
```

https://github.com/roboflow/supervision/assets/26109316/d5d393f5-e577-474a-bc8c-82483ef8a578

- [`sv.Detections.from_ultralytics`](https://roboflow.github.io/supervision/detection/core/#supervision.detection.core.Detections.from_ultralytics) to enable seamless integration with [Ultralytics](https://github.com/ultralytics/ultralytics) framework. This will enable you to use `supervision` with all [models](https://docs.ultralytics.com/models/) that Ultralytics supports. ([#222](https://github.com/roboflow/supervision/pull/222))

> **Warning**
> [`sv.Detections.from_yolov8`](https://roboflow.github.io/supervision/detection/core/#supervision.detection.core.Detections.from_yolov8) is now deprecated and will be removed in the `supervision-0.15.0` release.
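
A minimal sketch of the new connector (the checkpoint name and image path are illustrative):

```python
import cv2
import supervision as sv
from ultralytics import YOLO

# any Ultralytics-supported checkpoint works; 'yolov8n.pt' is illustrative
model = YOLO('yolov8n.pt')
image = cv2.imread('image.png')

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)
```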

- [`sv.Detections.from_paddledet`](https://roboflow.github.io/supervision/detection/core/#supervision.detection.core.Detections.from_paddledet) to enable seamless integration with [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection) framework. ([#191](https://github.com/roboflow/supervision/pull/191))

- Support for loading PASCAL VOC segmentation datasets with [`sv.DetectionDataset.from_pascal_voc`](https://roboflow.github.io/supervision/dataset/core/#supervision.dataset.core.DetectionDataset.from_pascal_voc). ([245](https://github.com/roboflow/supervision/pull/245))
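
A minimal sketch, assuming a local dataset laid out in PASCAL VOC format (directory paths are illustrative):

```python
import supervision as sv

ds = sv.DetectionDataset.from_pascal_voc(
    images_directory_path='dataset/images',
    annotations_directory_path='dataset/annotations',
    force_masks=True  # parse segmentation masks in addition to boxes
)
```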

πŸ† Contributors

hardikdava (Hardik Dava), kirilllzaitsev (Kirill Zaitsev), onuralpszr (Onuralp SEZER), dbroboflow, mayankagarwals (Mayank Agarwal), danigarciaoca (Daniel M. GarcΓ­a-OcaΓ±a), capjamesg (James Gallagher), SkalskiP (Piotr Skalski)

0.20.0

πŸš€ Added

- [`sv.KeyPoints`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints) to provide initial support for pose estimation and broader keypoint detection models. ([1128](https://github.com/roboflow/supervision/pull/1128))

- [`sv.EdgeAnnotator`](https://supervision.roboflow.com/develop/keypoint/annotators/#supervision.keypoint.annotators.EdgeAnnotator) and [`sv.VertexAnnotator`](https://supervision.roboflow.com/develop/keypoint/annotators/#supervision.keypoint.annotators.VertexAnnotator) to enable rendering of results from keypoint detection models. ([1128](https://github.com/roboflow/supervision/pull/1128))

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), keypoints)
```

![edge-annotator-example](https://github.com/roboflow/supervision/assets/26109316/eefdd879-4949-4ea8-aec6-fa5273c5316d)

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

vertex_annotator = sv.VertexAnnotator(color=sv.Color.GREEN, radius=10)
annotated_image = vertex_annotator.annotate(image.copy(), keypoints)
```

![vertex-annotator-example](https://github.com/roboflow/supervision/assets/26109316/e931979b-c46d-4ad0-b474-19ebbf167895)

🌱 Changed

- [`sv.LabelAnnotator`](https://supervision.roboflow.com/develop/annotators/#supervision.annotators.core.LabelAnnotator) by adding an additional `corner_radius` argument that allows for rounding the corners of the bounding box. ([1037](https://github.com/roboflow/supervision/pull/1037))
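
For example (the radius value is illustrative):

```python
import supervision as sv

# rounds the corners of the label's background box
label_annotator = sv.LabelAnnotator(corner_radius=10)
```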

- [`sv.PolygonZone`](https://supervision.roboflow.com/develop/detection/tools/polygon_zone/#supervision.detection.tools.polygon_zone.PolygonZone) such that the `frame_resolution_wh` argument is no longer required to initialize `sv.PolygonZone`. ([1109](https://github.com/roboflow/supervision/pull/1109))

> [!WARNING]
> The `frame_resolution_wh` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.24.0`.
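
A minimal sketch of the simplified initialization (polygon vertices are illustrative):

```python
import numpy as np
import supervision as sv

polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)  # frame_resolution_wh no longer needed
```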

- [`sv.get_polygon_center`](https://supervision.roboflow.com/develop/utils/geometry/#supervision.geometry.core.utils.get_polygon_center) to calculate a more accurate polygon centroid. ([1084](https://github.com/roboflow/supervision/pull/1084))
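
For example, on a toy square whose centroid is unambiguous:

```python
import numpy as np
import supervision as sv

polygon = np.array([[0, 0], [100, 0], [100, 100], [0, 100]])
center = sv.get_polygon_center(polygon=polygon)  # sv.Point(x=50, y=50)
```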

- [`sv.Detections.from_transformers`](https://supervision.roboflow.com/develop/detection/core/#supervision.detection.core.Detections.from_transformers) by adding support for Transformers segmentation models and extracting class names. ([1069](https://github.com/roboflow/supervision/pull/1069))

```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
    outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```

πŸ› οΈ Fixed

- [`sv.ByteTrack.update_with_detections`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack.update_with_detections) which was removing segmentation masks while tracking. Now, `ByteTrack` can be used alongside segmentation models. ([787](https://github.com/roboflow/supervision/pull/787))

πŸ† Contributors

onuralpszr (Onuralp SEZER), rolson24 (Raif Olson), xaristeidou (Christoforos Aristeidou), jeslinpjames (Jeslin P James), Griffin-Sullivan (Griffin Sullivan), PawelPeczek-Roboflow (PaweΕ‚ PΔ™czek), pirnerjonas (Jonas Pirner), sharingan000, macc-n, LinasKo (Linas Kondrackis), SkalskiP (Piotr Skalski)

0.19.0

πŸ§‘β€πŸ³ Cookbooks

[Supervision Cookbooks](https://supervision.roboflow.com/develop/cookbooks/) - A curated open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for leveraging Supervision alongside diverse Computer Vision models. ([#860](https://github.com/roboflow/supervision/pull/860))

πŸš€ Added

- [`sv.CSVSink`](https://supervision.roboflow.com/develop/detection/tools/save_detections/#supervision.detection.tools.csv_sink.CSVSink) allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file. ([818](https://github.com/roboflow/supervision/pull/818))

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```

https://github.com/roboflow/supervision/assets/26109316/621588f9-69a0-44fe-8aab-ab4b0ef2ea1b

- [`sv.JSONSink`](https://supervision.roboflow.com/develop/detection/tools/save_detections/#supervision.detection.tools.json_sink.JSONSink) allowing for the straightforward saving of image, video, or stream inference results in a `.json` file. ([819](https://github.com/roboflow/supervision/pull/819))

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```

- [`sv.mask_iou_batch`](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.mask_iou_batch) allowing to compute Intersection over Union (IoU) of two sets of masks. ([847](https://github.com/roboflow/supervision/pull/847))
- [`sv.mask_non_max_suppression`](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.mask_non_max_suppression) allowing to perform Non-Maximum Suppression (NMS) on segmentation predictions. ([847](https://github.com/roboflow/supervision/pull/847))
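
A small sketch of the IoU utility on toy boolean masks (shapes and values are illustrative; `sv.mask_non_max_suppression` follows the same `(N, H, W)` mask convention):

```python
import numpy as np
import supervision as sv

# (N, H, W) boolean mask stacks
masks_true = np.zeros((2, 4, 4), dtype=bool)
masks_true[0, :2, :] = True   # top half
masks_true[1, 2:, :] = True   # bottom half
masks_detection = np.zeros((1, 4, 4), dtype=bool)
masks_detection[0, :2, :] = True  # identical to the first true mask

iou = sv.mask_iou_batch(masks_true, masks_detection)
print(iou)  # [[1.], [0.]]
```
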
- [`sv.CropAnnotator`](https://supervision.roboflow.com/develop/annotators/#supervision.annotators.core.CropAnnotator) allowing users to annotate the scene with scaled-up crops of detections. ([888](https://github.com/roboflow/supervision/pull/888))

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

https://github.com/roboflow/supervision/assets/26109316/0a5b67ce-55e7-4e26-9495-a68f9ad97ec7

🌱 Changed

- [`sv.ByteTrack.reset`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack.reset) allowing users to clear trackers state, enabling the processing of multiple video files in sequence. ([827](https://github.com/roboflow/supervision/pull/827))
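
A sketch of processing multiple files in sequence (paths are illustrative):

```python
import supervision as sv

tracker = sv.ByteTrack()

for video_path in ['first.mp4', 'second.mp4']:
    tracker.reset()  # clear tracker state before each new video
    for frame in sv.get_video_frames_generator(video_path):
        ...  # run the model, then tracker.update_with_detections(...)
```
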
- [`sv.LineZoneAnnotator`](https://supervision.roboflow.com/develop/detection/tools/line_zone/#supervision.detection.line_zone.LineZone) allowing to hide in/out count using `display_in_count` and `display_out_count` properties. ([802](https://github.com/roboflow/supervision/pull/802))
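
For example (assuming the defaults show both counters):

```python
import supervision as sv

# draw the line but hide the "out" counter
line_annotator = sv.LineZoneAnnotator(display_out_count=False)
```
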
- [`sv.ByteTrack`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack) input arguments and docstrings updated to improve readability and ease of use. ([787](https://github.com/roboflow/supervision/pull/787))

> [!WARNING]
> The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `sv.ByteTrack` are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.

- [`sv.PolygonZone`](https://supervision.roboflow.com/develop/detection/tools/polygon_zone/#supervision.detection.tools.polygon_zone.PolygonZone) to now accept a list of specific box anchors that must be inside the zone for a detection to be counted. ([910](https://github.com/roboflow/supervision/pull/910))

> [!WARNING]
> The `triggering_position` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.
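
A minimal sketch (vertices are illustrative; `frame_resolution_wh` is omitted, which assumes `supervision-0.20.0` or newer):

```python
import numpy as np
import supervision as sv

polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(
    polygon=polygon,
    triggering_anchors=[sv.Position.BOTTOM_CENTER],
)
```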

- Annotators adding support for Pillow images. All supervision Annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input. ([875](https://github.com/roboflow/supervision/pull/875))
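
A sketch of the Pillow round-trip (the detections here are a placeholder):

```python
import supervision as sv
from PIL import Image

image = Image.open(<SOURCE_IMAGE_PATH>)
detections = sv.Detections(...)  # placeholder: output of any supported connector

# a Pillow image in yields a Pillow image out
annotated_image = sv.BoxAnnotator().annotate(scene=image, detections=detections)
```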

πŸ› οΈ Fixed

- [`sv.DetectionsSmoother`](https://supervision.roboflow.com/develop/detection/tools/smoother/#supervision.detection.tools.smoother.DetectionsSmoother) removing `tracking_id` from `sv.Detections`. ([944](https://github.com/roboflow/supervision/pull/944))
- [`sv.DetectionDataset`](https://supervision.roboflow.com/develop/datasets/#supervision.dataset.core.DetectionDataset) which, after changes introduced in `supervision-0.18.0`, failed to load datasets in YOLO, PASCAL VOC, and COCO formats.

πŸ† Contributors

onuralpszr (Onuralp SEZER), LinasKo (Linas Kondrackis), LeviVasconcelos (Levi Vasconcelos), AdonaiVera (Adonai Vera), xaristeidou (Christoforos Aristeidou), Kadermiyanyedi (Kader Miyanyedi), NickHerrig (Nick Herrig), PacificDou (Shuyang Dou), iamhatesz (Tomasz Wrona), capjamesg (James Gallagher), sansyo, SkalskiP (Piotr Skalski)

0.18.0

πŸš€ Added

- [`sv.PercentageBarAnnotator`](https://supervision.roboflow.com/annotators/#percentagebarannotator) allowing to annotate images and videos with percentage values representing confidence or other custom property. ([720](https://github.com/roboflow/supervision/pull/720))

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

percentage_bar_annotator = sv.PercentageBarAnnotator()
annotated_frame = percentage_bar_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

![percentage-bar-annotator-example-purple](https://github.com/roboflow/supervision/assets/26109316/6ef1fa4f-8587-4982-b225-4ae355806f93)

- [`sv.RoundBoxAnnotator`](https://supervision.roboflow.com/annotators/#roundboxannotator) allowing to annotate images and videos with rounded corners bounding boxes. ([702](https://github.com/roboflow/supervision/pull/702))
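
Usage mirrors the other annotators:

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

round_box_annotator = sv.RoundBoxAnnotator()
annotated_frame = round_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
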
- [`sv.DetectionsSmoother`](https://supervision.roboflow.com/detection/tools/smoother/#detection-smoother) allowing for smoothing detections over multiple frames in video tracking. ([696](https://github.com/roboflow/supervision/pull/696))

https://github.com/roboflow/supervision/assets/26109316/4dd703ad-ffba-492b-97ff-1be84e237e83

- [`sv.OrientedBoxAnnotator`](https://supervision.roboflow.com/annotators/#orientedboxannotator) allowing to annotate images and videos with OBB (Oriented Bounding Boxes). ([770](https://github.com/roboflow/supervision/pull/770))

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

![oriented-box-annotator](https://github.com/roboflow/supervision/assets/26109316/ab5b608c-ad76-4c35-ba44-830886cc862c)

- [`sv.ColorPalette.from_matplotlib`](https://supervision.roboflow.com/draw/color/#supervision.draw.color.ColorPalette.from_matplotlib) allowing users to create a `sv.ColorPalette` instance from a Matplotlib color palette. ([769](https://github.com/roboflow/supervision/pull/769))

```python
import supervision as sv

sv.ColorPalette.from_matplotlib('viridis', 5)
# ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
```

![visualized_color_palette](https://github.com/roboflow/supervision/assets/26109316/9c450e0d-0547-4e05-9ac4-e8c625783a17)

🌱 Changed

- [`sv.Detections.from_ultralytics`](https://supervision.roboflow.com/detection/core/#supervision.detection.core.Detections.from_ultralytics) adding support for OBB (Oriented Bounding Boxes). ([770](https://github.com/roboflow/supervision/pull/770))
- [`sv.LineZone`](https://supervision.roboflow.com/detection/tools/line_zone/#linezone) to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement from the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as `sv.Position.BOTTOM_CENTER`, or any other combination of anchors defined as `List[sv.Position]`. ([735](https://github.com/roboflow/supervision/pull/735))
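
A minimal sketch (line coordinates are illustrative):

```python
import supervision as sv

# count only objects whose bottom-center anchor crosses the line
line_zone = sv.LineZone(
    start=sv.Point(0, 300),
    end=sv.Point(640, 300),
    triggering_anchors=[sv.Position.BOTTOM_CENTER],
)
```
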
- [`sv.Detections`](https://supervision.roboflow.com/detection/core/#detections) to support custom payload. ([700](https://github.com/roboflow/supervision/pull/700))
- [`sv.Color`](https://supervision.roboflow.com/draw/color/#color)'s and [`sv.ColorPalette`](https://supervision.roboflow.com/draw/color/#colorpalette)'s method of accessing predefined colors, transitioning from a function-based approach (`sv.Color.red()`) to a more intuitive and conventional property-based method (`sv.Color.RED`). ([756](https://github.com/roboflow/supervision/pull/756)) ([#769](https://github.com/roboflow/supervision/pull/769))

> [!WARNING]
> `sv.ColorPalette.default()` is deprecated and will be removed in `supervision-0.21.0`. Use `sv.ColorPalette.DEFAULT` instead.


- [`sv.ColorPalette.DEFAULT`](https://supervision.roboflow.com/draw/color/#colorpalette) value, giving users a more extensive set of annotation colors. ([769](https://github.com/roboflow/supervision/pull/769))

![default-color-palette](https://github.com/roboflow/supervision/assets/26109316/c37dea57-6193-4b68-bbe6-85827d6e3cbf)

- `sv.Detections.from_roboflow` to [`sv.Detections.from_inference`](https://supervision.roboflow.com/detection/core/#supervision.detection.core.Detections.from_inference) streamlining its functionality to be compatible with both the [inference](https://github.com/roboflow/inference) pip package and the Roboflow [hosted API](https://docs.roboflow.com/deploy/hosted-api). ([#677](https://github.com/roboflow/supervision/pull/677))

> [!WARNING]
> `Detections.from_roboflow()` is deprecated and will be removed in `supervision-0.21.0`. Use `Detections.from_inference` instead.

```python
import cv2
import supervision as sv
from inference.models.utils import get_roboflow_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_roboflow_model(model_id="yolov8s-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
```

πŸ› οΈ Fixed

- [`sv.LineZone`](https://supervision.roboflow.com/detection/tools/line_zone/#linezone) functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road. ([735](https://github.com/roboflow/supervision/pull/735))

https://github.com/roboflow/supervision/assets/26109316/412c4d9c-b228-4bcc-a4c7-e6a0c8f2da6e

πŸ† Contributors

onuralpszr (Onuralp SEZER), HinePo (Rafael Levy), xaristeidou (Christoforos Aristeidou), revtheundead (Utku Γ–zbek), paulguerrie (Paul Guerrie), yeldarby (Brad Dwyer), capjamesg (James Gallagher), SkalskiP (Piotr Skalski)

0.17.1

πŸš€ Added

- Support for Python 3.12.

πŸ† Contributors

onuralpszr (Onuralp SEZER), SkalskiP (Piotr Skalski)

0.17.0

πŸš€ Added

- [`sv.PixelateAnnotator`](https://supervision.roboflow.com/annotators/#supervision.annotators.core.PixelateAnnotator) allowing to pixelate objects on images and videos. ([633](https://github.com/roboflow/supervision/pull/633))

https://github.com/roboflow/supervision/assets/26109316/c2d4b3b1-fd19-44bb-94ec-f21b28dfd05f

- [`sv.TriangleAnnotator`](https://supervision.roboflow.com/annotators/#supervision.annotators.core.TriangleAnnotator) allowing to annotate images and videos with triangle markers. ([652](https://github.com/roboflow/supervision/pull/652))
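
Usage mirrors the other annotators:

```python
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> triangle_annotator = sv.TriangleAnnotator()
>>> annotated_frame = triangle_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
```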

- [`sv.PolygonAnnotator`](https://supervision.roboflow.com/annotators/#supervision.annotators.core.PolygonAnnotator) allowing to annotate images and videos with segmentation mask outline. ([602](https://github.com/roboflow/supervision/pull/602))

```python
>>> import supervision as sv

>>> image = ...
>>> detections = sv.Detections(...)

>>> polygon_annotator = sv.PolygonAnnotator()
>>> annotated_frame = polygon_annotator.annotate(
...     scene=image.copy(),
...     detections=detections
... )
```

https://github.com/roboflow/supervision/assets/26109316/c9236bf7-6ba4-4799-bf2a-b5532ad3591b

- [`sv.assets`](https://supervision.roboflow.com/assets/) allowing download of video files that you can use in your demos. ([#476](https://github.com/roboflow/supervision/pull/476))

```python
>>> from supervision.assets import download_assets, VideoAssets
>>> download_assets(VideoAssets.VEHICLES)
"vehicles.mp4"
```


- [`Position.CENTER_OF_MASS`](https://supervision.roboflow.com/geometry/core/#position) allowing to place labels in the center of mass of segmentation masks. ([605](https://github.com/roboflow/supervision/pull/605))
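
For example (assuming `sv.LabelAnnotator`'s `text_position` parameter):

```python
>>> import supervision as sv
>>> label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER_OF_MASS)
```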

- [`sv.scale_boxes`](https://supervision.roboflow.com/detection/utils/#supervision.detection.utils.scale_boxes) allowing to scale [`sv.Detections.xyxy`](https://supervision.roboflow.com/detection/core/#supervision.detection.core.Detections) values. ([651](https://github.com/roboflow/supervision/pull/651))
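
For example, scaling a single box by a factor of 2 around its center:

```python
>>> import numpy as np
>>> import supervision as sv
>>> xyxy = np.array([[10.0, 10.0, 20.0, 20.0]])
>>> sv.scale_boxes(xyxy=xyxy, factor=2.0)  # [[5., 5., 25., 25.]]
```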

- [`sv.calculate_dynamic_text_scale`](https://supervision.roboflow.com/draw/utils/#supervision.draw.utils.calculate_dynamic_text_scale) and [`sv.calculate_dynamic_line_thickness`](https://supervision.roboflow.com/draw/utils/#supervision.draw.utils.calculate_dynamic_line_thickness) allowing text scale and line thickness to match image resolution. ([637](https://github.com/roboflow/supervision/pull/637))
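
A sketch, assuming both helpers take the frame's `resolution_wh`:

```python
>>> import supervision as sv
>>> resolution_wh = (1920, 1080)
>>> text_scale = sv.calculate_dynamic_text_scale(resolution_wh=resolution_wh)
>>> thickness = sv.calculate_dynamic_line_thickness(resolution_wh=resolution_wh)
```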

- [`sv.Color.as_hex`](https://supervision.roboflow.com/draw/color/#supervision.draw.color.Color.as_hex) allowing to extract color value in HEX format. ([620](https://github.com/roboflow/supervision/pull/620))
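
For example:

```python
>>> import supervision as sv
>>> sv.Color(r=255, g=255, b=0).as_hex()
'#ffff00'
```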

- [`sv.Classifications.from_timm`](https://supervision.roboflow.com/classification/core/#supervision.classification.core.Classifications.from_timm) allowing to load classification result from [timm](https://huggingface.co/docs/hub/timm) models. ([#572](https://github.com/roboflow/supervision/pull/572))

- [`sv.Classifications.from_clip`](https://supervision.roboflow.com/classification/core/#supervision.classification.core.Classifications.from_clip) allowing to load classification result from [clip](https://github.com/openai/clip) model. ([#478](https://github.com/roboflow/supervision/pull/478))

- [`sv.Detections.from_azure_analyze_image`](https://supervision.roboflow.com/detection/core/#supervision.detection.core.Detections.from_azure_analyze_image) allowing to load detection results from [Azure Image Analysis](https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/concept-object-detection-40). ([#571](https://github.com/roboflow/supervision/pull/571))

🌱 Changed

- `sv.BoxMaskAnnotator` renamed to [`sv.ColorAnnotator`](https://supervision.roboflow.com/annotators/#supervision.annotators.core.ColorAnnotator). ([646](https://github.com/roboflow/supervision/pull/646))

- [`sv.MaskAnnotator`](https://supervision.roboflow.com/annotators/#supervision.annotators.core.MaskAnnotator) to make it **5x faster**. ([606](https://github.com/roboflow/supervision/pull/606))

![mask_annotator_speed](https://github.com/roboflow/supervision/assets/26109316/8c4b87d1-f257-489e-9e0e-2ffe9ccbc367)

πŸ› οΈ Fixed

- [`sv.DetectionDataset.from_yolo`](https://supervision.roboflow.com/datasets/#supervision.dataset.core.DetectionDataset.from_yolo) to ignore empty lines in annotation files. ([584](https://github.com/roboflow/supervision/pull/584))

- [`sv.BlurAnnotator`](https://supervision.roboflow.com/annotators/#supervision.annotators.core.BlurAnnotator) to trim negative coordinates before blurring detections. ([555](https://github.com/roboflow/supervision/pull/555))

- [`sv.TraceAnnotator`](https://supervision.roboflow.com/annotators/#supervision.annotators.core.TraceAnnotator) to respect trace position. ([511](https://github.com/roboflow/supervision/pull/511))

πŸ† Contributors

onuralpszr (Onuralp SEZER), hugoles (Hugo Dutra), karanjakhar (Karan Jakhar), kim-jeonghyun (Jeonghyun Kim), fdloopes (Felipe Lopes), abhishek7kalra (Abhishek Kalra), SummitStudiosDev, xenteros, capjamesg (James Gallagher), SkalskiP (Piotr Skalski)
