# Supervision

Latest version: v0.21.0


## 0.3.2

### 🌱 Changed

- Dropped the requirement for `class_id` in `sv.Detections` (https://github.com/roboflow/supervision/pull/50), making the class more flexible

### 🏆 Contributors

- SkalskiP

## 0.3.1

### 🌱 Changed

- `Detections.with_nms` now supports both the class-agnostic and non-class-agnostic case (https://github.com/roboflow/supervision/pull/36)

### 🛠️ Fixed

- `PolygonZone` no longer throws an exception when an object touches the bottom edge of the image (https://github.com/roboflow/supervision/issues/41)
- `Detections.with_nms` no longer throws an exception when `Detections` is empty (https://github.com/roboflow/supervision/issues/42)

### 🏆 Contributors

- SkalskiP

## 0.3.0

### 🚀 Added

New methods in `sv.Detections` API:
- `from_transformers` - converts 🤗 Transformers object-detection results into `sv.Detections`
- `from_detectron2` - converts Detectron2 results into `sv.Detections`
- `from_coco_annotations` - converts COCO annotations into `sv.Detections`
- `area` - dynamically calculated property storing the bounding-box area
- `with_nms` - initial (class-agnostic only) implementation of `sv.Detections` NMS
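
The greedy, class-agnostic NMS that `with_nms` introduced can be illustrated with a standalone NumPy sketch. This is a hypothetical helper for illustration only, not the library's actual implementation; `class_agnostic_nms` and its signature are invented here:

```python
import numpy as np

def class_agnostic_nms(xyxy: np.ndarray, confidence: np.ndarray, iou_threshold: float = 0.5) -> np.ndarray:
    """Return a boolean mask of the boxes kept after greedy NMS, ignoring class."""
    keep = np.zeros(len(xyxy), dtype=bool)
    suppressed = np.zeros(len(xyxy), dtype=bool)
    areas = (xyxy[:, 2] - xyxy[:, 0]) * (xyxy[:, 3] - xyxy[:, 1])
    for i in confidence.argsort()[::-1]:  # highest confidence first
        if suppressed[i]:
            continue
        keep[i] = True
        # IoU of box i against every box
        x1 = np.maximum(xyxy[i, 0], xyxy[:, 0])
        y1 = np.maximum(xyxy[i, 1], xyxy[:, 1])
        x2 = np.minimum(xyxy[i, 2], xyxy[:, 2])
        y2 = np.minimum(xyxy[i, 3], xyxy[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        iou = inter / (areas[i] + areas - inter)
        suppressed |= iou > iou_threshold  # drop heavily overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
mask = class_agnostic_nms(boxes, scores)  # the second box overlaps the first and is suppressed
```

Because class IDs are ignored, overlapping boxes of different classes suppress each other, which is exactly the limitation the later non-class-agnostic mode lifted.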

### 🌱 Changed

- Made the `sv.Detections.confidence` field `Optional`.

### 🏆 Contributors

- SkalskiP

## 0.2.0

### 🔪 Killer features

- Support for `PolygonZone` and `PolygonZoneAnnotator` 🔥

<details>
<summary>👉 Code example</summary>

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initiate polygon zone (MALL_VIDEO_PATH is the path to your source video)
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)
```

</details>

![supervision-0-2-0](https://user-images.githubusercontent.com/26109316/217377845-76dcce5c-c247-4bc2-9221-69f9ace30631.png)

- Advanced `sv.Detections` filtering with a pandas-like API.

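The idea behind the pandas-like filtering can be sketched with a minimal stand-in class. `SimpleDetections` below is invented for illustration and is not the `sv.Detections` implementation; it only mirrors the boolean-mask `__getitem__` pattern:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class SimpleDetections:
    # minimal, hypothetical stand-in for sv.Detections
    xyxy: np.ndarray        # (N, 4) bounding boxes
    confidence: np.ndarray  # (N,) scores
    class_id: np.ndarray    # (N,) class indices

    def __getitem__(self, mask: np.ndarray) -> "SimpleDetections":
        # boolean-mask indexing, mirroring the pandas-like API
        return SimpleDetections(self.xyxy[mask], self.confidence[mask], self.class_id[mask])

detections = SimpleDetections(
    xyxy=np.array([[0, 0, 10, 10], [5, 5, 20, 20], [30, 30, 40, 40]], dtype=float),
    confidence=np.array([0.9, 0.4, 0.8]),
    class_id=np.array([0, 0, 2]),
)

# keep confident detections of class 0, pandas-style
filtered = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
```

Each field condition produces a NumPy boolean array, so conditions compose with `&` and `|` exactly as they do on a pandas `DataFrame`.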

## 0.1.0

### 🚀 Added

- ⓒ Project license
- 🎨 `DEFAULT_COLOR_PALETTE`, `Color`, and `ColorPalette` classes
- 📐 Initial implementation of the `Point`, `Vector`, and `Rect` classes
- 🎬 `VideoInfo` and `VideoSink` classes, as well as the `get_video_frames_generator` util
- 📓 `show_frame_in_notebook` util
- 🖌️ `draw_line`, `draw_rectangle`, and `draw_filled_rectangle` utils
- 📦 Initial version of `Detections` and `BoxAnnotator`
- 🧮 Initial implementation of the `LineCounter` and `LineCounterAnnotator` classes
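
The core geometry behind a line counter like `LineCounter` is a side-of-line test: a crossing is registered when a tracked point changes sides between frames. The sketch below is a hypothetical illustration of that idea; the function names and signatures are not the supervision API:

```python
def side_of_line(start, end, point):
    """Sign of the 2D cross product: +1 left of start->end, -1 right, 0 on the line."""
    (x1, y1), (x2, y2), (px, py) = start, end, point
    cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
    return (cross > 0) - (cross < 0)

def crossed(start, end, previous_point, current_point):
    """True when a tracked point moved from one side of the line to the other."""
    before = side_of_line(start, end, previous_point)
    after = side_of_line(start, end, current_point)
    return before != 0 and after != 0 and before != after

# an object moving left to right across a vertical counting line at x = 5
print(crossed((5, 0), (5, 10), (2, 5), (8, 5)))  # True
```

Comparing the signs, rather than distances, makes the check robust to how far the point travels in a single frame, as long as it lands on the opposite side.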

### 🏆 Contributors

- SkalskiP
