## Timeline
The `supervision-0.21.0` release is around the corner. Here is the timeline:
- `5 Jun 2024 08:00 PM CEST (UTC +2) / 5 Jun 2024 11:00 AM PDT (UTC -7)` - merge `develop` into `main`, closing the `supervision-0.21.0` feature list
- `6 Jun 2024 11:00 AM CEST (UTC +2) / 6 Jun 2024 02:00 AM PDT (UTC -7)` - release `supervision-0.21.0`
## 🪵 Changelog
### 🚀 Added
- [`sv.Detections.with_nmm`](https://supervision.roboflow.com/develop/detection/core/#supervision.detection.core.Detections.with_nmm) to perform non-maximum merging (NMM) on the current set of object detections. ([#500](https://github.com/roboflow/supervision/pull/500))
![non-max-merging](https://github.com/roboflow/supervision/assets/26109316/9c5c21ed-6133-4f9c-9919-d3e6b8439629)
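As a rough illustration of the idea (this is a minimal pure-Python sketch, not the library's implementation), non-maximum merging combines heavily overlapping boxes into their union instead of discarding all but one, as suppression does:

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def merge_boxes(boxes, threshold=0.5):
    """Greedily merge boxes whose IoU exceeds `threshold` into their union."""
    merged = []
    for box in boxes:
        for m in merged:
            if iou(box, m) > threshold:
                # grow the existing box to the union of the two
                m[0], m[1] = min(m[0], box[0]), min(m[1], box[1])
                m[2], m[3] = max(m[2], box[2]), max(m[3], box[3])
                break
        else:
            merged.append(list(box))
    return merged

# two heavily overlapping detections collapse into one union box
print(merge_boxes([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]]))
# → [[0, 0, 11, 11], [50, 50, 60, 60]]
```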
- [`sv.Detections.from_lmm`](https://supervision.roboflow.com/develop/detection/core/#supervision.detection.core.Detections.from_lmm), which parses Large Multimodal Model (LMM) text results into an [`sv.Detections`](https://supervision.roboflow.com/develop/detection/core/) object. For now, `from_lmm` supports only [PaliGemma](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-finetune-paligemma-on-detection-dataset.ipynb) result parsing. ([#1221](https://github.com/roboflow/supervision/pull/1221))
```python
import supervision as sv

paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
    sv.LMM.PALIGEMMA,
    paligemma_result,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog']
)

detections.xyxy
# array([[250., 250., 750., 750.]])

detections.class_id
# array([0])
```
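For context, the `<locNNNN>` tokens encode coordinates on a 0-1023 grid that is scaled to the target resolution. A hedged sketch of that decoding (the y/x ordering inside the tokens is an assumption here; the example's symmetric values make it unobservable):

```python
import re

def parse_paligemma(text, resolution_wh):
    """Decode '<locNNNN>' tokens (0-1023 grid) into pixel coordinates.

    PaliGemma is commonly described as emitting y_min, x_min, y_max, x_max;
    each box is returned as [x_min, y_min, x_max, y_max] plus its label.
    """
    w, h = resolution_wh
    detections = []
    for match in re.finditer(r"((?:<loc\d{4}>){4})\s*([\w ]+)", text):
        y1, x1, y2, x2 = (int(v) for v in re.findall(r"<loc(\d{4})>", match.group(1)))
        detections.append((
            [x1 / 1024 * w, y1 / 1024 * h, x2 / 1024 * w, y2 / 1024 * h],
            match.group(2).strip(),
        ))
    return detections

print(parse_paligemma("<loc0256><loc0256><loc0768><loc0768> cat", (1000, 1000)))
# → [([250.0, 250.0, 750.0, 750.0], 'cat')]
```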
- [`sv.VertexLabelAnnotator`](https://supervision.roboflow.com/develop/keypoint/annotators/#supervision.keypoint.annotators.EdgeAnnotator.annotate), which annotates every vertex of a keypoint skeleton with custom text and color. ([#1236](https://github.com/roboflow/supervision/pull/1236))
```python
import supervision as sv

image = ...
key_points = sv.KeyPoints(...)

LABELS = [
    "nose", "left eye", "right eye", "left ear",
    "right ear", "left shoulder", "right shoulder", "left elbow",
    "right elbow", "left wrist", "right wrist", "left hip",
    "right hip", "left knee", "right knee", "left ankle",
    "right ankle"
]

COLORS = [
    "FF6347", "FF6347", "FF6347", "FF6347",
    "FF6347", "FF1493", "00FF00", "FF1493",
    "00FF00", "FF1493", "00FF00", "FFD700",
    "00BFFF", "FFD700", "00BFFF", "FFD700",
    "00BFFF"
]
COLORS = [sv.Color.from_hex(color_hex=c) for c in COLORS]

vertex_label_annotator = sv.VertexLabelAnnotator(
    color=COLORS,
    text_color=sv.Color.BLACK,
    border_radius=5
)
annotated_frame = vertex_label_annotator.annotate(
    scene=image.copy(),
    key_points=key_points,
    labels=LABELS
)
```
![vertex-label-annotator-custom-example (1)](https://github.com/roboflow/supervision/assets/26109316/397a0c0a-47a1-449d-b128-470d2a571a66)
- [`sv.KeyPoints.from_inference`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints.from_inference) and [`sv.KeyPoints.from_yolo_nas`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints.from_yolo_nas), which create [`sv.KeyPoints`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints) from [Inference](https://github.com/roboflow/inference) and [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) results. ([#1147](https://github.com/roboflow/supervision/pull/1147) and [#1138](https://github.com/roboflow/supervision/pull/1138))
- [`sv.mask_to_rle`](https://supervision.roboflow.com/develop/datasets/utils/#supervision.dataset.utils.rle_to_mask) and [`sv.rle_to_mask`](https://supervision.roboflow.com/develop/datasets/utils/#supervision.dataset.utils.rle_to_mask) for easy conversion between mask and RLE formats. ([#1163](https://github.com/roboflow/supervision/pull/1163))
![mask-to-rle (1)](https://github.com/roboflow/supervision/assets/26109316/c4ba0eeb-2eff-4209-ac6b-03d7a6e2e312)
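For reference, COCO-style run-length encoding stores a flattened binary mask as alternating run lengths, beginning with the count of leading zeros. A simplified pure-Python sketch of the round trip (ignoring COCO's column-major pixel ordering; this is not the library code):

```python
def mask_to_rle(mask):
    """Encode a flat 0/1 list as run lengths, starting with a zero run."""
    counts, value, run = [], 0, 0
    for pixel in mask:
        if pixel == value:
            run += 1
        else:
            counts.append(run)
            value, run = pixel, 1
    counts.append(run)
    return counts

def rle_to_mask(counts):
    """Decode run lengths back into a flat 0/1 list."""
    mask, value = [], 0
    for run in counts:
        mask.extend([value] * run)
        value = 1 - value
    return mask

mask = [0, 0, 1, 1, 1, 0, 1]
print(mask_to_rle(mask))                    # → [2, 3, 1, 1]
assert rle_to_mask(mask_to_rle(mask)) == mask
```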
### 🌱 Changed
- [`sv.InferenceSlicer`](https://supervision.roboflow.com/develop/detection/tools/inference_slicer/) now lets you select the overlap filtering strategy (`NONE`, `NON_MAX_SUPPRESSION`, or `NON_MAX_MERGE`). ([#1236](https://github.com/roboflow/supervision/pull/1236))
- [`sv.InferenceSlicer`](https://supervision.roboflow.com/develop/detection/tools/inference_slicer/) now supports instance segmentation models. ([#1178](https://github.com/roboflow/supervision/pull/1178))
```python
import cv2
import numpy as np
import supervision as sv
from inference import get_model

model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread(<SOURCE_IMAGE_PATH>)

def callback(image_slice: np.ndarray) -> sv.Detections:
    results = model.infer(image_slice)[0]
    return sv.Detections.from_inference(results)

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```
![inference-slicer-segmentation-example](https://github.com/roboflow/supervision/assets/26109316/36e28daf-a92d-4e4c-a627-d1f4a89ced0c)
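Under the hood, a slicer of this kind tiles the image with overlapping windows and runs the callback on each tile. A hedged sketch of the offset computation (parameter names are illustrative, not the `sv.InferenceSlicer` API; it assumes the slice fits inside the image):

```python
def slice_offsets(image_wh, slice_wh, overlap_ratio=0.2):
    """Top-left (x, y) offsets of overlapping tiles covering the image."""
    image_w, image_h = image_wh
    slice_w, slice_h = slice_wh
    # stride shrinks as the requested overlap grows
    step_w = max(1, int(slice_w * (1 - overlap_ratio)))
    step_h = max(1, int(slice_h * (1 - overlap_ratio)))
    offsets = []
    for y in range(0, image_h, step_h):
        for x in range(0, image_w, step_w):
            # clamp so every tile stays inside the image bounds
            offsets.append((min(x, image_w - slice_w), min(y, image_h - slice_h)))
    # clamping can duplicate border tiles; keep first occurrences only
    return list(dict.fromkeys(offsets))

print(slice_offsets((100, 100), (64, 64), overlap_ratio=0.2))
# → [(0, 0), (36, 0), (0, 36), (36, 36)]
```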
- [`sv.LineZone`](https://supervision.roboflow.com/develop/detection/tools/line_zone/) is now 10-20 times faster, depending on the use case. ([#1228](https://github.com/roboflow/supervision/pull/1228))
![output](https://github.com/roboflow/supervision/assets/26109316/41b357f4-e825-4ba3-abb7-1c5aa0aec0a9)
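The core geometric test behind line counting is deciding which side of the line a tracked point falls on, via the sign of a 2-D cross product; a crossing is a sign change between frames. A minimal sketch of that test (not the optimized `sv.LineZone` code):

```python
def side_of_line(start, end, point):
    """Cross product sign: > 0 left of start→end, < 0 right, 0 on the line."""
    return ((end[0] - start[0]) * (point[1] - start[1])
            - (end[1] - start[1]) * (point[0] - start[0]))

def crossed(start, end, prev_point, point):
    """True when a tracked point moved from one side of the line to the other."""
    before = side_of_line(start, end, prev_point)
    after = side_of_line(start, end, point)
    return before * after < 0

# a point moving from x=-2 to x=3 crosses the vertical line (0,0)→(0,10)
print(crossed((0, 0), (0, 10), (-2, 5), (3, 5)))  # → True
```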
- [`sv.DetectionDataset.from_coco`](https://supervision.roboflow.com/develop/datasets/core/#supervision.dataset.core.DetectionDataset.from_coco) and [`sv.DetectionDataset.as_coco`](https://supervision.roboflow.com/develop/datasets/core/#supervision.dataset.core.DetectionDataset.as_coco) adding support for run-length encoding (RLE) mask format. ([#1163](https://github.com/roboflow/supervision/pull/1163))
### 🏆 Contributors

onuralpszr (Onuralp SEZER), LinasKo (Linas Kondrackis), rolson24 (Raif Olson), xaristeidou (Christoforos Aristeidou), ManzarIMalik (Manzar Iqbal Malik), tc360950 (Tomasz Cąkała), emSko, SkalskiP (Piotr Skalski)