Supervision

0.13.0

- Support for object tracking with [`sv.ByteTrack`](https://roboflow.github.io/supervision/tracker/core/#bytetrack). ([#256](https://github.com/roboflow/supervision/pull/256))

```python
>>> import numpy as np
>>> import supervision as sv
>>> from ultralytics import YOLO

>>> model = YOLO(...)
>>> byte_tracker = sv.ByteTrack()
>>> annotator = sv.BoxAnnotator()

>>> def callback(frame: np.ndarray, index: int) -> np.ndarray:
...     results = model(frame)[0]
...     detections = sv.Detections.from_yolov8(results)
...     detections = byte_tracker.update_with_detections(detections=detections)
...     labels = [
...         f"{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
...         for _, _, confidence, class_id, tracker_id
...         in detections
...     ]
...     return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)

>>> sv.process_video(
...     source_path='...',
...     target_path='...',
...     callback=callback
... )
```


https://github.com/roboflow/supervision/assets/26109316/d5d393f5-e577-474a-bc8c-82483ef8a578

- [`sv.Detections.from_ultralytics`](https://roboflow.github.io/supervision/detection/core/#supervision.detection.core.Detections.from_ultralytics) to enable seamless integration with [Ultralytics](https://github.com/ultralytics/ultralytics) framework. This will enable you to use `supervision` with all [models](https://docs.ultralytics.com/models/) that Ultralytics supports. ([#222](https://github.com/roboflow/supervision/pull/222))

> [!WARNING]
> [`sv.Detections.from_yolov8`](https://roboflow.github.io/supervision/detection/core/#supervision.detection.core.Detections.from_yolov8) is now deprecated and will be removed in `supervision-0.15.0`.
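
A minimal migration sketch; the weights name is an arbitrary example, and `from_ultralytics` is a drop-in replacement for the deprecated `from_yolov8` connector.

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO('yolov8s.pt')
image = cv2.imread(<SOURCE_IMAGE_PATH>)

result = model(image)[0]
# drop-in replacement for the deprecated sv.Detections.from_yolov8
detections = sv.Detections.from_ultralytics(result)
```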

- [`sv.Detections.from_paddledet`](https://roboflow.github.io/supervision/detection/core/#supervision.detection.core.Detections.from_paddledet) to enable seamless integration with [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection) framework. ([#191](https://github.com/roboflow/supervision/pull/191))

- Support for loading PASCAL VOC segmentation datasets with [`sv.DetectionDataset.from_pascal_voc`](https://roboflow.github.io/supervision/dataset/core/#supervision.dataset.core.DetectionDataset.from_pascal_voc). ([#245](https://github.com/roboflow/supervision/pull/245))
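
A minimal loading sketch, assuming the standard PASCAL VOC directory layout; the `force_masks` flag requesting segmentation masks rather than plain boxes is an assumption based on the dataset API.

```python
import supervision as sv

ds = sv.DetectionDataset.from_pascal_voc(
    images_directory_path='images',
    annotations_directory_path='annotations',
    # request segmentation masks instead of plain bounding boxes
    force_masks=True
)
print(ds.classes)
```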

πŸ† Contributors

hardikdava (Hardik Dava), kirilllzaitsev (Kirill Zaitsev), onuralpszr (Onuralp SEZER), dbroboflow, mayankagarwals (Mayank Agarwal), danigarciaoca (Daniel M. García-Ocaña), capjamesg (James Gallagher), SkalskiP (Piotr Skalski)

0.21.0

📅 Timeline

The `supervision-0.21.0` release is around the corner. Here is the timeline:

- `5 Jun 2024 08:00 PM CEST (UTC +2) / 5 Jun 2024 11:00 AM PDT (UTC -7)` - merge `develop` into `main` - closing the list of `supervision-0.21.0` features
- `6 Jun 2024 11:00 AM CEST (UTC +2) / 6 Jun 2024 02:00 AM PDT (UTC -7)` - release `supervision-0.21.0`

🪵 Changelog

🚀 Added

- [`sv.Detections.with_nmm`](https://supervision.roboflow.com/develop/detection/core/#supervision.detection.core.Detections.with_nmm) to perform non-maximum merging on the current set of object detections. ([#500](https://github.com/roboflow/supervision/pull/500))

![non-max-merging](https://github.com/roboflow/supervision/assets/26109316/9c5c21ed-6133-4f9c-9919-d3e6b8439629)
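
A minimal sketch, assuming detections produced by any connector; `threshold` here is the IoU value above which overlapping detections are merged.

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO('yolov8s.pt')
result = model(<SOURCE_IMAGE_PATH>)[0]
detections = sv.Detections.from_ultralytics(result)

# merge overlapping boxes above the IoU threshold instead of dropping them
detections = detections.with_nmm(threshold=0.5)
```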

- [`sv.Detections.from_lmm`](https://supervision.roboflow.com/develop/detection/core/#supervision.detection.core.Detections.from_lmm) allowing to parse Large Multimodal Model (LMM) text result into [`sv.Detections`](https://supervision.roboflow.com/develop/detection/core/) object. For now `from_lmm` supports only [PaliGemma](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-finetune-paligemma-on-detection-dataset.ipynb) result parsing. ([#1221](https://github.com/roboflow/supervision/pull/1221))

```python
import supervision as sv

paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
    sv.LMM.PALIGEMMA,
    paligemma_result,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog']
)

detections.xyxy
# array([[250., 250., 750., 750.]])

detections.class_id
# array([0])
```


- [`sv.VertexLabelAnnotator`](https://supervision.roboflow.com/develop/keypoint/annotators/#supervision.keypoint.annotators.VertexLabelAnnotator) allowing to annotate every vertex of a keypoint skeleton with custom text and color. ([#1236](https://github.com/roboflow/supervision/pull/1236))

```python
import supervision as sv

image = ...
key_points = sv.KeyPoints(...)

LABELS = [
    "nose", "left eye", "right eye", "left ear",
    "right ear", "left shoulder", "right shoulder", "left elbow",
    "right elbow", "left wrist", "right wrist", "left hip",
    "right hip", "left knee", "right knee", "left ankle",
    "right ankle"
]

COLORS = [
    "FF6347", "FF6347", "FF6347", "FF6347",
    "FF6347", "FF1493", "00FF00", "FF1493",
    "00FF00", "FF1493", "00FF00", "FFD700",
    "00BFFF", "FFD700", "00BFFF", "FFD700",
    "00BFFF"
]
COLORS = [sv.Color.from_hex(color_hex=c) for c in COLORS]

vertex_label_annotator = sv.VertexLabelAnnotator(
    color=COLORS,
    text_color=sv.Color.BLACK,
    border_radius=5
)
annotated_frame = vertex_label_annotator.annotate(
    scene=image.copy(),
    key_points=key_points,
    labels=LABELS
)
```


![vertex-label-annotator-custom-example (1)](https://github.com/roboflow/supervision/assets/26109316/397a0c0a-47a1-449d-b128-470d2a571a66)

- [`sv.KeyPoints.from_inference`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints.from_inference) and [`sv.KeyPoints.from_yolo_nas`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints.from_yolo_nas) allowing to create [`sv.KeyPoints`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints) from [Inference](https://github.com/roboflow/inference) and [YOLO-NAS](https://github.com/Deci-AI/super-gradients/blob/master/YOLONAS.md) result. ([#1147](https://github.com/roboflow/supervision/pull/1147) and [#1138](https://github.com/roboflow/supervision/pull/1138))
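
A minimal sketch of the Inference path; the pose model id below is a placeholder assumption, not a value from the release notes.

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8s-pose-640")  # placeholder pose model id

result = model.infer(image)[0]
key_points = sv.KeyPoints.from_inference(result)
```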

- [`sv.mask_to_rle`](https://supervision.roboflow.com/develop/datasets/utils/#supervision.dataset.utils.mask_to_rle) and [`sv.rle_to_mask`](https://supervision.roboflow.com/develop/datasets/utils/#supervision.dataset.utils.rle_to_mask) allowing for easy conversion between mask and RLE formats. ([#1163](https://github.com/roboflow/supervision/pull/1163))

![mask-to-rle (1)](https://github.com/roboflow/supervision/assets/26109316/c4ba0eeb-2eff-4209-ac6b-03d7a6e2e312)
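
A toy round-trip sketch, assuming `rle_to_mask` accepts the target `resolution_wh`.

```python
import numpy as np
import supervision as sv

# toy 4x4 boolean mask with a 2x2 foreground square
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

rle = sv.mask_to_rle(mask)  # run lengths, starting with the background run
restored = sv.rle_to_mask(rle, resolution_wh=(4, 4))
assert np.array_equal(mask, restored)
```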

🌱 Changed

- [`sv.InferenceSlicer`](https://supervision.roboflow.com/develop/detection/tools/inference_slicer/) allowing to select overlap filtering strategy (`NONE`, `NON_MAX_SUPPRESSION` and `NON_MAX_MERGE`). ([#1236](https://github.com/roboflow/supervision/pull/1236))
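
A configuration sketch; the `overlap_filter_strategy` argument name and the `sv.OverlapFilter` enum are assumptions based on the release notes.

```python
import cv2
import numpy as np
import supervision as sv
from inference import get_model

model = get_model(model_id="yolov8x-640")
image = cv2.imread(<SOURCE_IMAGE_PATH>)

def callback(image_slice: np.ndarray) -> sv.Detections:
    results = model.infer(image_slice)[0]
    return sv.Detections.from_inference(results)

# overlapping detections from neighboring slices are merged rather than suppressed
slicer = sv.InferenceSlicer(
    callback=callback,
    overlap_filter_strategy=sv.OverlapFilter.NON_MAX_MERGE
)
detections = slicer(image)
```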

- [`sv.InferenceSlicer`](https://supervision.roboflow.com/develop/detection/tools/inference_slicer/) adding instance segmentation model support. ([#1178](https://github.com/roboflow/supervision/pull/1178))

```python
import cv2
import numpy as np
import supervision as sv
from inference import get_model

model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread(<SOURCE_IMAGE_PATH>)

def callback(image_slice: np.ndarray) -> sv.Detections:
    results = model.infer(image_slice)[0]
    return sv.Detections.from_inference(results)

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```

![inference-slicer-segmentation-example](https://github.com/roboflow/supervision/assets/26109316/36e28daf-a92d-4e4c-a627-d1f4a89ced0c)

- [`sv.LineZone`](https://supervision.roboflow.com/develop/detection/tools/line_zone/) making it 10-20 times faster, depending on the use case. ([#1228](https://github.com/roboflow/supervision/pull/1228))

![output](https://github.com/roboflow/supervision/assets/26109316/41b357f4-e825-4ba3-abb7-1c5aa0aec0a9)

- [`sv.DetectionDataset.from_coco`](https://supervision.roboflow.com/develop/datasets/core/#supervision.dataset.core.DetectionDataset.from_coco) and [`sv.DetectionDataset.as_coco`](https://supervision.roboflow.com/develop/datasets/core/#supervision.dataset.core.DetectionDataset.as_coco) adding support for run-length encoding (RLE) mask format. ([#1163](https://github.com/roboflow/supervision/pull/1163))
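
A round-trip sketch; the `force_masks` flag on `from_coco` and the `annotations_path` argument on `as_coco` are assumptions based on the dataset API.

```python
import supervision as sv

ds = sv.DetectionDataset.from_coco(
    images_directory_path='images',
    annotations_path='annotations.json',
    force_masks=True
)
# masks are written back using the RLE representation where applicable
ds.as_coco(annotations_path='annotations_rle.json')
```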

πŸ† Contributors

onuralpszr (Onuralp SEZER), LinasKo (Linas Kondrackis), rolson24 (Raif Olson), xaristeidou (Christoforos Aristeidou), ManzarIMalik (Manzar Iqbal Malik), tc360950 (Tomasz Cąkała), emSko, SkalskiP (Piotr Skalski)

0.20.0

🚀 Added

- [`sv.KeyPoints`](https://supervision.roboflow.com/develop/keypoint/core/#supervision.keypoint.core.KeyPoints) to provide initial support for pose estimation and broader keypoint detection models. ([#1128](https://github.com/roboflow/supervision/pull/1128))

- [`sv.EdgeAnnotator`](https://supervision.roboflow.com/develop/keypoint/annotators/#supervision.keypoint.annotators.EdgeAnnotator) and [`sv.VertexAnnotator`](https://supervision.roboflow.com/develop/keypoint/annotators/#supervision.keypoint.annotators.VertexAnnotator) to enable rendering of results from keypoint detection models. ([#1128](https://github.com/roboflow/supervision/pull/1128))

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), keypoints)
```


![edge-annotator-example](https://github.com/roboflow/supervision/assets/26109316/eefdd879-4949-4ea8-aec6-fa5273c5316d)

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
keypoints = sv.KeyPoints.from_ultralytics(result)

vertex_annotator = sv.VertexAnnotator(color=sv.Color.GREEN, radius=10)
annotated_image = vertex_annotator.annotate(image.copy(), keypoints)
```


![vertex-annotator-example](https://github.com/roboflow/supervision/assets/26109316/e931979b-c46d-4ad0-b474-19ebbf167895)

🌱 Changed

- [`sv.LabelAnnotator`](https://supervision.roboflow.com/develop/annotators/#supervision.annotators.core.LabelAnnotator) by adding an additional `border_radius` argument that allows for rounding the corners of the bounding box. ([#1037](https://github.com/roboflow/supervision/pull/1037))
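
A one-line configuration sketch:

```python
import supervision as sv

# border_radius=0 keeps the previous sharp-cornered look
label_annotator = sv.LabelAnnotator(border_radius=10)
```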

- [`sv.PolygonZone`](https://supervision.roboflow.com/develop/detection/tools/polygon_zone/#supervision.detection.tools.polygon_zone.PolygonZone) such that the `frame_resolution_wh` argument is no longer required to initialize `sv.PolygonZone`. ([#1109](https://github.com/roboflow/supervision/pull/1109))

> [!WARNING]
> The `frame_resolution_wh` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.24.0`.
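
A minimal sketch of the simplified initializer; the polygon coordinates are arbitrary and the detections come from any connector.

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO('yolov8s.pt')
result = model(<SOURCE_IMAGE_PATH>)[0]
detections = sv.Detections.from_ultralytics(result)

# no frame_resolution_wh needed anymore
polygon = np.array([[100, 100], [400, 100], [400, 400], [100, 400]])
zone = sv.PolygonZone(polygon=polygon)

# boolean array, one entry per detection
is_in_zone = zone.trigger(detections=detections)
```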

- [`sv.get_polygon_center`](https://supervision.roboflow.com/develop/utils/geometry/#supervision.geometry.core.utils.get_polygon_center) to calculate a more accurate polygon centroid. ([#1084](https://github.com/roboflow/supervision/pull/1084))
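
A quick sanity check of the helper:

```python
import numpy as np
import supervision as sv

polygon = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
center = sv.get_polygon_center(polygon=polygon)
# sv.Point close to the true centroid at (5, 5)
```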

- [`sv.Detections.from_transformers`](https://supervision.roboflow.com/develop/detection/core/#supervision.detection.core.Detections.from_transformers) by adding support for Transformers segmentation models and extracting class names. ([#1069](https://github.com/roboflow/supervision/pull/1069))

```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
    outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```


🛠️ Fixed

- [`sv.ByteTrack.update_with_detections`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack.update_with_detections) which was removing segmentation masks while tracking. Now, `ByteTrack` can be used alongside segmentation models. ([#787](https://github.com/roboflow/supervision/pull/787))

πŸ† Contributors

onuralpszr (Onuralp SEZER), rolson24 (Raif Olson), xaristeidou (Christoforos Aristeidou), jeslinpjames (Jeslin P James), Griffin-Sullivan (Griffin Sullivan), PawelPeczek-Roboflow (Paweł Pęczek), pirnerjonas (Jonas Pirner), sharingan000, macc-n, LinasKo (Linas Kondrackis), SkalskiP (Piotr Skalski)

0.19.0

🧑‍🍳 Cookbooks

[Supervision Cookbooks](https://supervision.roboflow.com/develop/cookbooks/) - A curated open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for leveraging Supervision alongside diverse Computer Vision models. ([#860](https://github.com/roboflow/supervision/pull/860))

🚀 Added

- [`sv.CSVSink`](https://supervision.roboflow.com/develop/detection/tools/save_detections/#supervision.detection.tools.csv_sink.CSVSink) allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file. ([#818](https://github.com/roboflow/supervision/pull/818))

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```


https://github.com/roboflow/supervision/assets/26109316/621588f9-69a0-44fe-8aab-ab4b0ef2ea1b

- [`sv.JSONSink`](https://supervision.roboflow.com/develop/detection/tools/save_detections/#supervision.detection.tools.json_sink.JSONSink) allowing for the straightforward saving of image, video, or stream inference results in a `.json` file. ([#819](https://github.com/roboflow/supervision/pull/819))

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```


- [`sv.mask_iou_batch`](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.mask_iou_batch) allowing to compute Intersection over Union (IoU) of two sets of masks. ([#847](https://github.com/roboflow/supervision/pull/847))
- [`sv.mask_non_max_suppression`](https://supervision.roboflow.com/develop/detection/utils/#supervision.detection.utils.mask_non_max_suppression) allowing to perform Non-Maximum Suppression (NMS) on segmentation predictions. ([#847](https://github.com/roboflow/supervision/pull/847))
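
A toy sketch of the IoU utility above, assuming `(N, H, W)` boolean mask batches.

```python
import numpy as np
import supervision as sv

# two batches of masks, shape (N, H, W)
masks_true = np.zeros((1, 10, 10), dtype=bool)
masks_true[0, :5, :5] = True
masks_pred = np.zeros((1, 10, 10), dtype=bool)
masks_pred[0, 2:7, 2:7] = True

iou = sv.mask_iou_batch(masks_true, masks_pred)
# 3x3 overlap over a union of 41 pixels -> IoU of roughly 0.22
```
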
- [`sv.CropAnnotator`](https://supervision.roboflow.com/develop/annotators/#supervision.annotators.core.CropAnnotator) allowing users to annotate the scene with scaled-up crops of detections. ([#888](https://github.com/roboflow/supervision/pull/888))

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```


https://github.com/roboflow/supervision/assets/26109316/0a5b67ce-55e7-4e26-9495-a68f9ad97ec7

🌱 Changed

- [`sv.ByteTrack.reset`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack.reset) allowing users to clear tracker state, enabling the processing of multiple video files in sequence (see the sketch below). ([#827](https://github.com/roboflow/supervision/pull/827))
- [`sv.LineZoneAnnotator`](https://supervision.roboflow.com/develop/detection/tools/line_zone/#supervision.detection.line_zone.LineZone) allowing to hide in/out count using `display_in_count` and `display_out_count` properties. ([#802](https://github.com/roboflow/supervision/pull/802))
- [`sv.ByteTrack`](https://supervision.roboflow.com/develop/trackers/#supervision.tracker.byte_tracker.core.ByteTrack) input arguments and docstrings updated to improve readability and ease of use. ([#787](https://github.com/roboflow/supervision/pull/787))

> [!WARNING]
> The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `sv.ByteTrack` are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
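
A multi-video sketch of `sv.ByteTrack.reset`, assuming Ultralytics as the detection source:

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
tracker = sv.ByteTrack()

for video_path in [<SOURCE_VIDEO_PATH_1>, <SOURCE_VIDEO_PATH_2>]:
    tracker.reset()  # clear state so track ids do not leak between videos
    for frame in sv.get_video_frames_generator(video_path):
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        detections = tracker.update_with_detections(detections)
```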

- [`sv.PolygonZone`](https://supervision.roboflow.com/develop/detection/tools/polygon_zone/#supervision.detection.tools.polygon_zone.PolygonZone) to now accept a list of specific box anchors that must be in the zone for a detection to be counted. ([#910](https://github.com/roboflow/supervision/pull/910))

> [!WARNING]
> The `triggering_position` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.

- Annotators adding support for Pillow images. All supervision Annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input. ([#875](https://github.com/roboflow/supervision/pull/875))
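
A minimal Pillow round-trip sketch, assuming an Ultralytics model; `sv.BoundingBoxAnnotator` stands in for any annotator.

```python
import supervision as sv
from PIL import Image
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
image = Image.open(<SOURCE_IMAGE_PATH>)

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

# a Pillow image in, an annotated Pillow image out
annotated_image = sv.BoundingBoxAnnotator().annotate(
    scene=image.copy(), detections=detections
)
```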

🛠️ Fixed

- [`sv.DetectionsSmoother`](https://supervision.roboflow.com/develop/detection/tools/smoother/#supervision.detection.tools.smoother.DetectionsSmoother) removing `tracking_id` from `sv.Detections`. ([#944](https://github.com/roboflow/supervision/pull/944))
- [`sv.DetectionDataset`](https://supervision.roboflow.com/develop/datasets/#supervision.dataset.core.DetectionDataset) which, after changes introduced in `supervision-0.18.0`, failed to load datasets in YOLO, PASCAL VOC, and COCO formats.

πŸ† Contributors

onuralpszr (Onuralp SEZER), LinasKo (Linas Kondrackis), LeviVasconcelos (Levi Vasconcelos), AdonaiVera (Adonai Vera), xaristeidou (Christoforos Aristeidou), Kadermiyanyedi (Kader Miyanyedi), NickHerrig (Nick Herrig), PacificDou (Shuyang Dou), iamhatesz (Tomasz Wrona), capjamesg (James Gallagher), sansyo, SkalskiP (Piotr Skalski)

0.18.0

🚀 Added

- [`sv.PercentageBarAnnotator`](https://supervision.roboflow.com/annotators/#percentagebarannotator) allowing to annotate images and videos with percentage values representing confidence or other custom property. ([#720](https://github.com/roboflow/supervision/pull/720))

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

percentage_bar_annotator = sv.PercentageBarAnnotator()
annotated_frame = percentage_bar_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```


![percentage-bar-annotator-example-purple](https://github.com/roboflow/supervision/assets/26109316/6ef1fa4f-8587-4982-b225-4ae355806f93)

- [`sv.RoundBoxAnnotator`](https://supervision.roboflow.com/annotators/#roundboxannotator) allowing to annotate images and videos with bounding boxes with rounded corners. ([#702](https://github.com/roboflow/supervision/pull/702))
- [`sv.DetectionsSmoother`](https://supervision.roboflow.com/detection/tools/smoother/#detection-smoother) allowing for smoothing detections over multiple frames in video tracking. ([#696](https://github.com/roboflow/supervision/pull/696))

https://github.com/roboflow/supervision/assets/26109316/4dd703ad-ffba-492b-97ff-1be84e237e83

- [`sv.OrientedBoxAnnotator`](https://supervision.roboflow.com/annotators/#orientedboxannotator) allowing to annotate images and videos with OBB (Oriented Bounding Boxes). ([#770](https://github.com/roboflow/supervision/pull/770))

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```


![oriented-box-annotator](https://github.com/roboflow/supervision/assets/26109316/ab5b608c-ad76-4c35-ba44-830886cc862c)

- [`sv.ColorPalette.from_matplotlib`](https://supervision.roboflow.com/draw/color/#supervision.draw.color.ColorPalette.from_matplotlib) allowing users to create a `sv.ColorPalette` instance from a Matplotlib color palette. ([#769](https://github.com/roboflow/supervision/pull/769))

```python
import supervision as sv

sv.ColorPalette.from_matplotlib('viridis', 5)
# ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
```


![visualized_color_palette](https://github.com/roboflow/supervision/assets/26109316/9c450e0d-0547-4e05-9ac4-e8c625783a17)

🌱 Changed

- [`sv.Detections.from_ultralytics`](https://supervision.roboflow.com/detection/core/#supervision.detection.core.Detections.from_ultralytics) adding support for OBB (Oriented Bounding Boxes). ([#770](https://github.com/roboflow/supervision/pull/770))
- [`sv.LineZone`](https://supervision.roboflow.com/detection/tools/line_zone/#linezone) to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement from the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as `sv.Position.BOTTOM_CENTER`, or any other combination of anchors defined as `List[sv.Position]`. ([#735](https://github.com/roboflow/supervision/pull/735))
- [`sv.Detections`](https://supervision.roboflow.com/detection/core/#detections) to support custom payload. ([#700](https://github.com/roboflow/supervision/pull/700))
- [`sv.Color`](https://supervision.roboflow.com/draw/color/#color)'s and [`sv.ColorPalette`](https://supervision.roboflow.com/draw/color/#colorpalette)'s method of accessing predefined colors, transitioning from a function-based approach (`sv.Color.red()`) to a more intuitive and conventional property-based method (`sv.Color.RED`). ([#756](https://github.com/roboflow/supervision/pull/756), [#769](https://github.com/roboflow/supervision/pull/769))

> [!WARNING]
> `sv.ColorPalette.default()` is deprecated and will be removed in `supervision-0.21.0`. Use `sv.ColorPalette.DEFAULT` instead.


- [`sv.ColorPalette.DEFAULT`](https://supervision.roboflow.com/draw/color/#colorpalette) value, giving users a more extensive set of annotation colors. ([#769](https://github.com/roboflow/supervision/pull/769))

![default-color-palette](https://github.com/roboflow/supervision/assets/26109316/c37dea57-6193-4b68-bbe6-85827d6e3cbf)

- `sv.Detections.from_roboflow` to [`sv.Detections.from_inference`](https://supervision.roboflow.com/detection/core/#supervision.detection.core.Detections.from_inference) streamlining its functionality to be compatible with both the [inference](https://github.com/roboflow/inference) pip package and the Roboflow [hosted API](https://docs.roboflow.com/deploy/hosted-api). ([#677](https://github.com/roboflow/supervision/pull/677))

> [!WARNING]
> `Detections.from_roboflow()` is deprecated and will be removed in `supervision-0.21.0`. Use `Detections.from_inference` instead.

```python
import cv2
import supervision as sv
from inference.models.utils import get_roboflow_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_roboflow_model(model_id="yolov8s-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
```


🛠️ Fixed

- [`sv.LineZone`](https://supervision.roboflow.com/detection/tools/line_zone/#linezone) functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road. ([#735](https://github.com/roboflow/supervision/pull/735))

https://github.com/roboflow/supervision/assets/26109316/412c4d9c-b228-4bcc-a4c7-e6a0c8f2da6e

πŸ† Contributors

onuralpszr (Onuralp SEZER), HinePo (Rafael Levy), xaristeidou (Christoforos Aristeidou), revtheundead (Utku Özbek), paulguerrie (Paul Guerrie), yeldarby (Brad Dwyer), capjamesg (James Gallagher), SkalskiP (Piotr Skalski)

0.17.1

🚀 Added

- Support for Python 3.12.

πŸ† Contributors

onuralpszr (Onuralp SEZER), SkalskiP (Piotr Skalski)
