Sahi


0.5.1

- add [predict_fiftyone](https://github.com/obss/sahi/blob/main/scripts/predict_fiftyone.py) script to perform `sliced/standard inference` over `yolov5/mmdetection models` and visualize incorrect predictions in the `fiftyone` UI (see the sketch after this list).

![sahi_fiftyone](https://user-images.githubusercontent.com/34196005/124338330-e6085e80-dbaf-11eb-891e-650aeb3ed8dc.png)

- fix mot utils (152)
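
Roughly, the script chains the sliced prediction and FiftyOne conversion utilities that are already part of sahi. A minimal sketch of that flow (not the script itself; `detection_model` is assumed to be an already initialized sahi detection model, and the image paths and field names are illustrative):

```python
import fiftyone as fo

from sahi.predict import get_sliced_prediction

# build a FiftyOne dataset from a few images (paths are illustrative)
dataset = fo.Dataset()
for image_path in ["images/frame_001.jpg", "images/frame_002.jpg"]:
    sample = fo.Sample(filepath=image_path)

    # run sliced inference with an already initialized sahi detection model
    result = get_sliced_prediction(image_path, detection_model)

    # attach the predictions in FiftyOne's detection format
    sample["predictions"] = fo.Detections(detections=result.to_fiftyone_detections())
    dataset.add_sample(sample)

# open the FiftyOne UI to inspect the predictions
session = fo.launch_app(dataset)
```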

0.5.0

- add check for image size in slice_image (147)
- refactor prediction output (148)
- fix slice_image in readme (149)

refactor prediction output:

```python
from sahi.predict import get_prediction, get_sliced_prediction

# perform standard or sliced prediction
result = get_prediction(image, detection_model)
result = get_sliced_prediction(image, detection_model)

# export prediction visuals to "demo_data/"
result.export_visuals(export_dir="demo_data/")

# convert predictions to coco annotations
result.to_coco_annotations()

# convert predictions to coco predictions
result.to_coco_predictions(image_id=1)

# convert predictions to [imantics](https://github.com/jsbroks/imantics) annotation format
result.to_imantics_annotations()

# convert predictions to [fiftyone](https://github.com/voxel51/fiftyone) detection format
result.to_fiftyone_detections()
```
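
The snippet above assumes an already initialized `detection_model`. Around this release it was typically constructed like below (a minimal sketch assuming a YOLOv5 model; `sahi.model.Yolov5DetectionModel` and its arguments follow the README of that era and may differ in later versions):

```python
from sahi.model import Yolov5DetectionModel

# initialize a YOLOv5 detection model (path and threshold are illustrative)
detection_model = Yolov5DetectionModel(
    model_path="yolov5s6.pt",
    confidence_threshold=0.3,
    device="cuda",  # or "cpu"
)
```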


- Check out more in the colab notebooks:

`YOLOv5` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolov5.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

`MMDetection` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_mmdetecion.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

0.4.8

- update mot utils (143)
- add fiftyone utils (144)

<div align="center">FiftyOne Utilities</div>

<details open>
<summary>
<big><b>Explore COCO dataset via FiftyOne app:</b></big>
</summary>

```python
from sahi.utils.fiftyone import launch_fiftyone_app

# launch fiftyone app:
session = launch_fiftyone_app(coco_image_dir, coco_json_path)

# close fiftyone app:
session.close()
```


</details>

<details open>
<summary>
<big><b>Convert predictions to FiftyOne detection:</b></big>
</summary>

```python
from sahi.predict import get_sliced_prediction

# perform sliced prediction
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# convert first object into fiftyone detection format
object_prediction = result["object_prediction_list"][0]
fiftyone_detection = object_prediction.to_fiftyone_detection(image_height=720, image_width=1280)
```


</details>

0.4.6

new feature
- add more mot utils (133)
<details closed>
<summary>
<big><b>MOT Challenge formatted ground truth dataset creation:</b></big>
</summary>

- import required classes:

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
```


- init video:

```python
mot_video = MotVideo(name="sequence_name")
```


- init first frame:

```python
mot_frame = MotFrame()
```


- add annotations to frame:

```python
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)
```


- add frame to video:

```python
mot_video.add_frame(mot_frame)
```


- export in MOT challenge format:

```python
mot_video.export(export_dir="mot_gt", type="gt")
```


- your MOT challenge formatted ground truth files are ready under the `mot_gt/sequence_name/` folder.
</details>
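
Putting the steps above together, a whole sequence is just a loop over frames followed by a single export (a minimal sketch; `frame_bboxes` is an illustrative list of per-frame `[x_min, y_min, width, height]` boxes):

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

# illustrative per-frame annotations: one list of [x_min, y_min, width, height] boxes per frame
frame_bboxes = [
    [[10, 20, 50, 80], [200, 40, 60, 90]],
    [[12, 22, 50, 80], [205, 42, 60, 90]],
]

mot_video = MotVideo(name="sequence_name")
for bboxes in frame_bboxes:
    mot_frame = MotFrame()
    for bbox in bboxes:
        mot_frame.add_annotation(MotAnnotation(bbox=bbox))
    mot_video.add_frame(mot_frame)

# export once after all frames have been added
mot_video.export(export_dir="mot_gt", type="gt")
```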

<details closed>
<summary>
<big><b>Advanced MOT Challenge formatted ground truth dataset creation:</b></big>
</summary>

- you can customize the tracker while initializing the mot video object:

```python
tracker_params = {
    'max_distance_between_points': 30,
    'min_detection_threshold': 0,
    'hit_inertia_min': 10,
    'hit_inertia_max': 12,
    'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
```


- you can omit automatic track id generation and directly provide track ids of annotations:


```python
# create annotations with track ids:
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format without automatic track id generation:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=False)
```


- you can overwrite the results into an already present directory by adding `exist_ok=True`:

```python
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
```

</details>

<details closed>
<summary>
<big><b>MOT Challenge formatted tracker output creation:</b></big>
</summary>

- import required classes:

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
```


- init video by providing video name:

```python
mot_video = MotVideo(name="sequence_name")
```


- init first frame:

```python
mot_frame = MotFrame()
```


- add tracker outputs to frame:

```python
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)
```


- add frame to video:

```python
mot_video.add_frame(mot_frame)
```


- export in MOT challenge format:

```python
mot_video.export(export_dir="mot_test", type="test")
```


- your MOT challenge formatted tracker output file is ready as `mot_test/sequence_name.txt`.
</details>
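
Assuming the exported txt follows the standard MOT Challenge layout (frame id, track id, bb_left, bb_top, bb_width, bb_height, confidence, plus placeholder columns, one comma-separated row per detection), it can be read back with plain Python (a minimal sketch, not a sahi API):

```python
import csv

# parse a MOT Challenge style tracker output file (layout assumed, see note above)
with open("mot_test/sequence_name.txt") as f:
    for row in csv.reader(f):
        frame_id, track_id = int(row[0]), int(row[1])
        bb_left, bb_top, bb_width, bb_height = map(float, row[2:6])
        print(frame_id, track_id, bb_left, bb_top, bb_width, bb_height)
```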

<details closed>
<summary>
<big><b>Advanced MOT Challenge formatted tracker output creation:</b></big>
</summary>

- you can enable the tracker and directly provide object detector outputs:

```python
# add object detector outputs:
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format by applying a kalman based tracker:
mot_video.export(export_dir="mot_test", type="test", use_tracker=True)
```


- you can customize the tracker while initializing the mot video object:

```python
tracker_params = {
    'max_distance_between_points': 30,
    'min_detection_threshold': 0,
    'hit_inertia_min': 10,
    'hit_inertia_max': 12,
    'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
```


- you can overwrite the results into an already present directory by adding `exist_ok=True`:

```python
mot_video.export(export_dir="mot_test", type="test", exist_ok=True)
```

</details>

documentation
- update coco docs (134)
- add colab links into readme (135)

Check `YOLOv5` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolov5.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

Check `MMDetection` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_mmdetecion.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

bug fixes
- fix demo notebooks (136)

0.4.5

enhancement
- add colab demo support (127)
- add warning for image files without suffix (129)
- separate mmdet/yolov5 utils (130)

0.4.4

new feature
- add mot utils (122)
[read mot utils doc for more detail](https://github.com/obss/sahi/blob/de685e2a272bf993835a54c3dc10b0afd9b4812a/docs/MOT.md)

documentation
- update installation (118)
- add details for coco2yolov5 usage (120)

bug fixes
- fix typo (117)
- update coco2yolov5.py (115)

breaking changes
- drop python 3.6 support (123)
