Sahi

Latest version: v0.11.19



0.6.0

enhancement
- add coco_evaluation script, refactor coco_error_analysis script (162)
`coco_evaluation.py` script usage:

```bash
python scripts/coco_evaluation.py dataset.json results.json
```

This calculates COCO evaluation metrics and exports them to the given output directory.

If you want to specify mAP metric type, set it as `--metric bbox mask`.

If you also want to calculate classwise scores, add the `--classwise` argument.

If you want to specify max detections, set it as `--proposal_nums 10 100 500`.

If you want to specify a specific IoU threshold, set it as `--iou_thrs 0.5`. By default, scores are reported for the `0.50:0.95` range and for `0.5`.

If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
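
The `0.50:0.95` default follows the COCO convention of averaging over ten IoU thresholds in steps of 0.05. As a minimal sketch of how such a range expands into an explicit threshold list (the helper name is illustrative, not part of SAHI):

```python
def expand_iou_thresholds(start: float = 0.5, stop: float = 0.95, step: float = 0.05) -> list:
    """Expand a COCO-style IoU threshold range into an explicit list."""
    n = round((stop - start) / step) + 1
    return [round(start + i * step, 2) for i in range(n)]

thresholds = expand_iou_thresholds()
# → [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
```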

`coco_error_analysis.py` script usage:

```bash
python scripts/coco_error_analysis.py dataset.json results.json
```

This calculates COCO error plots and exports them to the given output directory.

If you want to specify mAP result type, set it as `--types bbox mask`.

If you want to export extra mAP bar plots and annotation area stats, add the `--extraplots` argument.

If you want to specify area regions, set it as `--areas 1024 9216 10000000000`.

If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
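
The `--areas` values above follow the COCO small/medium/large convention: `1024` is 32² pixels, `9216` is 96², and the last value is an effectively unbounded upper limit. A sketch of that area bucketing (the helper name is illustrative, not from SAHI):

```python
def bucket_area(area: float, areas=(1024, 9216, 10_000_000_000)) -> str:
    """Classify a bbox area into COCO-style small/medium/large buckets."""
    small, medium, upper = areas
    if area < small:
        return "small"
    if area < medium:
        return "medium"
    if area < upper:
        return "large"
    return "out-of-range"

bucket_area(500)    # → "small"
bucket_area(5000)   # → "medium"
bucket_area(50000)  # → "large"
```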

bugfixes
- prevent empty bbox coco json creation (164)
- don't create mot info when type='det' (163)

breaking changes
- refactor predict (161)
By default, scripts apply both standard and sliced prediction (multi-stage inference). If you don't want to perform sliced prediction, add the `--no_sliced_pred` argument. If you don't want to perform standard prediction, add the `--no_standard_pred` argument.
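
The flag logic above can be sketched as follows; the prediction callables here are stand-in stubs, not the SAHI API:

```python
def multi_stage_predict(image, no_standard_pred=False, no_sliced_pred=False,
                        standard_fn=None, sliced_fn=None):
    """Run standard and/or sliced prediction depending on the flags."""
    results = []
    if not no_standard_pred:
        results.append(("standard", standard_fn(image)))
    if not no_sliced_pred:
        results.append(("sliced", sliced_fn(image)))
    return results

# With stub predictors, both stages run by default:
stages = multi_stage_predict("img.png", standard_fn=lambda x: [], sliced_fn=lambda x: [])
# → [("standard", []), ("sliced", [])]
```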

0.5.2

- fix negative bbox coord error (160)

0.5.1

- add [predict_fiftyone](https://github.com/obss/sahi/blob/main/scripts/predict_fiftyone.py) script to perform `sliced/standard inference` over `yolov5/mmdetection models` and visualize incorrect predictions in the `fiftyone` UI.

![sahi_fiftyone](https://user-images.githubusercontent.com/34196005/124338330-e6085e80-dbaf-11eb-891e-650aeb3ed8dc.png)

- fix mot utils (152)

0.5.0

- add check for image size in slice_image (147)
- refactor prediction output (148)
- fix slice_image in readme (149)

refactor prediction output
```python
# perform standard or sliced prediction
result = get_prediction(image, detection_model)
result = get_sliced_prediction(image, detection_model)

# export prediction visuals to "demo_data/"
result.export_visuals(export_dir="demo_data/")

# convert predictions to coco annotations
result.to_coco_annotations()

# convert predictions to coco predictions
result.to_coco_predictions(image_id=1)

# convert predictions to imantics (https://github.com/jsbroks/imantics) annotation format
result.to_imantics_annotations()

# convert predictions to fiftyone (https://github.com/voxel51/fiftyone) detection format
result.to_fiftyone_detections()
```

- Check more in colab notebooks:

`YOLOv5` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolov5.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

`MMDetection` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_mmdetecion.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

0.4.8

- update mot utils (143)
- add fiftyone utils (144)

<div align="center">FiftyOne Utilities</div>

<details open>
<summary>
<big><b>Explore COCO dataset via FiftyOne app:</b></big>
</summary>

```python
from sahi.utils.fiftyone import launch_fiftyone_app

# launch fiftyone app:
session = launch_fiftyone_app(coco_image_dir, coco_json_path)

# close fiftyone app:
session.close()
```


</details>

<details open>
<summary>
<big><b>Convert predictions to FiftyOne detection:</b></big>
</summary>

```python
from sahi import get_sliced_prediction

# perform sliced prediction
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# convert first object into fiftyone detection format
object_prediction = result["object_prediction_list"][0]
fiftyone_detection = object_prediction.to_fiftyone_detection(image_height=720, image_width=1280)
```


</details>
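
FiftyOne stores detection boxes as relative `[x, y, width, height]` coordinates in `[0, 1]`, which is why `to_fiftyone_detection` needs the image dimensions. A minimal sketch of that normalization (the helper name is illustrative, not SAHI's implementation):

```python
def to_relative_bbox(bbox, image_width, image_height):
    """Convert an absolute [x_min, y_min, width, height] bbox to
    FiftyOne-style relative coordinates in [0, 1]."""
    x, y, w, h = bbox
    return [x / image_width, y / image_height, w / image_width, h / image_height]

to_relative_bbox([320, 180, 640, 360], image_width=1280, image_height=720)
# → [0.25, 0.25, 0.5, 0.5]
```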

0.4.6

new feature
- add more mot utils (133)
<details closed>
<summary>
<big><b>MOT Challenge formatted ground truth dataset creation:</b></big>
</summary>

- import required classes:

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
```


- init video:

```python
mot_video = MotVideo(name="sequence_name")
```


- init first frame:

```python
mot_frame = MotFrame()
```


- add annotations to frame:

```python
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)
```


- add frame to video:

```python
mot_video.add_frame(mot_frame)
```


- export in MOT challenge format:

```python
mot_video.export(export_dir="mot_gt", type="gt")
```


- your MOT challenge formatted ground truth files are ready under `mot_gt/sequence_name/` folder.
</details>
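
For context, MOT16/17 ground truth files conventionally hold one annotation per line as `frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility`. A sketch of formatting one such line (illustrative, not SAHI's implementation):

```python
def format_mot_gt_line(frame_id, track_id, bbox, conf=1, class_id=1, visibility=1.0):
    """Format one MOT16/17-style ground truth line:
    frame, id, bb_left, bb_top, bb_width, bb_height, conf, class, visibility."""
    x, y, w, h = bbox
    return f"{frame_id},{track_id},{x},{y},{w},{h},{conf},{class_id},{visibility}"

format_mot_gt_line(1, 1, [100, 50, 30, 60])
# → "1,1,100,50,30,60,1,1,1.0"
```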

<details closed>
<summary>
<big><b>Advanced MOT Challenge formatted ground truth dataset creation:</b></big>
</summary>

- you can customize tracker while initializing mot video object:

```python
tracker_params = {
    'max_distance_between_points': 30,
    'min_detection_threshold': 0,
    'hit_inertia_min': 10,
    'hit_inertia_max': 12,
    'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
```


- you can omit automatic track id generation and directly provide track ids of annotations:


```python
# create annotations with track ids:
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format without automatic track id generation:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=False)
```


- you can overwrite the results into already present directory by adding `exist_ok=True`:

```python
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
```

</details>

<details closed>
<summary>
<big><b>MOT Challenge formatted tracker output creation:</b></big>
</summary>

- import required classes:

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
```


- init video by providing video name:

```python
mot_video = MotVideo(name="sequence_name")
```


- init first frame:

```python
mot_frame = MotFrame()
```


- add tracker outputs to frame:

```python
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)
```


- add frame to video:

```python
mot_video.add_frame(mot_frame)
```


- export in MOT challenge format:

```python
mot_video.export(export_dir="mot_test", type="test")
```


- your MOT challenge formatted tracker output file is ready as `mot_test/sequence_name.txt`.
</details>
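
For context, MOT Challenge tracker result lines conventionally follow `frame, id, bb_left, bb_top, bb_width, bb_height, conf, -1, -1, -1`, where the trailing `-1` fields are unused placeholders. A sketch of formatting one such line (illustrative, not SAHI's implementation):

```python
def format_mot_result_line(frame_id, track_id, bbox, conf=1.0):
    """Format one MOT Challenge tracker-result line:
    frame, id, bb_left, bb_top, bb_width, bb_height, conf, -1, -1, -1."""
    x, y, w, h = bbox
    return f"{frame_id},{track_id},{x},{y},{w},{h},{conf},-1,-1,-1"

format_mot_result_line(1, 2, [100, 50, 30, 60], conf=0.9)
# → "1,2,100,50,30,60,0.9,-1,-1,-1"
```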

<details closed>
<summary>
<big><b>Advanced MOT Challenge formatted tracker output creation:</b></big>
</summary>

- you can enable tracker and directly provide object detector output:

```python
# add object detector outputs:
mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
    MotAnnotation(bbox=[x_min, y_min, width, height])
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format by applying a kalman based tracker:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=True)
```


- you can customize tracker while initializing mot video object:

```python
tracker_params = {
    'max_distance_between_points': 30,
    'min_detection_threshold': 0,
    'hit_inertia_min': 10,
    'hit_inertia_max': 12,
    'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
```


- you can overwrite the results into already present directory by adding `exist_ok=True`:

```python
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
```

</details>

documentation
- update coco docs (134)
- add colab links into readme (135)

Check `YOLOv5` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_yolov5.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

Check `MMDetection` + `SAHI` demo: <a href="https://colab.research.google.com/github/obss/sahi/blob/main/demo/inference_for_mmdetecion.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

bug fixes
- fix demo notebooks (136)
