Sahi

Latest version: v0.11.22


0.7.0

- refactor predict api (170)

breaking changes

in `predict` and `predict_fiftyone` functions:
- replaced `model_name` arg with `model_type`
- replaced `model_parameters` arg with `model_path`, `model_config_path`, `model_confidence_threshold`, `model_device`, `model_category_mapping`, `model_category_remapping`

in DetectionModel base class:
- replaced `prediction_score_threshold` arg with `confidence_threshold`

updated demo notebooks accordingly
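For code written against the pre-0.7.0 API, the renames above can be sketched as a small migration helper. This is illustrative only; `OLD_TO_NEW` and `migrate_kwargs` are hypothetical names, not part of sahi:

```python
# Illustrative helper for the 0.7.0 argument renames listed above.
# The mapping reflects the changelog; the helper itself is hypothetical
# and not part of sahi.
OLD_TO_NEW = {
    "model_name": "model_type",
    "prediction_score_threshold": "confidence_threshold",
}

def migrate_kwargs(kwargs):
    """Return a copy of kwargs with pre-0.7.0 argument names renamed."""
    return {OLD_TO_NEW.get(key, key): value for key, value in kwargs.items()}
```

For example, `migrate_kwargs({"model_name": "yolov5", "device": "cpu"})` returns `{"model_type": "yolov5", "device": "cpu"}`.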

0.6.2

- add input size parameter for inference (169)

Example usages:

```python
detection_model = Yolov5DetectionModel(
    model_path=yolov5_model_path,
    device="cpu",  # or 'cuda'
)

result = detection_model.perform_inference(
    image,
    image_size=1280,
)

result = get_prediction(
    "demo_data/small-vehicles1.jpeg",
    detection_model,
    image_size=1280,
)

result = get_sliced_prediction(
    "demo_data/small-vehicles1.jpeg",
    detection_model,
    image_size=1280,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
```
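The slice parameters above determine how the input image is tiled before per-slice inference. A minimal sketch of how 256×256 slices with a 0.2 overlap ratio could be laid out; `compute_slice_boxes` is a hypothetical helper, not sahi's actual implementation:

```python
def compute_slice_boxes(image_width, image_height,
                        slice_width=256, slice_height=256,
                        overlap_width_ratio=0.2, overlap_height_ratio=0.2):
    """Return (x_min, y_min, x_max, y_max) boxes tiling the image with overlap.

    Hypothetical sketch of the tiling idea; sahi's real slicer may differ.
    """
    # Each step advances by the slice size minus the requested overlap.
    step_x = int(slice_width * (1 - overlap_width_ratio))
    step_y = int(slice_height * (1 - overlap_height_ratio))
    boxes = []
    y = 0
    while y < image_height:
        y_max = min(y + slice_height, image_height)
        x = 0
        while x < image_width:
            x_max = min(x + slice_width, image_width)
            boxes.append((x, y, x_max, y_max))
            if x_max >= image_width:
                break
            x += step_x
        if y_max >= image_height:
            break
        y += step_y
    return boxes
```

For a 512×512 image with the defaults this yields a 3×3 grid of overlapping slices, each clipped to the image border.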

0.6.1

- refactor slice_coco script (165)
- make `False` the default for `ignore_negative_samples` (166)

0.6.0

enhancement
- add coco_evaluation script, refactor coco_error_analysis script (162)
`coco_evaluation.py` script usage:

```bash
python scripts/coco_evaluation.py dataset.json results.json
```


will calculate COCO evaluation metrics and export them to the given output directory.

If you want to specify the mAP metric type, set it as `--metric bbox mask`.

If you want to also calculate classwise scores, add the `--classwise` argument.

If you want to specify max detections, set it as `--proposal_nums 10 100 500`.

If you want to specify a specific IoU threshold, set it as `--iou_thrs 0.5`. The default includes the `0.50:0.95` and `0.5` scores.

If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
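The `--iou_thrs` values above control the intersection-over-union threshold at which a predicted box counts as matching a ground-truth box. A minimal IoU computation for axis-aligned `(x_min, y_min, x_max, y_max)` boxes; `box_iou` is a hypothetical helper, not part of the script:

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    # Intersection rectangle, clipped to zero when the boxes don't overlap.
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

Identical boxes give 1.0, disjoint boxes give 0.0, and half-overlapping equal boxes give 1/3.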

`coco_error_analysis.py` script usage:

```bash
python scripts/coco_error_analysis.py dataset.json results.json
```


will calculate COCO error analysis plots and export them to the given output directory.

If you want to specify the mAP result type, set it as `--types bbox mask`.

If you want to export extra mAP bar plots and annotation area stats, add the `--extraplots` argument.

If you want to specify area regions, set it as `--areas 1024 9216 10000000000`.

If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
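The example `--areas` values correspond to COCO's standard area boundaries: 32² = 1024 and 96² = 9216 pixels separate small, medium, and large objects, with 10000000000 acting as an effectively unbounded upper limit. A sketch of binning a box area against those thresholds; `area_label` is a hypothetical helper, not part of the script:

```python
# COCO's conventional area boundaries: 32**2, 96**2, and an
# effectively infinite cap, matching the --areas example above.
COCO_AREA_THRESHOLDS = [1024, 9216, 10_000_000_000]

def area_label(box_area, thresholds=COCO_AREA_THRESHOLDS):
    """Map a pixel area to COCO's small/medium/large bucket."""
    if box_area < thresholds[0]:
        return "small"
    if box_area < thresholds[1]:
        return "medium"
    return "large"
```

So a 20×20 box (400 px²) is "small", while a 100×100 box (10000 px²) is "large".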

bugfixes
- prevent empty bbox coco json creation (164)
- don't create mot info when `type='det'` (163)

breaking changes
- refactor predict (161)
By default, scripts apply both standard and sliced prediction (multi-stage inference). If you don't want to perform sliced prediction, add the `--no_sliced_pred` argument. If you don't want to perform standard prediction, add the `--no_standard_pred` argument.
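When both standard and sliced prediction run, overlapping detections from the two stages need to be merged. One common approach is greedy non-maximum suppression over the combined boxes; this is only a sketch with a hypothetical `nms` helper, and sahi's actual postprocessing may differ:

```python
def nms(detections, iou_threshold=0.5):
    """Greedy NMS over (x_min, y_min, x_max, y_max, score) detections.

    Hypothetical sketch of merging standard + sliced predictions;
    not sahi's actual postprocess implementation.
    """
    def iou(a, b):
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0

    kept = []
    # Visit detections in descending score order; keep a box only if it
    # does not substantially overlap an already-kept box.
    for det in sorted(detections, key=lambda d: d[4], reverse=True):
        if all(iou(det, k) < iou_threshold for k in kept):
            kept.append(det)
    return kept
```

Two near-duplicate boxes from the standard and sliced stages collapse into the higher-scoring one, while spatially separate detections survive untouched.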

0.5.2

- fix negative bbox coord error (160)

0.5.1

- add [predict_fiftyone](https://github.com/obss/sahi/blob/main/scripts/predict_fiftyone.py) script to perform `sliced/standard inference` over `yolov5/mmdetection models` and visualize incorrect predictions in the `fiftyone ui`.

![sahi_fiftyone](https://user-images.githubusercontent.com/34196005/124338330-e6085e80-dbaf-11eb-891e-650aeb3ed8dc.png)

- fix mot utils (152)

