We are excited to announce the initial release of ODLabel, a powerful tool for zero-shot object detection, labeling, and visualization. ODLabel provides an intuitive graphical user interface that enables users to efficiently label objects in images using the YOLO-World model.
Key features in this release:
- Support for selecting from various YOLO-World model options, including yolov8s-world, yolov8m-world, yolov8l-world, and yolov8x-world.
- Ability to choose an images folder for labeling and specify an output directory for the annotated data.
- Flexibility to define the object categories you want to detect.
- Integration of Slicing Aided Hyper Inference (SAHI) for improved detection of small objects (see the slicing sketch after this list).
- Option to select the device type (CPU or GPU) for inference.
- Customization of the train/validation split ratio (see the dataset split sketch below).
- Adjustment of the confidence threshold and the non-maximum suppression (NMS) IoU threshold.
- Comprehensive dashboard with figures and visualizations to explore input image data and detection results.
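
To give a sense of what the labeling pipeline does under the hood, here is a minimal sketch (not ODLabel's actual code) of zero-shot detection with a YOLO-World model combined with SAHI-style tiled inference. It assumes the `ultralytics` and `Pillow` packages; the `detect_sliced` helper and its default parameters are hypothetical illustrations of the options listed above.

```python
from ultralytics import YOLOWorld
from PIL import Image

def detect_sliced(image_path, classes, tile=640, overlap=0.2,
                  conf=0.25, iou=0.7, device="cpu"):
    """Prompt YOLO-World with custom class names and run it tile by tile."""
    model = YOLOWorld("yolov8s-world.pt")  # or yolov8m/l/x-world
    model.set_classes(classes)             # zero-shot: detect whatever categories you name here

    image = Image.open(image_path)
    width, height = image.size
    step = int(tile * (1 - overlap))       # overlapping tiles help catch small objects
    detections = []

    for top in range(0, height, step):
        for left in range(0, width, step):
            crop = image.crop((left, top,
                               min(left + tile, width), min(top + tile, height)))
            result = model.predict(crop, conf=conf, iou=iou,
                                   device=device, verbose=False)[0]
            for box in result.boxes:
                x1, y1, x2, y2 = box.xyxy[0].tolist()
                # shift tile-local coordinates back into full-image coordinates
                detections.append((int(box.cls), float(box.conf),
                                   x1 + left, y1 + top, x2 + left, y2 + top))
    return detections
```

A complete implementation would also merge duplicate boxes from overlapping tiles with a final NMS pass; ODLabel exposes the model choice, object categories, confidence and IoU thresholds, and device selection through the GUI rather than code.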
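Similarly, here is an illustrative sketch of how a train/validation split ratio might be applied to the annotated output. It assumes YOLO-format `.txt` labels saved next to the images and a conventional `images/` and `labels/` layout; the function name and directory names are assumptions, not ODLabel's exact on-disk format.

```python
import random
import shutil
from pathlib import Path

def split_dataset(labeled_dir, output_dir, train_ratio=0.8, seed=0):
    """Copy image/label pairs into train and val folders according to a ratio."""
    images = sorted(Path(labeled_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)       # deterministic shuffle for reproducibility
    cutoff = int(len(images) * train_ratio)

    for subset, subset_images in (("train", images[:cutoff]), ("val", images[cutoff:])):
        for image_path in subset_images:
            label_path = image_path.with_suffix(".txt")   # YOLO-format label next to the image
            for folder, src in (("images", image_path), ("labels", label_path)):
                dest = Path(output_dir) / folder / subset
                dest.mkdir(parents=True, exist_ok=True)
                if src.exists():
                    shutil.copy(src, dest / src.name)
```

With a ratio of 0.8, for example, 80% of the labeled images go to the train split and the remaining 20% to validation.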
We believe ODLabel will be a valuable tool for researchers, developers, and anyone working on object detection and labeling tasks. This initial release lays the foundation for a powerful and user-friendly application, and we look forward to your feedback and to improving it in future updates.
For detailed installation and usage instructions, please refer to the [project's README](https://github.com/Ziad-Algrafi/ODLabel/blob/main/README.md).
We hope you find ODLabel helpful in your work! If you have any questions or feedback, feel free to reach out to us at ZiadAlgrafi@gmail.com.