🚀 Added
- The VLM field is evolving rapidly, making it tough to keep up. Maestro currently supports [Florence-2](https://maestro.roboflow.com/latest/models/florence_2/), [PaliGemma 2](https://maestro.roboflow.com/latest/models/paligemma_2/), and [Qwen2.5-VL](https://maestro.roboflow.com/latest/models/qwen_2_5_vl/), and we'll strive to add key VLMs as soon as they are released.
- Fine-tuning VLMs can be costly. Maestro's built-in support for LoRA, QLoRA, and graph freezing makes it possible to train larger models even on less powerful hardware.
- Fine-tuning VLMs usually requires lots of boilerplate code. Maestro reduces this complexity to single CLI/SDK calls.
- VLM training lacks a unified approach across models. Maestro uses a consistent input format (JSONL now, COCO and YOLO coming soon) to minimize data formatting headaches.
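The parameter-efficient idea behind LoRA can be sketched in a few lines: the pretrained weight matrix stays frozen, and only a low-rank update `B @ A` is trained. The layer sizes, rank, and scaling values below are illustrative, not taken from Maestro's internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight of a hypothetical linear layer (8 outputs, 16 inputs).
W = rng.normal(size=(8, 16))

# LoRA: train only a low-rank update B @ A instead of all of W.
r, alpha = 4, 8                        # rank and scaling factor (illustrative)
A = rng.normal(size=(r, 16)) * 0.01    # trainable
B = np.zeros((8, r))                   # trainable; zero-init so training starts at W

def lora_forward(x):
    # Effective weight: W + (alpha / r) * B @ A; gradients flow only to A and B.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(2, 16))
# With B zero-initialized, the LoRA layer reproduces the frozen base output.
assert np.allclose(lora_forward(x), x @ W.T)
```

The payoff is the parameter count: here `A` and `B` together hold `r * (16 + 8) = 96` trainable values versus 128 for the full matrix, and the savings grow dramatically at the billion-parameter scale of real VLMs.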
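JSONL itself is simply one JSON object per line, which makes datasets easy to stream and append to. A minimal round-trip sketch is below; the field names (`image`, `prefix`, `suffix`) are an assumption for illustration, not a confirmed Maestro schema.

```python
import json

# Hypothetical annotation records; the keys "image", "prefix", and "suffix"
# are assumed for illustration and may not match Maestro's actual schema.
entries = [
    {"image": "0001.jpg", "prefix": "describe the image", "suffix": "a dog on a couch"},
    {"image": "0002.jpg", "prefix": "describe the image", "suffix": "a cat on a rug"},
]

def write_jsonl(path, rows):
    """Write one JSON object per line -- the JSONL convention."""
    with open(path, "w", encoding="utf-8") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

def read_jsonl(path):
    """Parse a JSONL file back into a list of dicts, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

write_jsonl("annotations.jsonl", entries)
assert read_jsonl("annotations.jsonl") == entries
```

Because each line is independent, converting from COCO or YOLO layouts is mostly a matter of mapping each annotation to one such record.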
🏆 Contributors
onuralpszr (Onuralp SEZER), SkalskiP (Piotr Skalski)