[assets]: https://github.com/ultralytics/yolov5/releases
[previous]: https://github.com/ultralytics/yolov5/releases/tag/v6.2
[current]: https://github.com/ultralytics/yolov5/releases/tag/v7.0
[TTA]: https://github.com/ultralytics/yolov5/issues/303
<div align="center">
<a align="center" href="https://ultralytics.com/yolov5" target="_blank">
<img width="850" src="https://github.com/ultralytics/assets/blob/main/yolov5/v70/splash.png"></a>
</div>
<br>
Our new YOLOv5 v7.0 instance segmentation models are the fastest and most accurate in the world, beating all current [SOTA benchmarks](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco). We've made them super simple to train, validate and deploy. See full details in our [Release Notes](https://github.com/ultralytics/yolov5/releases/v7.0) and visit our [YOLOv5 Segmentation Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) for quickstart tutorials.
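For a minimal command-line quickstart, a sketch assuming the v7.0 `segment/train.py`, `segment/val.py` and `segment/predict.py` entry points and the `yolov5s-seg.pt` checkpoint (which auto-downloads from the latest release):

```bash
# Train YOLOv5s-seg on the COCO128-seg example dataset for 3 epochs at image size 640
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --epochs 3

# Validate box and mask mAP of the pretrained or newly trained weights
python segment/val.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640

# Run segmentation inference on the bundled sample images
python segment/predict.py --weights yolov5s-seg.pt --source data/images --img 640
```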
<div align="center">
<a align="center" href="https://ultralytics.com/yolov5" target="_blank">
<img width="800" src="https://user-images.githubusercontent.com/61612323/204180385-84f3aca9-a5e9-43d8-a617-dda7ca12e54a.png"></a>
</div>
<br>
Our primary goal with this release is to introduce super simple YOLOv5 segmentation workflows, just like our existing object detection models. The new v7.0 YOLOv5-seg models below are just a start; we will continue to improve them going forward together with our existing detection and classification models. We'd love your feedback and [contributions](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) on this effort!
This release incorporates [**280 PRs** from **41 contributors**](https://github.com/ultralytics/yolov5/compare/v6.2...v7.0) since our last [release][previous] in August 2022.
## Important Updates
- **Segmentation Models ⭐ NEW**: SOTA YOLOv5-seg COCO-pretrained segmentation models are now available for the first time (https://github.com/ultralytics/yolov5/pull/9052 by glenn-jocher, AyushExel and Laughing-q)
- **PaddlePaddle Export**: Export any YOLOv5 model (cls, seg, det) to Paddle format with `python export.py --include paddle`, as sketched after this list (https://github.com/ultralytics/yolov5/pull/9459 by glenn-jocher)
- **YOLOv5 AutoCache**: Running `python train.py --cache ram` will now scan available memory and compare it against predicted dataset RAM usage. This reduces the risk of running out of memory when caching and should help improve adoption of the dataset caching feature, which can significantly speed up training. (https://github.com/ultralytics/yolov5/pull/10027 by glenn-jocher)
- **Comet Logging and Visualization Integration:** Free forever, [Comet](https://bit.ly/yolov5-readme-comet) lets you save YOLOv5 models, resume training, and interactively visualise and debug predictions. (https://github.com/ultralytics/yolov5/pull/9232 by DN6)
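The new export, caching and logging features above are all driven from the existing CLI. A hedged sketch of typical invocations, assuming the standard `export.py`/`train.py` entry points and, for Comet, a user-supplied API key:

```bash
# Export a segmentation model to PaddlePaddle format
python export.py --weights yolov5s-seg.pt --include paddle

# Train with AutoCache: RAM caching is applied only if the dataset is predicted to fit in memory
python train.py --data coco128.yaml --weights yolov5s.pt --img 640 --cache ram

# Log a training run to Comet (API key setup is an assumption; see the Comet integration docs)
pip install comet_ml
export COMET_API_KEY=<your_key>
python train.py --data coco128.yaml --weights yolov5s.pt --img 640
```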
## New Segmentation Checkpoints
We trained YOLOv5 segmentation models on COCO for 300 epochs at image size 640 using A100 GPUs. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests. We ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) notebooks for easy reproducibility.
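The CPU and GPU speed numbers in the table below come from the exported ONNX and TensorRT models; a sketch of the export commands used for such benchmarks (the exact flags here are an assumption based on the standard `export.py` interface):

```bash
# Export to ONNX FP32 for CPU benchmarking
python export.py --weights yolov5s-seg.pt --include onnx --img 640

# Export to TensorRT FP16 for GPU benchmarking (requires a CUDA device and TensorRT)
python export.py --weights yolov5s-seg.pt --include engine --img 640 --device 0 --half
```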
| Model | size<br><sup>(pixels) | mAP<sup>box<br>50-95 | mAP<sup>mask<br>50-95 | Train time<br><sup>300 epochs<br>A100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TRT A100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>640 (B) |
|----------------------------------------------------------------------------------------------------|-----------------------|----------------------|-----------------------|-----------------------------------------------|--------------------------------|--------------------------------|--------------------|------------------------|