[assets]: https://github.com/ultralytics/yolov5/releases
[previous]: https://github.com/ultralytics/yolov5/releases/tag/v6.1
[current]: https://github.com/ultralytics/yolov5/releases/tag/v6.2
[TTA]: https://github.com/ultralytics/yolov5/issues/303
<div align="center">
<a align="center" href="https://ultralytics.com/yolov5" target="_blank">
<img width="850" src="https://github.com/ultralytics/assets/blob/master/yolov5/v62/splash_readme.png"></a>
</div>
<br>
This release incorporates [**401 PRs** from **41 contributors**](https://github.com/ultralytics/yolov5/compare/v6.1...v6.2) since our last [release][previous] in February 2022. It adds [Classification](https://github.com/ultralytics/yolov5/pull/8956) training, validation, prediction and export (to all 11 [formats](https://github.com/ultralytics/yolov5/issues/251)), and also provides ImageNet-pretrained [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v6.2/yolov5m-cls.pt), [ResNet](https://github.com/ultralytics/yolov5/releases/download/v6.2/ResNet50.pt) (18, 34, 50, 101) and [EfficientNet](https://github.com/ultralytics/yolov5/releases/download/v6.2/efficientnet_b0.pt) (b0-b3) models.
My main goal with this release is to make YOLOv5 classification workflows as simple as our existing object detection workflows. The new v6.2 YOLOv5-cls models below are just a start; we will continue to improve them going forward, together with our existing detection models. We'd love your [contributions](https://github.com/ultralytics/yolov5/blob/master/CONTRIBUTING.md) on this effort!
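As a minimal usage sketch, assuming the new `classify/train.py`, `classify/val.py` and `classify/predict.py` entrypoints added in the Classification PR (exact flag names may differ in your version), the classification scripts mirror their detection counterparts:

```bash
# Train YOLOv5s-cls on the small Imagenette dataset for 5 epochs at 224x224
python classify/train.py --model yolov5s-cls.pt --data imagenette160 --epochs 5 --img 224

# Validate a pretrained checkpoint against ImageNet
python classify/val.py --weights yolov5s-cls.pt --data ../datasets/imagenet --img 224

# Predict the class of a single image
python classify/predict.py --weights yolov5s-cls.pt --source path/to/image.jpg
```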
Our next release, v6.3, is scheduled for September and will bring official **instance segmentation** support to YOLOv5, with a major v7.0 release later this year **updating architectures** across all 3 tasks: classification, detection and segmentation.
## Important Updates
- **Classification Models ⭐ NEW**: YOLOv5-cls ImageNet-pretrained classification models are now available for the first time (https://github.com/ultralytics/yolov5/pull/8956 by glenn-jocher).
- **ClearML logging ⭐ NEW**: Integration with the open-source experiment tracker [ClearML](https://cutt.ly/yolov5-readme-clearml). Installing with `pip install clearml` enables the integration, letting users track every training run in ClearML, compare runs, and even schedule runs remotely (https://github.com/ultralytics/yolov5/pull/8620 by thepycoder). See the setup sketch after this list.
- **Deci.ai optimization ⭐ NEW**: Automatically compile and quantize YOLOv5 for better inference performance in one click at [Deci](https://bit.ly/yolov5-deci-platform) (https://github.com/ultralytics/yolov5/pull/8956 by glenn-jocher).
- **GPU Export Benchmarks**: Benchmark (mAP and speed) all YOLOv5 export formats with `python utils/benchmarks.py --weights yolov5s.pt --device 0` for GPU benchmarks or `--device cpu` for CPU benchmarks (https://github.com/ultralytics/yolov5/pull/6963 by glenn-jocher).
- **Training Reproducibility**: Single-GPU YOLOv5 training with `torch>=1.12.0` is now fully reproducible, and a new `--seed` argument is available (default `seed=0`) (https://github.com/ultralytics/yolov5/pull/8213 by AyushExel). See the sketch after this list.
- **Apple Metal Performance Shaders (MPS) Support**: MPS support for Apple M1/M2 devices with `--device mps` (full functionality is pending torch updates in https://github.com/pytorch/pytorch/issues/77764) (https://github.com/ultralytics/yolov5/pull/7878 by glenn-jocher). See the sketch after this list.
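A minimal ClearML setup sketch, assuming access to a ClearML server (hosted or self-hosted); once connected, the usual training command is tracked automatically:

```bash
pip install clearml   # install the ClearML experiment tracker
clearml-init          # one-time: connect this machine to your ClearML server
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt
```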
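For reproducible training, a sketch using the new `--seed` argument on a single GPU (full determinism requires `torch>=1.12.0`; `42` is an arbitrary example value):

```bash
# Two identical invocations should now yield identical results
python train.py --data coco128.yaml --weights yolov5s.pt --device 0 --seed 42
```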
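On Apple M1/M2 hardware, a usage sketch passing `--device mps` to the existing inference entrypoint (some operations may still be unsupported pending the torch issue above):

```bash
# Run detection inference on the Apple Metal Performance Shaders (MPS) backend
python detect.py --weights yolov5s.pt --source data/images --device mps
```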
## New Classification Checkpoints
We trained YOLOv5-cls classification models on ImageNet for 90 epochs using a 4xA100 instance, and we trained ResNet and EfficientNet models alongside them under the same default training settings for comparison. We exported all models to ONNX FP32 for CPU speed tests and to TensorRT FP16 for GPU speed tests, and ran all speed tests on Google [Colab Pro](https://colab.research.google.com/signup) for easy reproducibility.
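As a sketch of the export commands behind these speed tests, assuming the standard `export.py` flags (`--half` selects FP16 for the TensorRT engine):

```bash
# ONNX FP32 export for the CPU speed tests
python export.py --weights yolov5s-cls.pt --include onnx --img 224

# TensorRT FP16 export for the GPU speed tests
python export.py --weights yolov5s-cls.pt --include engine --img 224 --half --device 0
```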
| Model | size<br><sup>(pixels) | acc<br><sup>top1 | acc<br><sup>top5 | Training<br><sup>90 epochs<br>4xA100 (hours) | Speed<br><sup>ONNX CPU<br>(ms) | Speed<br><sup>TensorRT V100<br>(ms) | params<br><sup>(M) | FLOPs<br><sup>@224 (B) |
|----------------------------------------------------------------------------------------------------|-----------------------|-----------------------|-----------------------|-------------------------------------------------|--------------------------------|-------------------------------------|--------------------|------------------------|