[assets]: https://github.com/ultralytics/yolov5/releases
[previous]: https://github.com/ultralytics/yolov5/releases/tag/v6.0
[current]: https://github.com/ultralytics/yolov5/releases/tag/v6.1
[TTA]: https://github.com/ultralytics/yolov5/issues/303
This release incorporates many new features and bug fixes ([**271 PRs** from **48 contributors**](https://github.com/ultralytics/yolov5/compare/v6.0...v6.1)) since our last [release][previous] in October 2021. It adds [TensorRT](https://github.com/ultralytics/yolov5/pull/5699), [Edge TPU](https://github.com/ultralytics/yolov5/pull/3630) and [OpenVINO](https://github.com/ultralytics/yolov5/pull/6057) support, and provides retrained models at `--batch-size 128` with a new default one-cycle linear LR [scheduler](https://github.com/ultralytics/yolov5/pull/6729). YOLOv5 now officially supports 11 different formats, not just for export but also for inference (both detect.py and PyTorch Hub) and for validation, to profile mAP and speed after export.
Format | `export.py --include` | Model
:--- | --: | :--
[PyTorch](https://pytorch.org/) | - | `yolov5s.pt`
[TorchScript](https://pytorch.org/docs/stable/jit.html) | `torchscript` | `yolov5s.torchscript`
[ONNX](https://onnx.ai/) | `onnx` | `yolov5s.onnx`
[OpenVINO](https://docs.openvino.ai/latest/index.html) | `openvino` | `yolov5s_openvino_model/`
[TensorRT](https://developer.nvidia.com/tensorrt) | `engine` | `yolov5s.engine`
[CoreML](https://github.com/apple/coremltools) | `coreml` | `yolov5s.mlmodel`
[TensorFlow SavedModel](https://www.tensorflow.org/guide/saved_model) | `saved_model` | `yolov5s_saved_model/`
[TensorFlow GraphDef](https://www.tensorflow.org/api_docs/python/tf/Graph) | `pb` | `yolov5s.pb`
[TensorFlow Lite](https://www.tensorflow.org/lite) | `tflite` | `yolov5s.tflite`
[TensorFlow Edge TPU](https://coral.ai/docs/edgetpu/models-intro/) | `edgetpu` | `yolov5s_edgetpu.tflite`
[TensorFlow.js](https://www.tensorflow.org/js) | `tfjs` | `yolov5s_web_model/`
Usage examples (ONNX shown):

```bash
Export:          python export.py --weights yolov5s.pt --include onnx
Detect:          python detect.py --weights yolov5s.onnx
PyTorch Hub:     model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.onnx')
Validate:        python val.py --weights yolov5s.onnx
Visualize:       https://netron.app
```
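Expanding the PyTorch Hub line above, a minimal inference sketch with an exported model (a sketch under assumptions, not official docs: it presumes `yolov5s.onnx` was produced by the export command above, that an ONNX Runtime backend is installed, and the sample image URL is my choice):

```python
import torch

# Load the exported ONNX model through the YOLOv5 'custom' entrypoint
model = torch.hub.load('ultralytics/yolov5', 'custom', 'yolov5s.onnx')

# Run inference on an image (local path or URL)
results = model('https://ultralytics.com/images/zidane.jpg')

results.print()                  # per-class detection summary and speed
print(results.pandas().xyxy[0])  # detections as a pandas DataFrame
```

The same `'custom'` entrypoint should load any of the 11 formats above, with the backend selected from the weights file suffix.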
## Important Updates
- **TensorRT support**: TensorRT export now fully integrated using `python export.py --include engine`, with inference (detect.py, PyTorch Hub) and validation (val.py) support (https://github.com/ultralytics/yolov5/pull/5699 by imyhxy)
- **TensorFlow Edge TPU support ⭐ NEW**: YOLOv5 models now export to Edge TPU-compiled TFLite using `python export.py --include edgetpu`; combined with the small YOLOv5n (1.9M params, 2.1 MB INT8 export), this is ideal for ultralight mobile solutions. (https://github.com/ultralytics/yolov5/pull/3630 by zldrobit)
- **OpenVINO support**: YOLOv5 models now export to OpenVINO using `python export.py --include openvino`, with inference and validation support (https://github.com/ultralytics/yolov5/pull/6057 by glenn-jocher).
- **Export Benchmarks**: Benchmark mAP and speed of all YOLOv5 export formats with `python utils/benchmarks.py --weights yolov5s.pt`. Benchmarks currently run on CPU; future updates will add GPU support. (https://github.com/ultralytics/yolov5/pull/6613 by glenn-jocher).
- **Architecture:** no changes
- **Hyperparameters:** minor change
  - hyp-scratch-large.yaml `lrf` reduced from 0.2 to 0.1 (https://github.com/ultralytics/yolov5/pull/6525 by glenn-jocher).
- **Training:** Default Learning Rate (LR) scheduler updated
  - One-cycle cosine replaced with one-cycle linear for improved results (https://github.com/ultralytics/yolov5/pull/6729 by glenn-jocher); see the sketch below.
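To make the scheduler change concrete, here is a minimal sketch comparing the two LR multipliers (assumptions: 300 training epochs, `lrf=0.1` per the hyperparameter change above, and function names of my choosing; in training, either function would be passed to `torch.optim.lr_scheduler.LambdaLR`):

```python
import math

epochs, lrf = 300, 0.1  # lrf matches the hyp-scratch-large.yaml change above

# Previous default: one-cycle cosine decay of the LR multiplier from 1.0 to lrf
def cosine_lf(x):
    return ((1 - math.cos(x * math.pi / epochs)) / 2) * (lrf - 1.0) + 1.0

# New default: linear decay of the LR multiplier from 1.0 to lrf
def linear_lf(x):
    return (1 - x / epochs) * (1.0 - lrf) + lrf

for epoch in (0, epochs // 2, epochs):
    print(f'epoch {epoch:3d}: cosine {cosine_lf(epoch):.3f}  linear {linear_lf(epoch):.3f}')
```

Both multipliers start at 1.0 and end at `lrf`; the linear schedule simply removes the cosine curvature in between.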
## New Results
All model training runs are logged at https://wandb.ai/glenn-jocher/YOLOv5_v61_official
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040763-93c22a27-347c-4e3c-847a-8094621d3f4e.png"></p>
<details>
<summary>YOLOv5-P5 640 Figure (click to expand)</summary>
<p align="left"><img width="800" src="https://user-images.githubusercontent.com/26833433/155040757-ce0934a3-06a6-43dc-a979-2edbbd69ea0e.png"></p>
</details>
<details>
<summary>Figure Notes (click to expand)</summary>
* **COCO AP val** denotes the mAP@0.5:0.95 metric measured on the 5000-image [COCO val2017](http://cocodataset.org) dataset over various inference sizes from 256 to 1536.
* **GPU Speed** measures average inference time per image on the [COCO val2017](http://cocodataset.org) dataset using an [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100 instance at batch-size 32.
* **EfficientDet** data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>
Example YOLOv5l before and after metrics:
|YOLOv5l<br><sup>Large|size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>CPU b1<br>(ms) |Speed<br><sup>V100 b1<br>(ms) |Speed<br><sup>V100 b32<br>(ms) |params<br><sup>(M) |FLOPs<br><sup>@640 (B)
--- |--- |--- |--- |--- |--- |--- |--- |---