# yolov5

Latest version: v7.0.13


## 5.0.1

- Update to ultralytics/yolov5 24.04.21

## 5.0

This release implements **YOLOv5-P6** models and retrained **YOLOv5-P5** models. All model sizes YOLOv5s/m/l/x are now available in both P5 and P6 architectures:

* **YOLOv5-P5** models (same architecture as v4.0 release): **3 output layers** P3, P4, P5 at strides 8, 16, 32, trained at `--img 640`
```bash
python detect.py --weights yolov5s.pt  # P5 models
                           yolov5m.pt
                           yolov5l.pt
                           yolov5x.pt
```

* **YOLOv5-P6** models: **4 output layers** P3, P4, P5, P6 at strides 8, 16, 32, 64 trained at `--img 1280`
```bash
python detect.py --weights yolov5s6.pt  # P6 models
                           yolov5m6.pt
                           yolov5l6.pt
                           yolov5x6.pt
```
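As a quick sanity check on the layer/stride pairs above, each output layer predicts on a grid of `image_size / stride` cells per side. A minimal sketch (the `grid_sizes` helper is illustrative, not part of the package), assuming square inputs:

```python
# Detection-grid sizes per output layer: grid = image_size // stride.
# Names P3..P6 follow the output layers described above.
def grid_sizes(img_size, strides):
    return {f"P{i}/{s}": img_size // s for i, s in enumerate(strides, start=3)}

print(grid_sizes(640, (8, 16, 32)))       # P5 models at --img 640
print(grid_sizes(1280, (8, 16, 32, 64)))  # P6 models at --img 1280
```

Note that the P6/64 grid at 1280 is the same size as the P5/32 grid at 640, which is why P6 models gain most from the higher training resolution.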


Example usage:
```bash
# Command Line
python detect.py --weights yolov5m.pt --img 640    # P5 model at 640
python detect.py --weights yolov5m6.pt --img 640   # P6 model at 640
python detect.py --weights yolov5m6.pt --img 1280  # P6 model at 1280
```

```python
# PyTorch Hub
model = torch.hub.load('ultralytics/yolov5', 'yolov5m6')  # P6 model
results = model(imgs, size=1280)  # inference at 1280
```


### Notable Updates

- **YouTube Inference**: Direct inference from YouTube videos, i.e. `python detect.py --source 'https://youtu.be/NUsoVlDFqZg'`. Live streaming videos and normal videos supported. (https://github.com/ultralytics/yolov5/pull/2752)
- **AWS Integration**: Amazon AWS integration and new [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart) for simple EC2 instance YOLOv5 training and resuming of interrupted Spot instances. (https://github.com/ultralytics/yolov5/pull/2185)
- **Supervise.ly Integration**: New integration with the [Supervisely Ecosystem](https://github.com/supervisely-ecosystem) for training and deploying YOLOv5 models with Supervise.ly (https://github.com/ultralytics/yolov5/issues/2518)
- **Improved W&B Integration:** Allows saving datasets and models directly to [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme). This enables resuming runs directly from W&B with `--resume` (useful for temporary environments like Colab), as well as enhanced visualization tools. See this [blog](https://wandb.ai/cayush/yolov5-dsviz-demo/reports/Object-Detection-with-YOLO-and-Weights-Biases--Vmlldzo0NTgzMjk) by AyushExel for details. (https://github.com/ultralytics/yolov5/pull/2125)


### Updated Results

P6 models include an extra P6/64 output layer for detection of larger objects, and benefit the most from training at higher resolution. For this reason we trained all P5 models at 640, and all P6 models at 1280.

<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313216-f0a5e100-9af5-11eb-8445-c682b60da2e3.png"></p>
<details>
<summary>YOLOv5-P5 640 Figure (click to expand)</summary>

<p align="center"><img width="800" src="https://user-images.githubusercontent.com/26833433/114313219-f1d70e00-9af5-11eb-9973-52b1f98d321a.png"></p>
</details>
<details>
<summary>Figure Notes (click to expand)</summary>

* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by `python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`
</details>

- **April 11, 2021**: [v5.0 release](https://github.com/ultralytics/yolov5/releases/tag/v5.0): YOLOv5-P6 1280 models, [AWS](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart), [Supervise.ly](https://github.com/ultralytics/yolov5/issues/2518) and [YouTube](https://github.com/ultralytics/yolov5/pull/2752) integrations.
- **January 5, 2021**: [v4.0 release](https://github.com/ultralytics/yolov5/releases/tag/v4.0): nn.SiLU() activations, [Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_readme) logging, [PyTorch Hub](https://pytorch.org/hub/ultralytics_yolov5/) integration.
- **August 13, 2020**: [v3.0 release](https://github.com/ultralytics/yolov5/releases/tag/v3.0): nn.Hardswish() activations, data autodownload, native AMP.
- **July 23, 2020**: [v2.0 release](https://github.com/ultralytics/yolov5/releases/tag/v2.0): improved model definition, training and mAP.


### Pretrained Checkpoints

[assets]: https://github.com/ultralytics/yolov5/releases

Model |size<br><sup>(pixels) |mAP<sup>val<br>0.5:0.95 |mAP<sup>test<br>0.5:0.95 |mAP<sup>val<br>0.5 |Speed<br><sup>V100 (ms) | |params<br><sup>(M) |FLOPS<br><sup>640 (B)
--- |--- |--- |--- |--- |--- |---|--- |---
[YOLOv5s][assets] |640 |36.7 |36.7 |55.4 |**2.0** | |7.3 |17.0
[YOLOv5m][assets] |640 |44.5 |44.5 |63.1 |2.7 | |21.4 |51.3
[YOLOv5l][assets] |640 |48.2 |48.2 |66.9 |3.8 | |47.0 |115.4
[YOLOv5x][assets] |640 |**50.4** |**50.4** |**68.8** |6.1 | |87.7 |218.8
| | | | | | || |
[YOLOv5s6][assets] |1280 |43.3 |43.3 |61.9 |**4.3** | |12.7 |17.4
[YOLOv5m6][assets] |1280 |50.5 |50.5 |68.7 |8.4 | |35.9 |52.4
[YOLOv5l6][assets] |1280 |53.4 |53.4 |71.1 |12.3 | |77.2 |117.7
[YOLOv5x6][assets] |1280 |**54.4** |**54.4** |**72.0** |22.4 | |141.8 |222.9
| | | | | | || |
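Since the V100 speeds in the table are per-image latencies (measured at batch size 32), rough throughput is just their reciprocal. A back-of-the-envelope sketch, not an official benchmark:

```python
# Convert the table's per-image V100 latencies (ms) into throughput (img/s).
latency_ms = {'YOLOv5s': 2.0, 'YOLOv5m': 2.7, 'YOLOv5l': 3.8, 'YOLOv5x': 6.1}
throughput = {name: round(1000.0 / ms) for name, ms in latency_ms.items()}
print(throughput)
```

This makes the speed/accuracy trade-off concrete: YOLOv5s processes roughly three times as many images per second as YOLOv5x at 640.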

## 5.0.0

### Basic Usage

```python
import yolov5

# model
model = yolov5.load('yolov5s')

# image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# inference
results = model(img)

# inference with larger input size
results = model(img, size=1280)

# inference with test time augmentation
results = model(img, augment=True)

# show results
results.show()

# save results
results.save(save_dir='results/')
```



### Scripts

You can call the `yolo_train`, `yolo_detect` and `yolo_test` commands after installing the package via `pip`:

#### Training

Run commands below to reproduce results on [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest `--batch-size` your GPU allows (batch sizes shown for 16 GB devices).

```bash
$ yolo_train --data coco.yaml --cfg yolov5s.yaml --weights '' --batch-size 64
                                    yolov5m                                 40
                                    yolov5l                                 24
                                    yolov5x                                 16
```
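The batch sizes above are baselines for a 16 GB GPU. Assuming memory use grows roughly linearly with batch size (an approximation, not a guarantee; always verify against actual GPU memory usage), they can be scaled to other devices. The helper below is a hypothetical sketch, not part of the package:

```python
# Scale the 16 GB reference batch sizes to a different GPU memory budget.
# Linear scaling is an assumption here; check real memory usage before training.
REFERENCE_16GB = {'yolov5s': 64, 'yolov5m': 40, 'yolov5l': 24, 'yolov5x': 16}

def suggested_batch_size(model, gpu_mem_gb):
    return max(1, int(REFERENCE_16GB[model] * gpu_mem_gb / 16))

print(suggested_batch_size('yolov5l', 32))  # e.g. a 32 GB V100
```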


#### Inference

The `yolo_detect` command runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.

```bash
$ yolo_detect --source 0  # webcam
                       file.jpg  # image
                       file.mp4  # video
                       path/  # directory
                       path/*.jpg  # glob
                       rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa  # rtsp stream
                       rtmp://192.168.1.105/live/test  # rtmp stream
                       http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream
```


To run inference on example images in `data/images`:

```bash
$ yolo_detect --source data/images --weights yolov5s.pt --conf 0.25
```

## 4.0.14

- update `yolov5.utils.google_utils.attempt_download`

## 4.0.13

- fix Windows installation

## 4.0.12

- include `models/*.yml` files in package
