Breaking Changes
- Supports FP32 and FP16 ONNX export as well as TensorRT inference (see the engine-build sketch after this list)
- Supports YOLOv5, YOLOv8, PP-YOLOE, and PP-YOLOE+
- Integrates the EfficientNMS TensorRT plugin for accelerated post-processing
- Uses CUDA kernels to accelerate preprocessing (the letterbox sketch below shows the equivalent transform)
- Supports Python inference (see the inference sketch below)
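
As context for the export and build features above, the sketch below turns an exported ONNX model into an FP16 TensorRT engine using the stock TensorRT 8.x Python API. It is a minimal sketch, not the project's own converter: the file names are placeholders, and registering the bundled plugins via `trt.init_libnvinfer_plugins` is assumed to be enough for the parser to resolve the baked-in EfficientNMS_TRT node.

```python
import tensorrt as trt

ONNX_PATH = "yolov8n.onnx"      # placeholder: your exported model
ENGINE_PATH = "yolov8n.engine"  # placeholder: output engine path

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")  # registers EfficientNMS_TRT and other built-in plugins

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open(ONNX_PATH, "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("failed to parse ONNX model")

config = builder.create_builder_config()
if builder.platform_has_fast_fp16:
    config.set_flag(trt.BuilderFlag.FP16)  # FP16 engine; omit this flag for FP32

with open(ENGINE_PATH, "wb") as f:
    f.write(builder.build_serialized_network(network, config))
```

The same conversion can be done from the command line with `trtexec --onnx=yolov8n.onnx --fp16 --saveEngine=yolov8n.engine`.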
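The CUDA preprocessing kernels fuse resize, padding, color conversion, and normalization into a single GPU pass. As a readable reference for what that pass computes, here is a NumPy/OpenCV letterbox sketch; the 114 padding value, RGB channel order, and [0, 1] scaling are assumptions about the convention, not the project's exact behavior.

```python
import cv2
import numpy as np

def letterbox(img_bgr: np.ndarray, size: int = 640) -> np.ndarray:
    """Resize keeping aspect ratio, pad to size x size, BGR->RGB,
    HWC->CHW, scale to [0, 1]. CPU reference for the fused CUDA kernel."""
    h, w = img_bgr.shape[:2]
    r = min(size / h, size / w)
    nh, nw = round(h * r), round(w * r)
    resized = cv2.resize(img_bgr, (nw, nh), interpolation=cv2.INTER_LINEAR)
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)  # assumed gray padding
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    rgb = canvas[:, :, ::-1].astype(np.float32) / 255.0
    return np.ascontiguousarray(rgb.transpose(2, 0, 1))[None]  # (1, 3, size, size)
```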
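Python inference against the resulting engine follows the usual TensorRT 8.x binding workflow. The sketch below uses pycuda rather than the project's own wrapper; the input name `images` and the EfficientNMS output names `num_dets`, `det_boxes`, `det_scores`, `det_classes` are assumptions (check `engine.get_binding_name` for the real names), and static input shapes are assumed.

```python
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")  # EfficientNMS_TRT must be registered

with open("yolov8n.engine", "rb") as f:  # placeholder path
    engine = trt.Runtime(logger).deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# One host/device buffer per binding (assumes static shapes).
host, dev, bindings = {}, {}, []
for i in range(engine.num_bindings):
    name = engine.get_binding_name(i)
    shape = tuple(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host[name] = np.zeros(shape, dtype=dtype)
    dev[name] = cuda.mem_alloc(host[name].nbytes)
    bindings.append(int(dev[name]))

def infer(chw: np.ndarray) -> dict:
    """Run one preprocessed (1, 3, H, W) float32 tensor through the engine."""
    np.copyto(host["images"], chw)  # "images" is an assumed input binding name
    cuda.memcpy_htod(dev["images"], host["images"])
    context.execute_v2(bindings)
    for name in host:
        cuda.memcpy_dtoh(host[name], dev[name])
    return host

out = infer(np.zeros((1, 3, 640, 640), dtype=np.float32))
n = int(out["num_dets"].flatten()[0])  # assumed EfficientNMS output names
print(out["det_boxes"][0][:n], out["det_scores"][0][:n], out["det_classes"][0][:n])
```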
Bug Fixes
- Fix pycuda.driver.CompileError on Jetson (1)
- Fix Engine Deserialization Failure When Using a YOLOv8 Exported Engine (2)
- Fix Precision Anomalies in YOLOv8 FP16 Engine (3)
- Fix YOLOv8 EfficientNMS Output Shape Abnormality (0e542ee732b176590732fa013693cfc2417a8c5c)
- Fix trtexec Conversion Failure for YOLOv5 and YOLOv8 ONNX Models on Linux (4)
- Fix Inference Anomaly Caused by preprocess.cu on Linux (5)
**Full Changelog**: https://github.com/laugh12321/TensorRT-YOLO/commits/v1.0