This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
1. Numerous bug fixes and performance improvements.
2. TPU-MLIR supports importing PyTorch models directly, with no need to convert to ONNX first (see the sketch after this list).
3. Unified pre-processing for bm168x and cv18xx chips.
4. Support for the bm1684 chip is underway.
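For reference, importing a PyTorch model starts from a traced TorchScript file. The snippet below is a minimal sketch of the PyTorch side only; the resnet18 model, input shape, and output filename are illustrative placeholders, and the actual TPU-MLIR conversion command should be taken from the project documentation.

```python
import torch
import torchvision

# Any torch.nn.Module works; resnet18 with random weights is used purely
# as a placeholder here.
model = torchvision.models.resnet18().eval()

# Tracing records the graph for a fixed input shape.
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# The saved TorchScript file is what the PyTorch importer consumes;
# no intermediate ONNX export is needed in this release.
traced.save("resnet18_traced.pt")
```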
0.9beta.0
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
- Resolved pre-processing performance issues.
- Added shape inference for dynamic input shapes.
- Implemented constant folding to simplify the graph (a minimal illustration follows this list).
- Improved performance; further optimizations are in progress.
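As an aside, the constant-folding change can be pictured with a tiny, hypothetical graph representation (this is only an illustration of the idea, not TPU-MLIR's actual pass or IR): every node whose operands are all compile-time constants is evaluated once and replaced by its value.

```python
import operator

# Hypothetical toy graph: each node is (op, input names, output name).
OPS = {"add": operator.add, "mul": operator.mul}

def constant_fold(nodes, constants):
    """Evaluate nodes whose inputs are all known constants; keep the rest."""
    remaining = []
    for op, inputs, output in nodes:
        if all(name in constants for name in inputs):
            # All operands are known at compile time: compute the result
            # now and drop the node from the graph.
            constants[output] = OPS[op](*(constants[n] for n in inputs))
        else:
            remaining.append((op, inputs, output))
    return remaining, constants

# "b" depends only on constants, so it folds away; the node that uses the
# runtime input "x" survives.
nodes = [("mul", ["two", "three"], "b"), ("add", ["x", "b"], "y")]
remaining, constants = constant_fold(nodes, {"two": 2, "three": 3})
print(remaining)       # [('add', ['x', 'b'], 'y')]
print(constants["b"])  # 6
```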
0.8
Welcome to TPU-MLIR. To get started, follow the README to learn how to use TPU-MLIR: https://github.com/sophgo/tpu-mlir
0.8beta.4
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
1. Image pre-processing is offloaded to the TPU, improving performance.
2. Many bug fixes allow TPU-MLIR to support more neural networks.
* Fix pool sign error in v0.8-beta.3
0.8beta.3
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
1. Image pre-processing is offloaded to the TPU, improving performance.
2. Many bug fixes allow TPU-MLIR to support more neural networks.
* Fix pre-processing conversion bug in v0.8-beta.2
0.8beta.2
This beta version of TPU-MLIR is for testing purposes only—do not use it in production.
Notable changes:
1. Image pre-processing is offloaded to the TPU, improving performance.
2. Many bug fixes allow TPU-MLIR to support more neural networks.