aXeleRate

Latest version: v0.7.5


0.7.5

Major changes:
- YOLOv2 is replaced with a cut-down version of YOLOv3. Additional layers in the second branch were removed to fit the K210 memory limit. Testing will continue to gauge how large an impact this has on accuracy. Currently it seems that for a small number of classes YOLOv3 performs significantly better than YOLOv2 (especially with non-square input sizes, e.g. 320x240), but for the 20 PASCAL VOC classes the model often mixes up similar classes (cat vs. dog, sheep vs. horse, etc.). YOLOv2 is still available in the legacy-yolov2 branch.

Minor changes:
- Evaluation script added. For the classification task, a confusion matrix, precision, recall and F1-score are calculated. For detection, mAP, precision, recall and F1-score are calculated. For the segmentation task, mean IoU and class-wise IoU are calculated. The evaluation script creates a report.txt file in the project folder, where the config and evaluation results are saved.
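
For illustration, the classification-task metrics can be reproduced with scikit-learn - a minimal sketch, assuming integer class labels; the actual evaluation script may compute them differently:

```python
# Minimal sketch of the classification metrics the evaluation script reports,
# computed here with scikit-learn; labels and predictions are illustrative.
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

cm = confusion_matrix(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(cm)
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")
```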

**Note** YOLOv3 currently only works with MobileNet feature extractors. In the next version, my plan is to completely re-work the backend system, making it more concise and better tailored for embedded systems.

0.7

Major changes:
- All scripts have been migrated to tf.keras and TensorFlow 2.3. Everything works with TF 2.4 as well, but is mostly tested with 2.3 at this point. All training scripts work properly, the K210/tflite conversion scripts are tested and fully functioning, and Edge TPU conversion is partially working. Due to TensorFlow switching its default converter to the MLIR (Multi-Level Intermediate Representation) converter and multiple breaking changes in the API, the ONNX/OpenVINO converters are not functioning yet, but will be fixed by the next release.
- The augmentation pipeline is improved; the default pipeline is now "softer" - previously too many augmentations were applied simultaneously, which might have hindered model training and led to under-fitting.
- A warm-up learning rate scheduler has been added and made the default learning rate control callback, replacing Reduce Learning Rate on Plateau from previous versions. Read more about it at https://arxiv.org/abs/1812.01187v2. Preliminary testing on a people detection task shows a significant increase in validation mAP.
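
For reference, a warm-up schedule of this kind can be written as a small tf.keras callback - a minimal sketch with an illustrative target LR and warm-up length; aXeleRate's built-in callback may differ in its details:

```python
# Minimal sketch of a linear warm-up learning-rate callback for tf.keras;
# target_lr and warmup_steps are illustrative values.
import tensorflow as tf

class WarmUpLR(tf.keras.callbacks.Callback):
    def __init__(self, target_lr=1e-3, warmup_steps=1000):
        super().__init__()
        self.target_lr = target_lr
        self.warmup_steps = warmup_steps
        self.step = 0

    def on_train_batch_begin(self, batch, logs=None):
        self.step += 1
        if self.step <= self.warmup_steps:
            # Ramp the LR linearly from 0 to target_lr over warmup_steps batches.
            lr = self.target_lr * self.step / self.warmup_steps
            tf.keras.backend.set_value(self.model.optimizer.lr, lr)
```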

Minor changes:
- Plots of accuracy/loss/mAP are deprecated and replaced with TensorBoard logs. Colab supports an interactive TensorBoard interface for TF >2.0, so overall it is a better choice for training monitoring/analysis (see the snippet after this list).
- Added scripts for Edge TPU (this section to be expanded)
- TODO tasks moved to GitHub Projects.
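
In Colab, the TensorBoard logs can be viewed inline like this (the log directory name is illustrative):

```python
# In a Colab/Jupyter cell: load the TensorBoard extension and point it at the logs.
%load_ext tensorboard
%tensorboard --logdir projects/logs
```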

0.6.0

Major changes:
- Another task crossed off the TO-DO list: image classifier, object detector and segmentation networks now use the same unified image augmentation pipeline, leveraging imgaug 0.4.0. For the full list of augmentations used, have a look at
https://github.com/AIWintermuteAI/aXeleRate/blob/ef2a96d934f748e6bdd54af145cd86b554a175ab/axelerate/networks/common_utils/augment.py#L137

In the future I plan to add different levels of augmentation - mild, normal and hardcore. That is not implemented yet; if you'd like to give a hand, don't hesitate to open a PR!
You can check how images look after resizing and augmentation by running augment.py in axelerate/networks/common_utils, or use the Colab Notebooks, where the corresponding function is used in the second cell of every example notebook. A minimal sketch of such a pipeline follows after this list.
- Added an OpenVINO converter for the OpenCV AI Kit, currently tested with the YOLO v2 detection network, with more network types coming soon. The OpenVINO converter outputs both an IR format model and a .blob file, which you can use for inference with OAK boards.
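
For a feel of what a unified imgaug pipeline looks like, here is a minimal sketch in the same spirit; the actual augmenters and parameters live in augment.py, linked above:

```python
# Minimal imgaug 0.4.0 pipeline sketch; the real pipeline in augment.py differs.
import numpy as np
import imgaug.augmenters as iaa

# Illustrative batch of images, shape (N, H, W, 3), dtype uint8.
images = np.random.randint(0, 255, size=(4, 224, 224, 3), dtype=np.uint8)

seq = iaa.Sequential([
    iaa.Fliplr(0.5),                                       # horizontal flip 50% of the time
    iaa.Sometimes(0.3, iaa.GaussianBlur(sigma=(0, 1.0))),  # occasional mild blur
    iaa.LinearContrast((0.8, 1.2)),                        # mild contrast jitter
    iaa.Affine(translate_percent=(-0.1, 0.1), rotate=(-10, 10)),
])

images_aug = seq(images=images)
```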

Minor changes:
- Added dynamic range and full integer with fallback quantization options for .tflite model conversion (see the sketch after this list). For details on how to use the converted models for faster inference with a Raspberry Pi, consult this article:
https://www.hackster.io/dmitrywat/raspberry-pi-hq-camera-module-review-demo-7462eb
- Bash scripts and commands now have their stdout shown in the Colab notebook - meaning you will see the output of the install and converter scripts, which is useful for debugging.
- Matplotlib visualization problem fixed - hopefully once and for all. Images should now be displayed correctly both in Colab and when running on a local machine.
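
For illustration, the two quantization modes named above map roughly onto the standard TFLite converter options as follows - a minimal sketch using the TF 2.x converter API, with a placeholder model path and calibration data:

```python
# Minimal sketch of dynamic-range vs. full-integer-with-fallback quantization;
# the model path and representative dataset are placeholders.
import tensorflow as tf

model = tf.keras.models.load_model("project/model.h5")

# Dynamic-range quantization: int8 weights, float activations.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_dynamic = converter.convert()

# Full-integer quantization with float fallback: needs calibration data.
def representative_dataset():
    for _ in range(100):
        yield [tf.random.uniform((1, 224, 224, 3))]  # replace with real images

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
tflite_int8_fallback = converter.convert()
```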

My next milestone will probably take some time - upgrading the code to TF 2.0 and tf.keras. That will bring lots of new features and optimizations, and hopefully will solve the conversion problems with Edge TPU.

0.5.9a

Major changes:

- Fixed the preprocessing issues in the K210 model conversion calibration procedure - the issue was causing about a 10-30 percent decrease in accuracy of the .kmodel compared to the .h5 model. If you were using aXeleRate to train and convert models for the K210 chip, consider converting your Keras model again and checking the performance afterwards! Now you can use axelerate/networks/common_utils/convert.py as a standalone script with the following arguments:
  - --model_path
  - --converter_type
  - --dataset_path
  - --backend (network feature extractor, e.g. Mobilenet/YOLO/NASNet/etc. - really important!)

Minor changes:
- changed the Keras requirement to "==2.3.1" to avoid upgrading to later Keras versions that don't support TensorFlow 1.15
- changed the deprecated sklearn.utils.linear_assignment_ to scipy.optimize.linear_sum_assignment (see the sketch after this list)
- added .onnx converter and example script for on-device model optimization for Nvidia Development Boards (Nano, Xavier)
- completed the Colab Notebook tutorials with the publication of the Human Parsing Image Segmentation Colab Notebook. The three notebooks you see on the main page are guaranteed to be updated to the latest API; the miscellaneous ones can still be found in /resources, but might be outdated.
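
The replacement is SciPy's implementation of the same assignment problem; a minimal sketch of the API difference, with an illustrative cost matrix:

```python
# Minimal sketch: scipy.optimize.linear_sum_assignment as the drop-in replacement
# for the removed sklearn.utils.linear_assignment_; the cost matrix is illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])

row_ind, col_ind = linear_sum_assignment(cost)
# The old sklearn API returned the (row, col) pairs as a single array:
matches = np.stack([row_ind, col_ind], axis=1)
print(matches, cost[row_ind, col_ind].sum())
```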

I've been swamped with freelance jobs recently, so over the summer I will mostly be doing bug fixes and support. Hopefully by the end of August I can add some more features.

0.5.8

Major changes:
- Added custom input sizes, usable with pre-trained models as well. In the config file you can specify the input size as an integer (224, for example) or as a list ([320,240]); see the example after this list. For the YOLO v2 detector this is somewhat experimental (due to its complicated loss function), and I encourage users to test the performance of detectors with different input sizes.
- Changed VGG16 to NASNetMobile and Inception to DenseNet121. The motivation behind dropping support for VGG16 and Inception is that these two networks are hardly suitable for inference on the edge because of their size. Of the hardware accelerators that aXeleRate supports or will support, only the Nvidia Jetson series can theoretically run inference with these two networks.
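
As an illustration of the two accepted forms - hypothetical config fragments; the surrounding keys are abbreviated and the architecture name is just an example:

```python
# Hypothetical config fragments: input_size as an integer (square input)
# or as a two-element list (non-square input); other keys are omitted for brevity.
config_square = {"model": {"type": "Classifier", "architecture": "MobileNet7_5", "input_size": 224}}
config_rect   = {"model": {"type": "Detector",   "architecture": "MobileNet7_5", "input_size": [320, 240]}}
```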

Minor changes:
- When picking the correct grid cell in the YOLO v2 detector, rounding to integer was changed to numpy.floor. Logically this makes more sense - otherwise an object with its center located at [0.6, 0.6] would get assigned to grid cell [1, 1] instead of [0, 0] (see the example after this list).
- K210 converter reverted back to beta2
- Image augmentation pipeline restored for detector
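
The grid cell example from the first item, in two lines:

```python
# Why floor is the right choice for grid-cell assignment: a center at 0.6
# (in grid coordinates) lies inside cell 0, but rounding sends it to cell 1.
import numpy as np

cx, cy = 0.6, 0.6
print(int(round(cx)), int(round(cy)))        # 1 1 -> wrong cell with rounding
print(int(np.floor(cx)), int(np.floor(cy)))  # 0 0 -> correct cell with floor
```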

0.5.7

- Added mAP to training graph visualization
- Added the following options to the config:
  - valid_metric: choice of val_loss/val_accuracy for classifier and segnet, and val_loss/mAP for detector
  - backend weights: imagenet/None/path to a backend weights file
  - save_bottleneck: classifier only, True/False; saves bottleneck weights to the project folder after training is finished. The weights can later be used as backend weights for a model with the same backend - i.e. train a classifier model, save the bottleneck weights, and then load them for training a detector/segnet model.
- Experimental Edge TPU conversion (only tested with the MobileNet classifier for now)
- Fixed preprocessing for inference (different backends use different image preprocessing; to find out more, search for "keras applications preprocessing" or look inside the feature.py file in aXeleRate).
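
For example, the Keras applications preprocessing helpers differ per backend - a minimal sketch, shown here with the tf.keras imports:

```python
# Minimal sketch: different backends expect differently preprocessed inputs.
import numpy as np
from tensorflow.keras.applications.mobilenet import preprocess_input as mobilenet_pre
from tensorflow.keras.applications.densenet import preprocess_input as densenet_pre

img = np.random.randint(0, 256, size=(1, 224, 224, 3)).astype("float32")

x_mobilenet = mobilenet_pre(img.copy())  # scales pixels to [-1, 1]
x_densenet = densenet_pre(img.copy())    # scales to [0, 1], then ImageNet mean/std normalization
```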
