Qualia-Core

2.3.0

New features
- Bump maximum Python version to 3.12.
- `learningframework.PyTorch`: custom metric name for checkpoints with `checkpoint_metric` param.
- `learningframework.PyTorch`: optionally disable confusion matrix with `enable_confusion_matrix` param.
- `learningframework.PyTorch`: custom loss with `loss` param.
- `learningframework.PyTorch`: custom metric selection with `metrics` param (the new parameters are sketched together after this list).
- Add `BrainMIX` dataset.
- Add `Amplitude` DataAugmentation.
- Add `CopySet` PreProcessing.
- `postprocessing.Keras2TFLite`, `postprocessing.RemoveKerasSoftmax`, `postprocessing.Torch2Keras`: add support for Keras 3.x.
- Add `VisualizeFeatureMaps` PostProcessing.
- `postprocessing.FuseBatchNorm`: add `evaluate` param to optionally disable evaluation.
- `learningmodel.pytorch.Quantizer`: add `v` tensor type for more flexibility with Qualia-Plugin-SNN.
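
Taken together, the new `learningframework.PyTorch` parameters might look as follows. This is a minimal sketch: the parameter names come from this changelog, while the import path, call signature, and value formats are assumptions rather than the package's confirmed API.

```python
# Hypothetical sketch of the new 2.3.0 learningframework.PyTorch parameters.
# Parameter names are taken from this changelog; the import path, call
# signature, and value formats are assumptions.
from qualia_core.learningframework import PyTorch

framework = PyTorch(
    checkpoint_metric='valavgclsacc',  # custom metric name for checkpoints
    enable_confusion_matrix=False,     # optionally disable the confusion matrix
    loss='CrossEntropyLoss',           # custom loss
    metrics=['prec', 'rec', 'f1'],     # custom metric selection
)
```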

Bug fixes
- `preprocessing.DatasetSplitterBySubjects`: add `dest` set to available sets.
- `preprocessing.DatasetSplitter`: update `dest` info set.
- `learningframework.PyTorch`: fix default metric for checkpoint (`valavgclsacc`).
- `learningframework.PyTorch`: fix seeding after several training runs (integer overflow).
- `learningframework.PyTorch`: compute prec, rec, and f1 metrics as micro averages.
- The root logger's stdout stream handler can now output DEBUG-level records so that child loggers can log debug messages; the default level is still INFO (see the sketch after this list).
- `learningmodel.pytorch.layers.QuantizedLayer`: fix multiple inheritance Protocol on Python 3.10.
- Fix parsing of `[[parameter_research]]` section in configuration file. Actual behaviour of `parameter_research` is still untested.
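
The logging fix can be illustrated with plain stdlib `logging` (a generic sketch, not Qualia-Core's actual setup; the child logger name is hypothetical):

```python
import logging
import sys

# The stdout handler passes DEBUG records through, while the default level
# stays INFO, so only loggers explicitly set to DEBUG emit debug messages.
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)  # handler no longer filters out DEBUG
root = logging.getLogger()
root.addHandler(handler)
root.setLevel(logging.INFO)      # default level is still INFO

child = logging.getLogger('qualia_core.preprocessing')
child.setLevel(logging.DEBUG)    # opt this child logger into DEBUG
child.debug('printed: the child allows DEBUG and the handler passes it through')
root.debug('suppressed: the root logger itself is still at INFO')
```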

Breaking changes
- Some metrics previously computed as macro averages (prec, rec, f1) are now computed as micro averages; results will differ on unbalanced datasets, as the example below shows.
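
The difference is easy to reproduce on a toy unbalanced problem (plain scikit-learn, for illustration only):

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]  # class 1 is rare
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]  # one rare-class sample misclassified

# Macro averages per-class F1, so the rare class weighs as much as the
# majority class; micro weighs every sample equally.
print(f1_score(y_true, y_pred, average='macro'))  # ~0.804
print(f1_score(y_true, y_pred, average='micro'))  # 0.9
```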

2.2.0

New features
- Pass separate scale factor for bias to Qualia-CodeGen.
- `bench.use_test_as_valid` configuration option to use the test dataset for validation metrics when no validation dataset is available.
- Add `QuantizationAwareFX` PostProcessing module as an alternative to `QuantizationAwareTraining`, using `torch.fx` to build the quantized model by replacing layers with their quantized alternatives when possible (a generic sketch follows this list).
- `QuantizedLayer`: add `from_module()` method to build a quantized layer from a non-quantized one with the same configuration and weights.
- `TorchVisionModel` LearningModel: allow using original classifier.
- `TorchVisionModel` LearningModel: allow choosing the last layer for the feature extractor.
- Add a CIFAR-10 configuration for TorchVision's MobileNetV2 with float32 training and int16 quantization.
- Add `Normalize` DataAugmentation using `torchvision.transforms.Normalize`.
- Add `TorchVisionModelTransforms` DataAugmentation for use with `TorchVisionModel` to adapt input data.
- `PyTorch` LearningFramework: show loss in progress bar.
- Colored console logging to stderr for warnings and errors.
- `qualia_codegen.NucleoL452REP`: use CMake project instead of STM32CubeIDE.
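
The `torch.fx` approach can be sketched generically. This is an illustration under assumptions, not Qualia-Core's actual implementation: `swap_layers` and the `replacements` mapping are hypothetical, with `from_module()` standing in for the new `QuantizedLayer` method.

```python
import torch.fx
from torch import nn

# Hypothetical sketch of torch.fx-based layer swapping: trace the model, then
# replace each traced submodule that has a quantized alternative.
def swap_layers(model: nn.Module,
                replacements: dict[type, type]) -> torch.fx.GraphModule:
    traced = torch.fx.symbolic_trace(model)
    for node in traced.graph.nodes:
        if node.op != 'call_module':
            continue
        submodule = traced.get_submodule(node.target)
        quantized_cls = replacements.get(type(submodule))
        if quantized_cls is None:
            continue  # no quantized alternative for this layer type
        # Stand-in for QuantizedLayer.from_module(): build the replacement
        # from the original layer, keeping configuration and weights.
        parent_name, _, child_name = node.target.rpartition('.')
        parent = traced.get_submodule(parent_name) if parent_name else traced
        setattr(parent, child_name, quantized_cls.from_module(submodule))
    traced.recompile()
    return traced
```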


Bug fixes
- Fix detection of separate quantizer for bias.
- Fix for no `[[preprocessing]]` section in configuration file.
- Fix `TorchVisionModel` LearningModel construction.
- `qualia_codegen.Linux` Deployer: try to fix overflow detection.

Breaking changes
- `activations_range.txt` file: introduce a `bias_q` column.
Existing models will have to be re-quantized in order to be deployed using Qualia-CodeGen; this does not change the classification results if a separate quantizer for bias was not used.
- Symmetric quantization uses the full range down to `-2^{b-1}` instead of restricting the lower bound to `-(2^{b-1} - 1)`. Existing models will have to be re-quantized, and this may slightly change the classification results (see the worked example below).
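
As a worked example for `b = 8` bits (plain arithmetic, for illustration):

```python
b = 8
restricted = (-(2**(b - 1) - 1), 2**(b - 1) - 1)  # previous lower bound: (-127, 127)
full       = (-2**(b - 1),       2**(b - 1) - 1)  # full range now used:  (-128, 127)
```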

2.1.0

New features

- Deployment Qualia-CodeGen: Add support for nearest rounding mode (in addition to floor rounding mode) on Linux, NucleoL452REP (incl. CMSIS-NN), SparkFun Edge (incl. CMSIS-NN), and Longan Nano (incl. NMSIS-NN); a simplified sketch of the two modes follows.
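
The two modes can be compared with a simplified fixed-point model (an illustration under assumptions, not Qualia-CodeGen's actual code; the power-of-two scale `2**q` is a simplification):

```python
import math

# Simplified quantization to an integer grid with scale factor 2**q.
def quantize(x: float, q: int, mode: str = 'floor') -> int:
    scaled = x * 2**q
    return math.floor(scaled) if mode == 'floor' else math.floor(scaled + 0.5)

print(quantize(0.3, 4, 'floor'))    # floor(4.8)       -> 4
print(quantize(0.3, 4, 'nearest'))  # floor(4.8 + 0.5) -> 5
```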

Bug fixes

- Fix importing qualia_core packages after plugin initialization.
- Fix some Python 3.9 compatibility issues.
- LearningModel MLP/QuantizedMLP: Fix layer instantiation.
- PostProcessing QuantizationAwareTraining: Use the validation set instead of the test set for validation.

Other changes

- Various refactor, cleanup and typing fixes (quantized layer inheritance, `qualia_core.deployment`, `qualia_core.postprocessing.QualiaCodeGen`, `qualia_core.evaluation`).

Breaking changes

- `activations_range.txt` file: remove the unused `global_max` columns and introduce `round_mode` columns.
Existing models will have to be re-quantized in order to be deployed using Qualia-CodeGen; this does not change the classification results.
- Nearest rounding mode for quantization with PyTorch now rounds half-way values upwards instead of rounding half to even, in order to match Qualia-CodeGen (see the example below).
Existing models using nearest rounding mode will have to be re-quantized, and this may slightly change the classification results.
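
The tie-breaking difference is visible with plain PyTorch (illustration only):

```python
import torch

x = torch.tensor([0.5, 1.5, 2.5])
print(torch.round(x))        # round half to even (previous behaviour): [0., 2., 2.]
print(torch.floor(x + 0.5))  # round half up, matching Qualia-CodeGen:  [1., 2., 3.]
```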

2.0.0

Initial release of Qualia-Core.
