Qualia-Core

Latest version: v2.2.0

2.2.0

New features
- Pass separate scale factor for bias to Qualia-CodeGen.
- Add `bench.use_test_as_valid` configuration option to use the test dataset for validation metrics when no validation dataset is available.
- Add `QuantizationAwareFX` PostProcessing module as an alternative to `QuantizationAwareTraining`; it uses `torch.fx` to build the quantized model, replacing layers with their quantized alternatives when possible (see the sketch after this list).
- QuantizedLayer: add `from_module()` method to build a quantized layer from a non-quantized one with the same configuration and weights.
- `TorchVisionModel` LearningModel: allow using original classifier.
- `TorchVisionModel` LearningModel: allow choosing the last layer for the feature extractor.
- Add CIFAR-10 TorchVision's MobileNetv2 configuration for float32 training and int16 quantization.
- Add `Normalize` DataAugmentation using `torchvision.transforms.Normalize`.
- Add `TorchVisionModelTransforms` DataAugmentation for use with `TorchVisionModel` to adapt input data.
- `PyTorch` LearningFramework: show loss in progress bar.
- Colored console logging to stderr for warnings and errors.
- `qualia_codegen.NucleoL452REP`: use CMake project instead of STM32CubeIDE.
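
Combined, the `torch.fx`-based rewriting and the new `from_module()` method suggest a pattern like the following minimal sketch. It is illustrative only: the `mapping` argument and the quantized class names it would contain (e.g. a hypothetical `QuantizedLinear`) are assumptions, not Qualia's actual API beyond `from_module()` itself.

```python
import torch.fx as fx
import torch.nn as nn

def replace_with_quantized(model: nn.Module,
                           mapping: dict[type, type]) -> fx.GraphModule:
    """Swap every module whose type appears in `mapping` for a quantized
    counterpart built from the original's configuration and weights."""
    gm = fx.symbolic_trace(model)  # trace the model into a GraphModule
    for node in gm.graph.nodes:
        if node.op != 'call_module':
            continue
        submodule = gm.get_submodule(node.target)
        quantized_cls = mapping.get(type(submodule))
        if quantized_cls is not None:
            # from_module() is assumed to copy configuration and weights,
            # as described in the changelog entry above
            gm.add_submodule(node.target, quantized_cls.from_module(submodule))
    gm.recompile()
    return gm
```

Because only the submodule object is replaced and the call site's target string is unchanged, no graph surgery is needed; `recompile()` simply regenerates the forward code from the unmodified graph.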


Bug fixes
- Fix detection of separate quantizer for bias.
- Fix handling of configuration files with no `[[preprocessing]]` section.
- Fix `TorchVisionModel` LearningModel construction.
- `qualia_codegen.Linux` Deployer: try to fix overflow detection.

Breaking changes
- `activations_range.txt` file: introduce a bias_q column.
Existing models will have to be re-quantized in order to be deployed using Qualia-CodeGen; this does not change the classification results if a separate quantizer for the bias was not used.
- Symmetric quantization uses the full range down to `-2^{b-1}` instead of restricting the lower bound to `-2^{b-1} + 1`. Existing models will have to be re-quantized, and this may slightly change the classification results (see the sketch below).
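
A minimal sketch of the range change for `b`-bit symmetric quantization:

```python
def symmetric_range(bits: int, full_range: bool = True) -> tuple[int, int]:
    """Representable integer range for b-bit symmetric quantization."""
    if full_range:  # new behaviour: use the full two's-complement range
        return -2 ** (bits - 1), 2 ** (bits - 1) - 1
    return -(2 ** (bits - 1) - 1), 2 ** (bits - 1) - 1  # previous behaviour

print(symmetric_range(8))                    # (-128, 127)
print(symmetric_range(8, full_range=False))  # (-127, 127)
```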

2.1.0

New features

- Deployment Qualia-CodeGen: Add support for nearest rounding mode (in addition to floor rounding mode) on Linux, NucleoL452REP (incl. CMSIS-NN), SparkFun Edge (incl. CMSIS-NN) and Longan Nano (incl. NMSIS-NN); see the sketch below.
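
The difference between the two modes amounts to whether half of the discarded precision is added back before the shift. A minimal sketch in Python, assuming power-of-two rescaling of an integer accumulator (Qualia-CodeGen's actual C kernels may differ):

```python
def requantize(acc: int, shift: int, mode: str = 'floor') -> int:
    """Rescale a fixed-point accumulator by 2**-shift."""
    if mode == 'floor':
        return acc >> shift  # truncate toward negative infinity
    # 'nearest': add half of the discarded range before shifting
    return (acc + (1 << (shift - 1))) >> shift

print(requantize(5, 2, 'floor'))    # 1.25 -> 1
print(requantize(5, 2, 'nearest'))  # 1.25 -> 1
print(requantize(6, 2, 'nearest'))  # 1.50 -> 2 (half rounds upward)
```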

Bug fixes

- Fix importing qualia_core packages after plugin initialization.
- Fix some Python 3.9 compatibility issues.
- LearningModel MLP/QuantizedMLP: Fix layer instantiation.
- PostProcessing QuantizationAwareTraining: Use the validation set instead of the test set for validation.

Other changes

- Various refactoring, cleanup, and typing fixes (quantized layer inheritance, `qualia_core.deployment`, `qualia_core.postprocessing.QualiaCodeGen`, `qualia_core.evaluation`).

Breaking changes

- `activations_range.txt` file: remove the unused global_max columns and introduce round_mode columns.
Existing models will have to be re-quantized in order to be deployed using Qualia-CodeGen; this does not change the classification results.
- Nearest rounding mode for quantization with PyTorch now rounds half upward instead of half to even, in order to match Qualia-CodeGen.
Existing models using nearest rounding mode will have to be re-quantized, and this may slightly change the classification results (see the sketch below).
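
A minimal demonstration of the tie-breaking difference; `torch.floor(x + 0.5)` is shown as one common way to round half upward, not necessarily how Qualia implements it:

```python
import torch

x = torch.tensor([0.5, 1.5, 2.5, -0.5])
print(torch.round(x))        # half to even:  [0., 2., 2., -0.]
print(torch.floor(x + 0.5))  # half upward:   [1., 2., 3., 0.]
```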

2.0.0

Initial release of Qualia-Core.
