Brevitas

Latest version: v0.10.2

0.3.1

Changelog:

- Important bugfix affecting the collection of activation statistics when retraining with BREVITAS_IGNORE_MISSING_KEYS=1. Statistics were not being collected; instead, the default baseline value of 1.0 was used to initialize the scale factor. The problem doesn't affect load_state_dict(strict=False), which is an alternative to the flag above (see the sketch after this list).
- Refactor proxies and mixins, and slightly simplify the assumptions under which an injector proxy can be created (i.e. always within a quantized layer).
- Release tutorial on quantizers.
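
A minimal sketch of the two retraining workflows mentioned above, assuming the brevitas.nn.QuantLinear layer and its weight_bit_width keyword from later Brevitas releases; the toy model and the hand-built floating-point state dict are placeholders, not part of this release's API:

```python
import os
import torch
import torch.nn as nn

# Option 1: have Brevitas ignore quantization-related keys that are missing from a
# floating-point checkpoint (assumption: the flag is checked when state is loaded).
os.environ["BREVITAS_IGNORE_MISSING_KEYS"] = "1"

from brevitas.nn import QuantLinear

# Toy quantized model; the layer shape and bit width are illustrative only.
model = nn.Sequential(QuantLinear(16, 8, bias=True, weight_bit_width=4))

# Option 2: the plain PyTorch alternative mentioned above, which was unaffected by
# the statistics-collection bug fixed in this release.
float_state_dict = {"0.weight": torch.randn(8, 16), "0.bias": torch.zeros(8)}
model.load_state_dict(float_state_dict, strict=False)  # skip missing quant keys
```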

0.3.0

Release version 0.3.0.

Changelog:
- Enum and shape solvers are now implemented through extended dependency injectors, which finally makes declarative quantizers self-contained (see the sketch after this list).
- Reorganize CI.
- Various smaller features and fixes.
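
As a loose illustration of what a self-contained declarative quantizer looks like, the sketch below overrides a single attribute on a stock quantizer and lets the injector resolve everything else; the names Int8WeightPerTensorFloat and weight_quant follow later Brevitas releases and may differ in 0.3.0:

```python
from brevitas.nn import QuantLinear
from brevitas.quant import Int8WeightPerTensorFloat


# Declarative quantizer: overriding one attribute is enough; the extended
# dependency injector solves the remaining enums and shapes.
class Int4WeightPerTensorFloat(Int8WeightPerTensorFloat):
    bit_width = 4


# Attach the custom quantizer to the weights of a quantized layer.
layer = QuantLinear(16, 8, bias=True, weight_quant=Int4WeightPerTensorFloat)
```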

0.2.1

Release version 0.2.1.

Changelog:
- Fix a few issues when using QuantTensors w/ zero point.
- Fix the Hadamard layer, whose implementation had fallen behind the current QuantLayer and QuantTensor semantics.
- Make sure that the training flag in a QuantTensor is always set by the Module generating it.

0.2.0

First release on PyPI, version 0.2.0.

bnn_pynq-r1
Updated *FC networks from maltanar with TensorNorm instead of BatchNorm as the last layer, to ease deployment to FINN.

quant_mobilenet_v1_4b-r2
Updated pretrained MobileNet V1 w/ 4b weights in the first layer.

cnv_test_ref-r0
Reference test vectors for CNV models, r0.

bnn_pynq-r0
CNV, LFC, SFC, TFC topologies, originally designed for BNN-PYNQ, trained with Brevitas. Thanks to maltanar and ussamazahid96.
Matching txt files contain batch-by-batch accuracy results, taken directly from the evaluation scripts.

quant_quartznet_4b-r0
Pretrained 4b QuartzNet for automatic speech recognition.

quant_quartznet_8b-r0
Pretrained 8b QuartzNet encoder and decoder for automatic speech recognition.

quant_melgan_8b-r0
Pretrained quantized 8b MelGAN vocoder on LJSpeech.

quant_proxylessnas_mobile14_hadamard_4b-r0
Pretrained quantized ProxylessNAS Mobile14 with everything at 4b (except input and weights of the first layer at 8 bits) and a Hadamard classifier as the last layer.

quant_proxylessnas_mobile14_4b-r0
Pretrained quantized ProxylessNAS Mobile14 with everything at 4b (except input and weights of the first layer at 8 bits).

quant_proxylessnas_mobile14_4b5b-r0
Pretrained quantized ProxylessNAS Mobile14 with 5b inputs and weights in depthwise layers, and everything else at 4b (except the weights of the first layer at 8 bits).

quant_mobilenet_v1_4b-r1
Re-release pretrained quantized 4b MobileNet V1 with proper naming so that it can be downloaded automatically.


examples-0.0.1
Add pretrained .pth for quantized 4b MobileNet V1.

0.1.0alpha

This is a preview of the new version of pytorch-quantization, named Brevitas.
