Bayesian-torch

Latest version: v0.5.0

0.5.0

This release adds quantization support for all of the Bayesian convolutional layers listed below, in addition to the previously supported Conv2dReparameterization and Conv2dFlipout (a brief usage sketch follows the list).

* Conv1dReparameterization
* Conv3dReparameterization
* ConvTranspose1dReparameterization
* ConvTranspose2dReparameterization
* ConvTranspose3dReparameterization
* Conv1dFlipout
* Conv3dFlipout
* ConvTranspose1dFlipout
* ConvTranspose2dFlipout
* ConvTranspose3dFlipout
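
For context, a minimal sketch of constructing and calling one of these layers is shown below. It assumes the standard bayesian_torch.layers import path and constructor signature; whether the forward pass also returns the KL term depends on the installed version (return_kl defaults to True in recent releases).

```python
import torch
from bayesian_torch.layers import Conv1dReparameterization

# Bayesian 1-D convolution: weights are distributions, not point estimates.
layer = Conv1dReparameterization(
    in_channels=16,
    out_channels=32,
    kernel_size=3,
    padding=1,
)

x = torch.randn(8, 16, 64)   # (batch, channels, length)
out, kl = layer(x)           # samples weights via the reparameterization trick;
                             # kl is this layer's KL-divergence contribution
print(out.shape, kl.item())
```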

This release also includes fixes for the following issues:
* https://github.com/IntelLabs/bayesian-torch/issues/27
* https://github.com/IntelLabs/bayesian-torch/issues/21
* https://github.com/IntelLabs/bayesian-torch/issues/24
* https://github.com/IntelLabs/bayesian-torch/issues/34



What's Changed
* Add quant prepare functions 342ca39b61814d702a6a6bef15981ca2e139dd8f
* Fix bug in post-training quantization evaluation due to JIT trace f5c7126cb80ed7dc86b3c6dd55bc5c006d64e25a
* Add quantization example for ImageNet/ResNet-50 3e749142f9ff9eaf2e93701348884d88cc2b6375
* Correct the order of group and dilation parameters in Conv transpose layers 97ba16ad044022035ba22b17aa279f2d389129eb


**Full Changelog**: https://github.com/IntelLabs/bayesian-torch/compare/v0.4.0...v0.5.0

0.4.0

New feature: quantization framework in Bayesian-Torch.
Adds support for post-training quantization of Bayesian deep neural networks, enabling optimized and efficient INT8 inference on Intel platforms.
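
A rough end-to-end sketch of such a post-training quantization flow is shown below: build a Bayesian model, attach a quantization config, insert observers, calibrate, and convert. The prepare/convert entry points under bayesian_torch.ao.quantization are assumptions modeled on torch.ao.quantization; see the repository's quantization example for the actual API.

```python
import torch
import torch.nn as nn
from bayesian_torch.models.dnn_to_bnn import dnn_to_bnn
# Assumed entry points mirroring torch.ao.quantization; real names/signatures may differ.
from bayesian_torch.ao.quantization import prepare, convert

# A small Bayesian model to quantize (any Bayesian-Torch model would do).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 32 * 32, 10),
)
dnn_to_bnn(model, {
    "prior_mu": 0.0, "prior_sigma": 1.0,
    "posterior_mu_init": 0.0, "posterior_rho_init": -3.0,
    "type": "Reparameterization", "moped_enable": False, "moped_delta": 0.5,
})
model.eval()
model.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")  # x86 / Intel backend

prepared = prepare(model)                     # insert observers (assumed function)
with torch.no_grad():                         # calibrate on representative data
    prepared(torch.randn(16, 3, 32, 32))
int8_model = convert(prepared)                # produce the INT8 model (assumed function)
```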

What's Changed
* Add support for Bayesian neural network quantization | PR: https://github.com/IntelLabs/bayesian-torch/pull/23
* Include an example of performing post-training quantization of Bayesian neural network models | commit https://github.com/IntelLabs/bayesian-torch/commit/c3e9a0f3d4ae55c83edf743bf939d568eed6fc0b
* Add support for output padding in flipout layers | PR: https://github.com/IntelLabs/bayesian-torch/pull/20

Contributors
* junliang-lin made their first contribution in https://github.com/IntelLabs/bayesian-torch/pull/20
* ranganathkrishnan

**Full Changelog**: https://github.com/IntelLabs/bayesian-torch/compare/v0.3.0...v0.4.0

0.3.0

Supports arbitrary kernel sizes in the Bayesian convolutional layers; a brief sketch follows.
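
For illustration, the sketch below passes a non-square kernel size as a tuple, assuming that is the kind of arbitrary kernel size this release enables; it also assumes the standard Conv2dReparameterization signature, with the forward pass returning both the output and the KL term.

```python
import torch
from bayesian_torch.layers import Conv2dReparameterization

# Rectangular 3x5 kernel with matching asymmetric padding.
layer = Conv2dReparameterization(
    in_channels=3,
    out_channels=8,
    kernel_size=(3, 5),
    padding=(1, 2),
)

x = torch.randn(4, 3, 32, 32)
out, kl = layer(x)           # out: (4, 8, 32, 32); kl: scalar KL-divergence term
```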

0.2.0

Includes the new dnn_to_bnn feature:
An API to convert a deterministic deep neural network (DNN) model of any architecture into a Bayesian deep neural network (BNN) model, simplifying model definition through drop-in replacement of Convolutional, Linear, and LSTM layers with the corresponding Bayesian layers. This enables seamless conversion of existing large-model topologies into Bayesian deep neural networks for uncertainty-aware applications.
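
A minimal usage sketch of this API, based on the documented pattern (the prior-parameter keys and the get_kl_loss helper are the commonly documented ones; exact defaults may vary by version):

```python
import torch
import torchvision
from bayesian_torch.models.dnn_to_bnn import dnn_to_bnn, get_kl_loss

# Prior/posterior settings applied to every converted layer.
const_bnn_prior_parameters = {
    "prior_mu": 0.0,
    "prior_sigma": 1.0,
    "posterior_mu_init": 0.0,
    "posterior_rho_init": -3.0,
    "type": "Reparameterization",   # or "Flipout"
    "moped_enable": False,
    "moped_delta": 0.5,
}

model = torchvision.models.resnet18()          # any deterministic DNN topology
dnn_to_bnn(model, const_bnn_prior_parameters)  # in-place: nn.Conv2d/nn.Linear -> Bayesian layers

x = torch.randn(2, 3, 224, 224)
targets = torch.tensor([0, 1])
logits = model(x)                              # stochastic forward pass (weights are sampled)
kl = get_kl_loss(model)                        # summed KL term to add to the task loss
loss = torch.nn.functional.cross_entropy(logits, targets) + kl / x.size(0)
```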

**Full Changelog**: https://github.com/IntelLabs/bayesian-torch/compare/v0.1...v0.2.0

0.2.0alpha

Pre-release of the dnn_to_bnn conversion feature described under 0.2.0 above.

0.1
