GradientAccumulator

Latest version: v0.5.2


0.5.2

Summary
The main feature of this patch release is that AccumBN can now be used as a drop-in replacement for any BatchNormalization layer, even in pretrained networks. Pretrained weights are transferred correctly, and the documentation has been updated to describe how to do this.
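
For illustration, replacing the `BatchNormalization` layers of a pretrained network could look roughly like the sketch below. The helper name `replace_batchnorm_layers` and the example network are assumptions made here for illustration, not confirmed by these notes; see the documentation for the actual API.

```python
import tensorflow as tf

# hypothetical helper name, shown for illustration only;
# see the GradientAccumulator docs for the actual API
from gradient_accumulator.utils import replace_batchnorm_layers

# a pretrained network containing standard BatchNormalization layers
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# swap every BatchNormalization layer for AccumBN, transferring the
# pretrained weights so the network behaves the same before fine-tuning
model = replace_batchnorm_layers(model, accum_steps=4)
```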

What's Changed
* Docs: Support tf 2.2-2.12 by andreped in https://github.com/andreped/GradientAccumulator/pull/100
* Allow poorer approximation for older tf versions in model test by andreped in https://github.com/andreped/GradientAccumulator/pull/101
* Fixed typo in setup.cfg by andreped in https://github.com/andreped/GradientAccumulator/pull/104
* Ignore .pyc [no ci] by andreped in https://github.com/andreped/GradientAccumulator/pull/106
* Delete redundant .pyc file [no ci] by andreped in https://github.com/andreped/GradientAccumulator/pull/107
* Added Applications to README by andreped in https://github.com/andreped/GradientAccumulator/pull/109
* Fixed whl installation in test CI by andreped in https://github.com/andreped/GradientAccumulator/pull/110
* Added method to replace BN layers by andreped in https://github.com/andreped/GradientAccumulator/pull/112


**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.5.1...v0.5.2

0.5.1

Announcement
This patch release adds support for all tf versions `2.2-2.12` and Python `3.6-3.11`. The model wrapper should work as intended for all combinations, whereas the optimizer wrapper is only compatible with `tf>=2.8`, and with degraded performance for `tf>=2.10`.
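
For reference, a minimal sketch of the optimizer wrapper (the model, loss, and hyperparameters here are illustrative, not part of these notes):

```python
import tensorflow as tf
from gradient_accumulator import GradientAccumulateOptimizer

# wrap a regular Keras optimizer so gradients are accumulated over
# several batches before each weight update (requires tf>=2.8)
opt = GradientAccumulateOptimizer(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
    accum_steps=4,
)

# tiny illustrative model
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer=opt, loss="mse")
```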

What's Changed
* v0.5.0 zenodo + cite by andreped in https://github.com/andreped/GradientAccumulator/pull/89
* Added opt distribute unit test + added model distribute test to CI by andreped in https://github.com/andreped/GradientAccumulator/pull/91
* Further refined the bug report template by andreped in https://github.com/andreped/GradientAccumulator/pull/97
* Fixed dynamic optimizer wrapper inheritance + support tf >= 2.8 by andreped in https://github.com/andreped/GradientAccumulator/pull/95
* Fixed tensorflow-datasets protobuf issue by andreped in https://github.com/andreped/GradientAccumulator/pull/98
* Added model wrapper test for tf<2.8 + refactored tests by andreped in https://github.com/andreped/GradientAccumulator/pull/99


**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.5.0...v0.5.1

0.5.0

New feature!
* Multi-GPU support has now been added, for both the optimizer and model wrappers!
* Note that only SGD works with the model wrapper, due to challenges controlling the optimizer state during gradient accumulation (see the sketch below).
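
A minimal sketch of multi-GPU training with the model wrapper (the architecture and loss are illustrative; the `tf.distribute` usage is standard TensorFlow):

```python
import tensorflow as tf
from gradient_accumulator import GradientAccumulateModel

# build and wrap the model inside the distribution strategy scope
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    inp = tf.keras.layers.Input(shape=(16,))
    x = tf.keras.layers.Dense(32, activation="relu")(inp)
    out = tf.keras.layers.Dense(10)(x)
    model = GradientAccumulateModel(accum_steps=4, inputs=inp, outputs=out)

    # as noted above, only SGD currently works with the model wrapper
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2), loss="mse")
```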

What's Changed
* DOI badge + bibtex update in README by andreped in https://github.com/andreped/GradientAccumulator/pull/77
* Fixed DOI in README by andreped in https://github.com/andreped/GradientAccumulator/pull/78
* batch size vs mini-batch size by mhoibo in https://github.com/andreped/GradientAccumulator/pull/79
* Docs: README + redirect + tf <=2.10 + python <= 3.11 by andreped in https://github.com/andreped/GradientAccumulator/pull/80
* Added multi-gpu support (model wrapper) by tno123 in https://github.com/andreped/GradientAccumulator/pull/82
* Multi-GPU works + macOS-11 CI + Docs update by andreped in https://github.com/andreped/GradientAccumulator/pull/83
* added parameter count test by tno123 in https://github.com/andreped/GradientAccumulator/pull/84
* Added linting + author update + v0.5.0 by andreped in https://github.com/andreped/GradientAccumulator/pull/86
* Reduced sphinx version + fixed import by andreped in https://github.com/andreped/GradientAccumulator/pull/87
* Docs: added urllib3==1.26.15 to docs req by andreped in https://github.com/andreped/GradientAccumulator/pull/88

New Contributors
* mhoibo made their first contribution in https://github.com/andreped/GradientAccumulator/pull/79
* tno123 made their first contribution in https://github.com/andreped/GradientAccumulator/pull/82

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.4.2...v0.5.0

0.4.2

What's Changed
* Add seamless support for mixed precision in AccumBatchNormalization by andreped in https://github.com/andreped/GradientAccumulator/pull/66
* Cleanup and refactored unit tests by andreped in https://github.com/andreped/GradientAccumulator/pull/67
* CI test dispatch + README improvements by andreped in https://github.com/andreped/GradientAccumulator/pull/69
* Minor fix for Accum BN in 3D [skip ci] by dbouget in https://github.com/andreped/GradientAccumulator/pull/70
* Added mixed precision AccumBN CI test by andreped in https://github.com/andreped/GradientAccumulator/pull/74
* Fixed AccumBN to work ND by andreped in https://github.com/andreped/GradientAccumulator/pull/75
* README technique order update by andreped in https://github.com/andreped/GradientAccumulator/pull/76


**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.4.1...v0.4.2

0.4.1

What's Changed
* Added issue templates by andreped in https://github.com/andreped/GradientAccumulator/pull/59
* Fixed bug in AccumBatchNormalizer - identical results to Keras BN by andreped in https://github.com/andreped/GradientAccumulator/pull/61
* Docs: Added AccumBN example + docs README + minor fixes by andreped in https://github.com/andreped/GradientAccumulator/pull/62
* bump v0.4.1 by andreped in https://github.com/andreped/GradientAccumulator/pull/63

New API
You can now use gradient accumulation with the `AccumBatchNormalization` layer:

```python
from gradient_accumulator import GradientAccumulateModel, AccumBatchNormalization
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Dense

# define model and add accum BN layer
model = Sequential()
model.add(Input(shape=(16,)))  # input shape is illustrative; needed so model.input is defined
model.add(Dense(32, activation="relu"))
model.add(AccumBatchNormalization(accum_steps=8))
model.add(Dense(10))

# add gradient accumulation to the rest of the model
model = GradientAccumulateModel(accum_steps=8, inputs=model.input, outputs=model.output)
```


More information on usage, along with further remarks, can be found at [gradientaccumulator.readthedocs.io](https://gradientaccumulator.readthedocs.io/en/latest/examples/batch_normalization.html).

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.4.0...v0.4.1

0.4.0

What's Changed
* Added custom `AccumBatchNormalization` layer with gradient accumulation support.
* Added more unit tests -> code coverage = 99%
* Added proper documentation, hosted at [gradientaccumulator.readthedocs.io](https://gradientaccumulator.readthedocs.io/)
* Reduced runtime on several unit tests to make CI jobs faster
* Added fix for `protobuf` for `tfds` in CIs
* Reworked README: moved most content to the documentation + added CI section w/ badges
* Header image by jpdefrutos in https://github.com/andreped/GradientAccumulator/pull/51

New Contributors
* jpdefrutos made their first contribution in https://github.com/andreped/GradientAccumulator/pull/51

New API feature

```python
from gradient_accumulator import AccumBatchNormalization

layer = AccumBatchNormalization(accum_steps=4)
```


It can be used as a regular Keras `BatchNormalization` layer, but with reduced functionality.

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.3.2...v0.4.0
