GradientAccumulator

Latest version: v0.5.2


0.1.5

**Changes:**
- Added mixed precision support (only `float16` currently, which is compatible with NVIDIA GPUs)
- Added adaptive gradient clipping support (normalization-free approach which works with GA)
- Added CI test for AGC
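Adaptive gradient clipping (from Brock et al.'s work on normalizer-free networks) scales a gradient down whenever its norm grows out of proportion to the corresponding weight norm, which makes it a good fit for gradient accumulation since it needs no batch statistics. A minimal NumPy sketch of the idea follows; the function name and the layer-wise (rather than unit-wise) granularity are illustrative assumptions, not the package's actual API:

```python
import numpy as np

def adaptive_clip(grad, param, clip_factor=0.01, eps=1e-3):
    """Layer-wise adaptive gradient clipping sketch.

    Scales the gradient down whenever its norm exceeds ``clip_factor``
    times the parameter norm, so the update stays proportional to the
    weight magnitude.
    """
    p_norm = max(np.linalg.norm(param), eps)  # guard against zero-init weights
    g_norm = np.linalg.norm(grad)
    max_norm = clip_factor * p_norm
    if g_norm > max_norm:
        return grad * (max_norm / g_norm)  # rescale, preserving direction
    return grad
```

A gradient ten times larger than the weights would be rescaled so that its norm is exactly `clip_factor` times the weight norm, while small gradients pass through unchanged.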

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.1.4...v0.1.5

0.1.4

Added a Zenodo DOI for releases and updated the README with current installation and usage documentation.

**Changes:**
- Renamed `n_gradients` to `accum_steps`
- Added citation policy and Zenodo citation

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.1.3...v0.1.4

0.1.3

GradientAccumulator is now available on PyPI: https://pypi.org/project/gradient-accumulator/#files

**Changes:**
- Added experimental mixed precision support
- Added support for TF >= 2.2
- Added support for Python 3.7 and newer
- Added pytest to CI for unit testing
- Added CI test for mixed precision
- Added CI test for multi-input-output models
- Added CI test for optimizer invariance
- Added CI test for basic MNIST training
- Added CI test to verify that we get expected result for GA vs regular batch training

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.1.2...v0.1.3

0.1.2

**Changes:**
- Fixed a critical bug in the gradient updates (use MEAN reduction instead of SUM reduction)
- GA now yields results identical to regular batch training
- Added pytest unit tests that raise an AssertionError if results differ
- Added compatibility with `sample_weight` - GAModelWrapper should now be _fully_ compatible with `model.compile`/`model.fit`
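Why MEAN reduction makes accumulated training match regular batch training can be shown with a small NumPy sketch (a linear model with MSE loss; all names are illustrative): averaging the per-micro-batch mean gradients reproduces the full-batch gradient exactly, whereas summing them would scale the update by `accum_steps`.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))  # full batch of 8 samples
y = rng.normal(size=8)
w = rng.normal(size=3)

def grad(xb, yb, w):
    """MSE gradient, averaged over the micro-batch (the usual Keras behavior)."""
    return 2 * xb.T @ (xb @ w - yb) / len(xb)

# Regular batch training: one gradient over all 8 samples.
g_full = grad(X, y, w)

# Gradient accumulation with accum_steps=4: average the per-step
# mean gradients (MEAN reduction) instead of summing them.
accum_steps = 4
g_accum = np.zeros_like(w)
for xb, yb in zip(np.split(X, accum_steps), np.split(y, accum_steps)):
    g_accum += grad(xb, yb, w)
g_accum /= accum_steps  # MEAN reduction

assert np.allclose(g_full, g_accum)  # identical to regular batch training
```

With SUM reduction the final line would instead leave `g_accum` equal to `accum_steps * g_full`, effectively multiplying the learning rate by the number of accumulation steps.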

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.1.1...v0.1.2

0.1.1

**Changes:**
- Swapped the optimizer wrapper solution for a Model wrapper solution
- Enables adding gradient accumulation support to "_any_" `tf.keras.Model` by simply overloading the `train_step` method
- Added the convenience class `GAModelWrapper` that handles all of this for you - just provide the model!
- The solution should also be more compatible with older TF versions, as `train_step` overloading has been supported since TF 2.2
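The wrapper pattern described above can be sketched framework-agnostically: override the per-batch training step, buffer gradients, and only apply an (averaged) update every `accum_steps` calls. All names below are illustrative stand-ins for the `tf.keras.Model.train_step` machinery, not the package's actual API:

```python
class AccumulatingModel:
    """Minimal sketch of the GAModelWrapper idea: the wrapped model's
    train_step buffers gradients and applies them every accum_steps calls."""

    def __init__(self, accum_steps, n_params):
        self.accum_steps = accum_steps
        self.step = 0
        self.buffer = [0.0] * n_params  # accumulated gradients
        self.applied = []               # history of applied updates

    def compute_gradients(self, batch):
        # Stand-in for the usual forward pass + backpropagation.
        return [float(x) for x in batch]

    def train_step(self, batch):
        grads = self.compute_gradients(batch)
        self.buffer = [b + g for b, g in zip(self.buffer, grads)]
        self.step += 1
        if self.step % self.accum_steps == 0:
            # MEAN reduction: average, apply, then reset the buffer.
            mean = [b / self.accum_steps for b in self.buffer]
            self.applied.append(mean)
            self.buffer = [0.0] * len(self.buffer)

model = AccumulatingModel(accum_steps=2, n_params=2)
model.train_step([1, 3])  # buffered only, no update yet
model.train_step([3, 5])  # second call triggers one averaged update
# model.applied == [[2.0, 4.0]]
```

The appeal of putting this logic in `train_step` rather than in the optimizer is that `model.compile`/`model.fit` and any optimizer keep working unchanged.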

**Full Changelog**: https://github.com/andreped/GradientAccumulator/compare/v0.1.0...v0.1.1

0.1.0

First release of the GradientAccumulator package that enables usage of accumulated gradients in TensorFlow 2.x by simply wrapping an optimizer.

Compatible with Python 3.7-3.9, tested with TensorFlow 2.8.0 and 2.9.1, and cross-platform (Windows, Ubuntu, and macOS).

**Full Changelog**: https://github.com/andreped/GradientAccumulator/commits/v0.1.0
