Accumulated Gradients for TensorFlow 2
MIT License
The main feature of this patch release is that AccumBatchNormalization can now be used as a drop-in replacement for any BatchNormalization layer, even in pretrained networks. Existing weights are transferred correctly, and the documentation has been updated to describe how to do this.
import tensorflow as tf
from gradient_accumulator import GradientAccumulateModel
from gradient_accumulator.layers import AccumBatchNormalization
from gradient_accumulator.utils import replace_batchnorm_layers
accum_steps = 4
# replace BN layer with AccumBatchNormalization
model = tf.keras.applications.MobileNetV2(input_shape=(28, 28, 3))
model = replace_batchnorm_layers(model, accum_steps=accum_steps)
# add gradient accumulation to existing model
model = GradientAccumulateModel(accum_steps=accum_steps, inputs=model.input, outputs=model.output)
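For intuition on what the wrapper does: averaging the gradients of accum_steps small batches before applying a single update is equivalent to one update computed over the combined batch. A minimal pure-Python sketch of that arithmetic (toy 1-D least-squares model, hypothetical names, not the library's implementation):

```python
# Toy model y = w * x with loss = mean((w*x - y)^2); illustrates that
# accumulating micro-batch gradients matches a single full-batch update.

def grad(w, xs, ys):
    """Mean gradient of the squared error over a batch."""
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # true w = 2
w, lr, accum_steps = 0.0, 0.01, 4

# Accumulate per-sample (micro-batch of size 1) gradients, update once.
acc = 0.0
for i in range(accum_steps):
    acc += grad(w, xs[i:i + 1], ys[i:i + 1])
w_accum = w - lr * (acc / accum_steps)

# One update over the full batch gives the same result.
w_full = w - lr * grad(w, xs, ys)

assert abs(w_accum - w_full) < 1e-12
print(w_accum)  # 0.3
```

This is why gradient accumulation lets you simulate a large effective batch size on limited GPU memory.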
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.5.1...v0.5.2
Published by andreped over 1 year ago
This patch release adds support for all TensorFlow versions 2.2-2.12 and Python 3.6-3.11. The model wrapper should work as intended for all combinations, whereas the optimizer is only compatible with tf>=2.8, with poorer performance for tf>=2.10.
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.5.0...v0.5.1
Published by andreped over 1 year ago
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.4.2...v0.5.0
Published by andreped over 1 year ago
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.4.1...v0.4.2
Published by andreped over 1 year ago
You can now use gradient accumulation with the AccumBatchNormalization layer:
from gradient_accumulator import GradientAccumulateModel, AccumBatchNormalization
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# define model and add accum BN layer
model = Sequential()
model.add(Dense(32, activation="relu"))
model.add(AccumBatchNormalization(accum_steps=8))
model.add(Dense(10))
# add gradient accumulation to the rest of the model
model = GradientAccumulateModel(accum_steps=8, inputs=model.input, outputs=model.output)
More information on usage and known caveats can be found at gradientaccumulator.readthedocs.io
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.4.0...v0.4.1
Published by andreped over 1 year ago
What's changed:
- Added the AccumBatchNormalization layer with gradient accumulation support.
- Pinned protobuf for tfds in CIs.

from gradient_accumulator import AccumBatchNormalization

layer = AccumBatchNormalization(accum_steps=4)

It can be used as a regular Keras BatchNormalization layer, but with reduced functionality.
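For intuition on why a dedicated layer is needed: a plain BatchNormalization layer normalizes with the statistics of each small micro-batch, while under gradient accumulation you want the statistics of the full effective batch. A toy sketch of the accumulation idea (illustrative only, not the library's implementation):

```python
# Accumulating sums and counts across micro-batches recovers the
# statistics of the full effective batch.

micro_batches = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]

# Accumulate the running sum and sample count across micro-batches...
total, count = 0.0, 0
for batch in micro_batches:
    total += sum(batch)
    count += len(batch)
accum_mean = total / count

# ...which matches the mean over the full (effective) batch.
full_batch = [x for b in micro_batches for x in b]
full_mean = sum(full_batch) / len(full_batch)

assert accum_mean == full_mean
print(accum_mean)  # 4.5
```

A per-micro-batch mean (e.g. 1.5 for the first pair) would differ from the effective-batch mean, which is why naive BN degrades with small micro-batches.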
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.3.2...v0.4.0
Published by andreped over 1 year ago
What's changed:
- Updated GradientAccumulateOptimizer to support tf >= 2.10 by dynamically inheriting from the (legacy) Optimizer, related to https://github.com/andreped/GradientAccumulator/issues/37
- Removed the tensorflow-addons dependency (now only tensorflow is required), related to https://github.com/andreped/GradientAccumulator/issues/40
- Follow-up fixes related to the tf-addons removal, see https://github.com/andreped/GradientAccumulator/issues/44
- Fixed tensorflow-datasets versioning in unit tests to work across all relevant setups, see https://github.com/andreped/GradientAccumulator/issues/41
- Benchmarked the AccumBatchNormalization layer, demonstrating similar results to regular Keras BN, see here
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.3.1...v0.3.2
pip install gradient-accumulator==0.3.2
from gradient_accumulator.layers import AccumBatchNormalization
model = Sequential()
model.add(AccumBatchNormalization())
Published by andreped over 1 year ago
What's changed:
- Renamed GAModelWrapper to GradientAccumulateModel.
- Renamed GAOptimizerWrapper to GradientAccumulateOptimizer.
- Dropped support for tensorflow==2.2 due to tensorflow-addons incompatibility. Now tf >= 2.3 is supported.

Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.3.0...v0.3.1
pip install gradient-accumulator==0.3.1
from gradient_accumulator import GradientAccumulateModel
model = Model(...)
model = GradientAccumulateModel(accum_steps=4, inputs=model.input, outputs=model.output)
from gradient_accumulator import GradientAccumulateOptimizer
opt = tf.keras.optimizers.SGD(1e-2)
opt = GradientAccumulateOptimizer(accum_steps=4, optimizer=opt)
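The wrapper pattern above can be sketched in plain Python: buffer incoming gradients and forward one averaged update to the inner optimizer every accum_steps calls. The classes below are hypothetical stand-ins, not the library's implementation:

```python
# Minimal sketch of an accumulating optimizer wrapper.

class SGD:
    """Stand-in inner optimizer: plain gradient descent on a scalar."""
    def __init__(self, lr):
        self.lr = lr

    def apply_gradients(self, grad, param):
        return param - self.lr * grad

class AccumOptimizerSketch:
    """Buffers gradients; applies the mean every `accum_steps` calls."""
    def __init__(self, optimizer, accum_steps):
        self.optimizer = optimizer
        self.accum_steps = accum_steps
        self.acc = 0.0
        self.step = 0

    def apply_gradients(self, grad, param):
        self.acc += grad
        self.step += 1
        if self.step % self.accum_steps == 0:
            param = self.optimizer.apply_gradients(
                self.acc / self.accum_steps, param)
            self.acc = 0.0  # reset buffer for the next cycle
        return param

opt = AccumOptimizerSketch(SGD(lr=0.1), accum_steps=4)
w = 1.0
for g in [0.4, 0.8, 1.2, 1.6]:  # four micro-batch gradients
    w = opt.apply_gradients(g, w)
print(w)  # one update with the mean gradient 1.0 -> 0.9
```

The parameter only changes on the fourth call, which is why the optimizer route is transparent to the training loop.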
Published by andreped over 1 year ago
GAOptimizerWrapper
by @andreped in https://github.com/andreped/GradientAccumulator/pull/28
GAModelWrapperV2
by @andreped in https://github.com/andreped/GradientAccumulator/pull/28
pip install gradient-accumulator==0.3.0
| Method | Usage |
| --- | --- |
| GAModelWrapper | model = GAModelWrapper(accum_steps=4, inputs=model.input, outputs=model.output) |
| GAOptimizerWrapper | opt = GAOptimizerWrapper(accum_steps=4, optimizer=tf.keras.optimizers.Adam(1e-3)) |
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.2.2...v0.3.0
Published by andreped almost 2 years ago
This is a minor patch release.
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.2.1...v0.2.2
Published by andreped about 2 years ago
This is a minor patch release.
What's changed:
- Fixed typo: renamed use_acg to use_agc.

Published by andreped over 2 years ago
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.1.5...v0.2.0
Published by andreped over 2 years ago
Changes:
- Added mixed precision support (only float16 currently, which is compatible with NVIDIA GPUs).

Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.1.4...v0.1.5
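A note on why float16 training is delicate: small gradients underflow to zero in half precision, which is why mixed-precision setups typically apply loss scaling. A toy illustration below, simulating float16 storage via a round-trip helper (hypothetical code, not the library's implementation):

```python
# Demonstrates float16 gradient underflow and the loss-scaling fix.
import struct

def to_float16(x):
    # Round-trip through IEEE half precision ('e' format) to emulate
    # storing a value in float16.
    return struct.unpack('e', struct.pack('e', x))[0]

tiny_grad = 1e-8   # representative very small gradient
scale = 1024.0     # loss-scaling factor

unscaled = to_float16(tiny_grad)        # underflows: too small for float16
scaled = to_float16(tiny_grad * scale)  # survives in float16
recovered = scaled / scale              # unscale in higher precision

print(unscaled)   # 0.0
print(recovered)  # ~1e-8
```

Scaling the loss by a constant scales every gradient by the same constant, so dividing it back out after the backward pass recovers the true gradient while keeping intermediate values representable.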
Published by andreped over 2 years ago
Zenodo DOI release and updated README to contain updated documentation regarding installation and usage.
Changes:
- Renamed n_gradients to accum_steps.

Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.1.3...v0.1.4
Published by andreped over 2 years ago
GradientAccumulator is now available on PyPI:
https://pypi.org/project/gradient-accumulator/#files
Changes:
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.1.2...v0.1.3
Published by andreped over 2 years ago
Changes:
- Added support for sample_weight - now GAModelWrapper should be fully compatible with model.compile/fit.

Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.1.1...v0.1.2
Published by andreped over 2 years ago
Changes:
Full Changelog: https://github.com/andreped/GradientAccumulator/compare/v0.1.0...v0.1.1
Published by andreped over 2 years ago
First release of the GradientAccumulator package that enables usage of accumulated gradients in TensorFlow 2.x by simply wrapping an optimizer.
Currently compatible with Python 3.7-3.9, tested with TensorFlow 2.8.0 and 2.9.1, and cross-platform (Windows, Ubuntu, and macOS).
Full Changelog: https://github.com/andreped/GradientAccumulator/commits/v0.1.0