# Keras Implementation of Mode Normalization

A Keras layer implementing Mode Normalization (Lucas Deecke, Iain Murray, Hakan Bilen, 2018). MIT License.

```python
ModeNormalization(axis=-1, k=2, momentum=0.99, epsilon=1e-3, center=True,
                  scale=True, beta_initializer='zeros',
                  gamma_initializer='ones', moving_mean_initializer='zeros',
                  moving_variance_initializer='ones', beta_regularizer=None,
                  gamma_regularizer=None, beta_constraint=None,
                  gamma_constraint=None)
```
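The `momentum` and `epsilon` arguments follow the usual Keras convention for running statistics: moving estimates are blended toward each batch's statistics. As a minimal sketch (not this repository's code), the update looks like:

```python
import numpy as np

def update_moving(moving, batch_value, momentum=0.99):
    # Keras-style exponential moving average:
    # moving <- momentum * moving + (1 - momentum) * batch_value
    return momentum * moving + (1.0 - momentum) * batch_value

moving_mean = np.zeros(3)
for _ in range(5):
    batch_mean = np.ones(3)  # pretend every batch has mean 1
    moving_mean = update_moving(moving_mean, batch_mean)
print(moving_mean)  # approaches 1 slowly; ~0.049 after 5 steps
```

With `momentum=0.99` the moving estimates adapt slowly, which is why they are only trusted at inference time after many training batches.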
Mode Normalization (Lucas Deecke, Iain Murray, Hakan Bilen, 2018) normalizes the activations of the previous layer at each batch, i.e. it applies a transformation that keeps the mean activation close to 0 and the activation standard deviation close to 1 within each of `k` modes.
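As a rough illustration of the idea (a sketch of the paper's forward pass, not this repository's implementation), the following NumPy snippet softly assigns each sample to one of `k` modes via a softmax gate, then normalizes with each mode's own weighted mean and variance. The gate parameters `W` and `b` are hypothetical stand-ins for the paper's learned gating network, and the affine `beta`/`gamma` step is omitted:

```python
import numpy as np

def mode_norm_forward(x, W, b, eps=1e-3):
    """Sketch of mode normalization for a 2-D batch x of shape (N, C).

    W has shape (k, C) and b shape (k,); together they parameterize a
    simplified softmax gating network over k modes.
    """
    # Soft-assign each sample to a mode: gates g[n, m] sum to 1 over m.
    logits = x @ W.T + b
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    g = np.exp(logits)
    g /= g.sum(axis=1, keepdims=True)

    n_m = g.sum(axis=0)  # effective number of samples per mode
    out = np.zeros_like(x)
    for m in range(W.shape[0]):
        w = g[:, m:m + 1]                           # (N, 1) responsibilities
        mu = (w * x).sum(axis=0) / n_m[m]           # weighted mean, (C,)
        var = (w * (x - mu) ** 2).sum(axis=0) / n_m[m]
        # Each mode contributes its normalized estimate, weighted by its gate.
        out += w * (x - mu) / np.sqrt(var + eps)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))
y = mode_norm_forward(x, W=rng.normal(size=(2, 4)), b=np.zeros(2))
print(y.shape)  # (8, 4)
```

With `k=1` the gates are identically 1 and the computation reduces to plain batch normalization (zero mean, unit standard deviation per feature).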
## Arguments

- `axis`: Integer, the axis that should be normalized (typically the features axis). For instance, after a `Conv2D` layer with `data_format="channels_first"`, set `axis=1` in `ModeNormalization`.
- `k`: Integer, the number of modes.
- `center`: If True, add the offset `beta` to the normalized tensor. If False, `beta` is ignored.
- `scale`: If True, multiply by `gamma`. If False, `gamma` is not used. When the next layer is linear (also e.g. `nn.relu`), this can be disabled, since the scaling will be done by the next layer.

When using this layer as the first layer in a model, pass an `input_shape` argument (tuple of integers, does not include the samples axis).

## Tests

Run the test suite with `pytest`.