Published by fchollet over 4 years ago
As previously announced, we have discontinued multi-backend Keras to refocus exclusively on the TensorFlow implementation of Keras.

In the future, we will develop the TensorFlow implementation of Keras in the present repo, at keras-team/keras. For the time being, it is being developed in tensorflow/tensorflow and distributed as tensorflow.keras. In that future, the keras package on PyPI will be the same as tf.keras.
This release (2.4.0) simply redirects all APIs in the standalone keras package to point to tf.keras. This helps address user confusion regarding differences and incompatibilities between tf.keras and the standalone keras package. There is now only one Keras: tf.keras.

For the time being, we recommend using from tensorflow import keras, rather than import keras.

Published by fchollet about 5 years ago
Keras 2.3.1 is a minor bug-fix release. In particular, it fixes an issue with using Keras models across multiple threads.
Published by fchollet about 5 years ago
Keras 2.3.0 is the first release of multi-backend Keras that supports TensorFlow 2.0. It maintains compatibility with TensorFlow 1.14 and 1.13, as well as Theano and CNTK.

This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. Note, however, that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras.
This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API (so switching should be as easy as changing the Keras import statements), but it has many advantages for TensorFlow users, such as support for eager execution, distribution, TPU training, and generally far better integration between low-level TensorFlow and high-level concepts like Layer and Model. It is also better maintained.
Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported.
API changes:

- Add size(x) to the backend API.
- add_metric method added to Layer / Model (used in a similar way as add_loss, but for metrics), as well as the metrics property.
- Variables set as attributes of a Layer are now tracked in layer.weights (including in layer.trainable_weights or layer.non_trainable_weights as appropriate).

Losses can now be provided as objects (subclasses of the Loss base class). This enables losses to be parameterized via constructor arguments. Loss classes added: MeanSquaredError, MeanAbsoluteError, MeanAbsolutePercentageError, MeanSquaredLogarithmicError, BinaryCrossentropy, CategoricalCrossentropy, SparseCategoricalCrossentropy, Hinge, SquaredHinge, CategoricalHinge, Poisson, LogCosh, KLDivergence, Huber.
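To make the pattern concrete, here is a minimal, framework-free sketch of a constructor-parameterized loss object, using Huber as the example. The class name mirrors the Keras API; the implementation is a simplified stand-in, not the Keras code.

```python
# Sketch of the "loss as object" pattern: hyperparameters are fixed in the
# constructor, and the instance is later called on (y_true, y_pred).
class Huber:
    def __init__(self, delta=1.0):  # parameterized via constructor
        self.delta = delta

    def __call__(self, y_true, y_pred):
        total = 0.0
        for t, p in zip(y_true, y_pred):
            err = abs(t - p)
            if err <= self.delta:
                total += 0.5 * err ** 2                         # quadratic near zero
            else:
                total += self.delta * (err - 0.5 * self.delta)  # linear tail
        return total / len(y_true)

loss = Huber(delta=1.5)
print(loss([0.0, 2.0], [0.0, 0.0]))  # 0.9375
```

Because the object carries its own configuration, it can be serialized and passed around like any other compile-time argument.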
Metrics can now be provided as objects (subclasses of the Metric base class). This enables metrics to be stateful (e.g. as required to support AUC) and to be parameterized via constructor arguments. Metric classes added: Accuracy, MeanSquaredError, Hinge, CategoricalHinge, SquaredHinge, FalsePositives, TruePositives, FalseNegatives, TrueNegatives, BinaryAccuracy, CategoricalAccuracy, TopKCategoricalAccuracy, LogCoshError, Poisson, KLDivergence, CosineSimilarity, MeanAbsoluteError, MeanAbsolutePercentageError, MeanSquaredLogarithmicError, RootMeanSquaredError, BinaryCrossentropy, CategoricalCrossentropy, Precision, Recall, AUC, SparseCategoricalAccuracy, SparseTopKCategoricalAccuracy, SparseCategoricalCrossentropy.
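A minimal, framework-free sketch of the stateful-metric pattern. The method names (update_state, result, reset_states) mirror the Keras Metric API; the implementation is illustrative, not the Keras code.

```python
# Sketch of a stateful metric: state accumulates across batches until reset.
class BinaryAccuracy:
    def __init__(self, threshold=0.5):  # parameterized via constructor
        self.threshold = threshold
        self.reset_states()

    def update_state(self, y_true, y_pred):
        # Accumulate counts across successive calls (i.e. across batches).
        for t, p in zip(y_true, y_pred):
            self.correct += int((p > self.threshold) == bool(t))
            self.total += 1

    def result(self):
        return self.correct / self.total if self.total else 0.0

    def reset_states(self):
        self.correct = 0
        self.total = 0

m = BinaryAccuracy()
m.update_state([1, 0], [0.9, 0.2])  # both correct
m.update_state([1, 1], [0.3, 0.8])  # one wrong
print(m.result())  # 0.75
```

Calling reset_states() at the start of each epoch (as model.reset_metrics() does, see below) clears the accumulated counts.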
Further API changes:

- Add reset_metrics argument to train_on_batch and test_on_batch. Set this to False to maintain metric state across different batches when writing lower-level training/evaluation loops. If True (the default), the metric value reported as output of the method call will be the value for the current batch only.
- Add model.reset_metrics() method to Model. Use this at the start of an epoch to clear metric state when writing lower-level training/evaluation loops.
- Rename lr to learning_rate for all optimizers.
- Deprecate argument decay for all optimizers. For learning rate decay, use LearningRateSchedule objects in tf.keras.
- In the TensorBoard callback: the batch_size argument is deprecated (ignored) when used with TF 2.0; write_grads is deprecated (ignored) when used with TF 2.0; embeddings_freq, embeddings_layer_names, embeddings_metadata, and embeddings_data are deprecated (ignored) when used with TF 2.0.
- Metrics are now reported under the exact name you pass: if you use metrics=['acc'], your metric will be reported under the string "acc", not "accuracy", and inversely metrics=['accuracy'] will be reported under the string "accuracy".
- Change the default recurrent activation to sigmoid (from hard_sigmoid) in all RNN layers.

Published by fchollet about 5 years ago
Keras 2.2.5 is the last release of Keras that implements the 2.2.* API. It is the last release to only support TensorFlow 1 (as well as Theano and CNTK).
The next release will be 2.3.0, which makes significant API changes and adds support for TensorFlow 2.0. The 2.3.0 release will be the last major release of multi-backend Keras. Multi-backend Keras is superseded by tf.keras.

At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to tf.keras in TensorFlow 2.0. tf.keras is better maintained and has better integration with TensorFlow features.
API changes:

- Add new Applications: ResNet101, ResNet152, ResNet50V2, ResNet101V2, ResNet152V2.
- Enable callbacks in evaluate and predict: add a callbacks argument (list of callback instances) to both methods.
- Add callback methods on_train_batch_begin, on_train_batch_end, on_test_batch_begin, on_test_batch_end, on_predict_batch_begin, on_predict_batch_end, as well as on_test_begin, on_test_end, on_predict_begin, on_predict_end. Methods on_batch_begin and on_batch_end are now aliases for on_train_batch_begin and on_train_batch_end.
- Allow file pointers in save_model and load_model (in place of the filepath).
- Add name argument in the Sequential constructor.
- Add validation_freq argument in fit, controlling the frequency of validation (e.g. setting validation_freq=3 would run validation every 3 epochs).
- Allow data generators to be passed directly to fit, evaluate, and predict, instead of having to use *_generator methods. Add arguments max_queue_size, workers, use_multiprocessing to these methods.
- Add dilation_rate argument in layer DepthwiseConv2D.
- Rename argument m to max_value.
- Add dtype argument in the base layer (default dtype for the layer's weights).
- Serialization support for the Tokenizer class.
- Add H5Dict and model_to_dot to utils.
- Add arguments expand_nested and dpi to plot_model.
- Add update_sub, stack, cumsum, cumprod, foldl, foldr to the CNTK backend.
- Add merge_repeated argument to ctc_decode in the TensorFlow backend.

Thanks to the 89 committers who contributed code to this release!
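The per-stage callback hooks can be illustrated with a framework-free sketch (not the Keras implementation; method names mirror the API): a driver loop notifies each callback before and after every batch.

```python
# Sketch of per-stage callback dispatch: the evaluation loop calls the
# matching hook on every registered callback.
class Callback:
    def on_test_batch_begin(self, batch): pass
    def on_test_batch_end(self, batch): pass

class CountingCallback(Callback):
    def __init__(self):
        self.seen = 0
    def on_test_batch_end(self, batch):
        self.seen += 1

def evaluate(batches, callbacks):
    for i, _ in enumerate(batches):
        for cb in callbacks:
            cb.on_test_batch_begin(i)
        # ... compute metrics on the batch here ...
        for cb in callbacks:
            cb.on_test_batch_end(i)

cb = CountingCallback()
evaluate([[1], [2], [3]], [cb])
print(cb.seen)  # 3
```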
Published by fchollet about 6 years ago
This is a bugfix release, addressing two issues; one of them affects the Sequential model. See here for the changelog since 2.2.2.
Published by fchollet about 6 years ago
API changes:

- Fold the functionality of ThresholdedReLU and LeakyReLU into the ReLU layer: the ReLU layer now takes new arguments negative_slope and threshold, and the relu function in the backend takes a new threshold argument.
- Add update_freq argument in the TensorBoard callback, controlling how often to write TensorBoard logs.
- Add the exponential function to keras.activations.
- Add data_format argument in all 4 Pooling1D layers.
- Add interpolation argument in the UpSampling2D layer and in the resize_images backend function, supporting modes "nearest" (previous behavior, and new default) and "bilinear" (new).
- Add dilation_rate argument in the Conv2DTranspose layer and in the conv2d_transpose backend function.
- The LearningRateScheduler now receives the lr key as part of the logs argument in on_epoch_end (current value of the learning rate).
- Make the GlobalAveragePooling1D layer support masking.
- The filepath argument of save_model and model.save() can now be a h5py.Group instance.
- Add restore_best_weights argument to the EarlyStopping callback (optionally reverts to the weights that obtained the highest monitored score value).
- Add dtype argument to keras.utils.to_categorical.
- Support run_options and run_metadata as optional session arguments in model.compile() for the TensorFlow backend.

Breaking changes:

- The return value of Sequential.get_config() has changed. Previously, the return value was a list of the config dictionaries of the layers of the model. Now, the return value is a dictionary with keys layers, name, and an optional key build_input_shape. The old config is equivalent to new_config['layers']. This makes the output of get_config consistent across all model classes.

Thanks to our 38 contributors whose commits are featured in this release:
@BertrandDechoux, @ChrisGll, @Dref360, @JamesHinshelwood, @MarcoAndreaBuchmann, @ageron, @alfasst, @blue-atom, @chasebrignac, @cshubhamrao, @danFromTelAviv, @datumbox, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @hadifar, @heytitle, @hsgkim, @jankrepl, @joelthchao, @knightXun, @kouml, @linjinjin123, @lvapeab, @nikoladze, @ozabluda, @qlzh727, @roywei, @rvinas, @sriyogesh94, @tacaswell, @taehoonlee, @tedyu, @xuhdev, @yanboliang, @yongzx, @yuanxiaosc
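To illustrate the Sequential.get_config() change described above (the layer configs here are hypothetical and trimmed for brevity):

```python
# Hypothetical, trimmed layer configs, for illustration only.
old_style_config = [
    {'class_name': 'Dense', 'config': {'units': 32}},
    {'class_name': 'Dense', 'config': {'units': 10}},
]

# Keras 2.2.4+ wraps the same list in a dictionary:
new_style_config = {
    'name': 'sequential_1',
    'layers': old_style_config,
    # An optional 'build_input_shape' key appears if the model was built.
}

# The old config is recoverable as new_config['layers'].
assert new_style_config['layers'] == old_style_config
```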
Published by fchollet about 6 years ago
This is a bugfix release, fixing a significant bug in multi_gpu_model.
For changes since version 2.2.0, see release notes for Keras 2.2.1.
Published by fchollet about 6 years ago
API changes:

- Add output_padding argument in Conv2DTranspose (to override default padding behavior).

No breaking changes recorded.
Thanks to our 33 contributors whose commits are featured in this release:
@Ajk4, @Anner-deJong, @Atcold, @Dref360, @EyeBool, @ageron, @briannemsick, @cclauss, @davidtvs, @dstine, @eTomate, @ebatuhankaynak, @eliberis, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @jlopezpena, @kamil-kaczmarek, @kbattocchi, @kmader, @kvechera, @maxpumperla, @mkaze, @pavithrasv, @rvinas, @sachinruk, @seriousmac, @soumyac1999, @taehoonlee, @yanboliang, @yongzx, @yuyang-huang
Published by fchollet over 6 years ago
Areas of improvement:

- New model definition API: Model subclassing.
- The Sequential model is now a plain subclass of Model.
- applications and preprocessing are now externalized to their own repositories (keras-applications and keras-preprocessing).

API changes:

- New Model subclassing API (details below).
- Theano and CNTK support for SeparableConv1D and SeparableConv2D, as well as backend methods separable_conv1d and separable_conv2d (previously only available for TensorFlow).
- Theano and CNTK support for applications Xception and MobileNet (previously only available for TensorFlow).
- New MobileNetV2 application (available for all backends).
- Support for external backends declared in the ~/.keras.json configuration file (e.g. PlaidML backend).
- Support sample_weight in ImageDataGenerator.
- Add preprocessing.image.save_img utility to write images to disk.
- Default the Flatten layer's data_format argument to None (which defaults to the global Keras config).
- Sequential is now a plain subclass of Model. The attribute sequential.model is deprecated.
- Add baseline argument in EarlyStopping (stop training if a given baseline isn't reached).
- Add data_format argument to Conv1D.
- Make multi_gpu_model serializable.
- Improvements to the TimeDistributed layer.
- New advanced_activation layer ReLU, making the ReLU activation easier to configure while retaining easy serialization capabilities.
- Add axis=-1 argument in backend crossentropy functions, specifying the class prediction axis in the input tensor.

Model subclassing

In addition to the Sequential API and the functional Model API, you may now define models by subclassing the Model class and writing your own call forward pass:
```python
import keras

class SimpleMLP(keras.Model):

    def __init__(self, use_bn=False, use_dp=False, num_classes=10):
        super(SimpleMLP, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes

        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
        if self.use_dp:
            self.dp = keras.layers.Dropout(0.5)
        if self.use_bn:
            self.bn = keras.layers.BatchNormalization(axis=-1)

    def call(self, inputs):
        x = self.dense1(inputs)
        if self.use_dp:
            x = self.dp(x)
        if self.use_bn:
            x = self.bn(x)
        return self.dense2(x)

model = SimpleMLP()
model.compile(...)
model.fit(...)
```
Layers are defined in __init__(self, ...), and the forward pass is specified in call(self, inputs). In call, you may specify custom losses by calling self.add_loss(loss_tensor) (like you would in a custom layer).
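The add_loss bookkeeping can be sketched without any framework (class and attribute names here are illustrative mimics, not the Keras internals): the model collects auxiliary loss terms during the forward pass, and the training loop later adds them to the main loss.

```python
# Framework-free mimic of add_loss: losses recorded during the forward pass.
class Model:
    def __init__(self):
        self._losses = []
    def add_loss(self, loss_value):
        self._losses.append(loss_value)

class L2RegularizedMLP(Model):
    def __init__(self, weight=0.01):
        super().__init__()
        self.weight = weight
        self.w = [0.5, -1.5]  # stand-in for trainable weights
    def call(self, inputs):
        # Record an auxiliary weight-regularization term as a loss.
        self.add_loss(self.weight * sum(v * v for v in self.w))
        return [x * w for x, w in zip(inputs, self.w)]

m = L2RegularizedMLP()
out = m.call([1.0, 2.0])
print(m._losses)  # [0.025]
```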
With Keras 2.2.0 and TensorFlow 1.8 or higher, you may fit, evaluate and predict using symbolic TensorFlow tensors (that are expected to yield data indefinitely). The API is similar to the one in use in fit_generator and other generator methods:
```python
iterator = training_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.fit(x, y, steps_per_epoch=100, epochs=10)

iterator = validation_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.evaluate(x, y, steps=50)
```
This is achieved by dynamically rewiring the TensorFlow graph to feed the input tensors to the existing model placeholders. There is no performance loss compared to building your model on top of the input tensors in the first place.
Breaking changes:

- Remove the legacy Merge layers and associated functionality (remnant of Keras 0), which were deprecated in May 2016, with full removal initially scheduled for August 2017. Models from the Keras 0 API using these layers cannot be loaded with Keras 2.2.0 and above.
- The truncated_normal base initializer now returns values that are scaled by ~0.9 (resulting in the correct variance value after truncation). This has a small chance of affecting initial convergence behavior on some models.

Thanks to our 46 contributors whose commits are featured in this release:
@ASvyatkovskiy, @AmirAlavi, @Anirudh-Swaminathan, @DavidAriel, @Dref360, @JonathanCMitchell, @KuzMenachem, @PeterChe1990, @Saharkakavand, @StefanoCappellini, @ageron, @askskro, @bileschi, @bonlime, @bottydim, @brge17, @briannemsick, @bzamecnik, @christian-lanius, @clemens-tolboom, @dschwertfeger, @dynamicwebpaige, @farizrahman4u, @fchollet, @fuzzythecat, @ghostplant, @giuscri, @huyu398, @jnphilipp, @masstomato, @morenoh149, @mrTsjolder, @nittanycolonial, @r-kellerm, @reidjohnson, @roatienza, @sbebo, @stevemurr, @taehoonlee, @tiferet, @tkoivisto, @tzerrell, @vkk800, @wangkechn, @wouterdobbels, @zwang36wang
Published by fchollet over 6 years ago
API changes:

- In the ReduceLROnPlateau callback, rename the epsilon argument to min_delta (backwards-compatible).
- In the RemoteMonitor callback, add argument send_as_json.
- In the softmax backend function, add argument axis.
- In the Flatten layer, add argument data_format.
- In the save_model (Model.save) and load_model functions, allow the filepath argument to be a h5py.File object.
- In Model.evaluate_generator, add a verbose argument.
- In the Bidirectional wrapper layer, add a constants argument.
- In the multi_gpu_model function, add arguments cpu_merge and cpu_relocation (controlling whether to force the template model's weights to be on CPU, and whether to operate merge operations on CPU or GPU).
- In ImageDataGenerator, allow argument width_shift_range to be int or 1D array-like.

This release does not include any known breaking changes.
Thanks to our 37 contributors whose commits are featured in this release:
@Dref360, @FirefoxMetzger, @Naereen, @NiharG15, @StefanoCappellini, @WindQAQ, @dmadeka, @edrogers, @eltronix, @farizrahman4u, @fchollet, @gabrieldemarmiesse, @ghostplant, @jedrekfulara, @jlherren, @joeyearsley, @johanahlqvist, @johnyf, @jsaporta, @kalkun, @lucasdavid, @masstomato, @mrlzla, @myutwo150, @nisargjhaveri, @obi1kenobi, @olegantonyan, @ozabluda, @pasky, @planck35, @sotlampr, @souptc, @srjoglekar246, @stamate, @taehoonlee, @vkk800, @xuhdev
Published by fchollet over 6 years ago
This release brings a new data preprocessing utility, TimeseriesGenerator, and a new layer, DepthwiseConv2D.

API changes:

- Add new preprocessing utility keras.preprocessing.sequence.TimeseriesGenerator.
- Add new layer keras.layers.DepthwiseConv2D.
- Allow weights from keras.layers.CuDNNLSTM to be loaded into a keras.layers.LSTM layer (e.g. for inference on CPU).
- Add brightness_range data augmentation argument in keras.preprocessing.image.ImageDataGenerator.
- Add validation_split API in keras.preprocessing.image.ImageDataGenerator. You can pass validation_split to the constructor (float), then select between training/validation subsets by passing the argument subset='validation' or subset='training' to the methods flow and flow_from_directory.

Breaking changes:

- As a consequence of refactoring ConvLSTM2D to a modular implementation, recurrent dropout support in Theano has been dropped for this layer.

Thanks to our 28 contributors whose commits are featured in this release:
@DomHudson, @Dref360, @VitamintK, @abrad1212, @ahundt, @bojone, @brainnoise, @bzamecnik, @caisq, @cbensimon, @davinnovation, @farizrahman4u, @fchollet, @gabrieldemarmiesse, @khosravipasha, @ksindi, @lenjoy, @masstomato, @mewwts, @ozabluda, @paulpister, @sandpiturtle, @saralajew, @srjoglekar246, @stefangeneralao, @taehoonlee, @tiangolo, @treszkai
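The windowing performed by TimeseriesGenerator can be sketched in plain Python (a simplified stand-in: the real class also supports sampling_rate, stride, shuffling, and related options):

```python
# Sliding windows of `length` inputs, each paired with the value that
# immediately follows, grouped into batches.
def timeseries_windows(data, targets, length, batch_size=2):
    samples = [(data[i:i + length], targets[i + length])
               for i in range(len(data) - length)]
    return [samples[i:i + batch_size]
            for i in range(0, len(samples), batch_size)]

data = list(range(6))  # [0, 1, 2, 3, 4, 5]
batches = timeseries_windows(data, data, length=3)
print(batches[0])  # [([0, 1, 2], 3), ([1, 2, 3], 4)]
```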
Published by fchollet over 6 years ago
API changes:

- Allow stateful metrics in model.compile(..., metrics=[...]). A stateful metric inherits from Layer, and implements __call__ and reset_states.
- Support the constants argument in StackedRNNCells.
- Enable the TensorBoard callback (loss and metrics plotting) with non-TensorFlow backends.
- Add a reshape argument in model.load_weights(), to optionally reshape weights being loaded to the size of the target weights in the model considered.
- Add tif to supported formats in ImageDataGenerator.
- Allow auto-detection of the available GPUs in multi_gpu_model() (set gpus=None).
- In the LearningRateScheduler callback, the scheduling function now takes an argument: lr, the current learning rate.

Breaking changes:

- In ImageDataGenerator, change the default interpolation of image transforms from nearest to bilinear. This should probably not break any users, but it is a change of behavior.

Thanks to our 37 contributors whose commits are featured in this release:
@DalilaSal, @Dref360, @GalaxyDream, @GarrisonJ, @Max-Pol, @May4m, @MiliasV, @MrMYHuang, @N-McA, @Vijayabhaskar96, @abrad1212, @ahundt, @angeloskath, @bbabenko, @bojone, @brainnoise, @bzamecnik, @caisq, @cclauss, @dsadulla, @fchollet, @gabrieldemarmiesse, @ghostplant, @gorogoroyasu, @icyblade, @kapsl, @kevinbache, @mendesmiguel, @mikesol, @myutwo150, @ozabluda, @sadreamer, @simra, @taehoonlee, @veniversum, @yongtang, @zhangwj618
Published by fchollet almost 7 years ago
This release adds new models to the applications module, among other improvements.

API changes:

- The trainable attribute in BatchNormalization now disables the updates of the batch statistics (i.e. if trainable == False the layer will now run 100% in inference mode).
- Add an amsgrad argument in the Adam optimizer.
- New applications: NASNetMobile, NASNetLarge, DenseNet121, DenseNet169, DenseNet201.
- New Softmax layer (removing the need to use a Lambda layer in order to specify the axis argument).
- New SeparableConv1D layer.
- In preprocessing.image.ImageDataGenerator, allow width_shift_range and height_shift_range to take integer values (absolute number of pixels).
- Support return_state in Bidirectional applied to RNNs (return_state should be set on the child layer).
- The aliases "crossentropy" and "ce" are now allowed in the metrics argument (in model.compile()), and are routed to either categorical_crossentropy or binary_crossentropy as needed.
- Add a steps argument in predict_* methods on the Sequential model.
- Add an oov_token argument in preprocessing.text.Tokenizer.

Breaking changes:

- In preprocessing.image.ImageDataGenerator, shear_range has been switched to use degrees rather than radians (for consistency). This should not actually break anything (neither training nor inference), but keep this change in mind in case you see any issues with regard to your image data augmentation process.

Thanks to our 45 contributors whose commits are featured in this release:
@Dref360, @OliPhilip, @TimZaman, @bbabenko, @bdwyer2, @berkatmaca, @caisq, @decrispell, @dmaniry, @fchollet, @fgaim, @gabrieldemarmiesse, @gklambauer, @hgaiser, @hlnull, @icyblade, @jgrnt, @kashif, @kouml, @lutzroeder, @m-mohsen, @mab4058, @manashty, @masstomato, @mihirparadkar, @myutwo150, @nickbabcock, @novotnj3, @obsproth, @ozabluda, @philferriere, @piperchester, @pstjohn, @roatienza, @souptc, @spiros, @srs70187, @sumitgouthaman, @taehoonlee, @tigerneil, @titu1994, @tobycheese, @vitaly-krumins, @yang-zhang, @ziky90
Published by fchollet almost 7 years ago
API changes:

- Make preprocess_input in all Keras applications compatible with both Numpy arrays and symbolic tensors (previously only Numpy arrays were supported).
- Allow the weights argument in all Keras applications to accept the path to a custom weights file to load (previously only the built-in imagenet weights file was supported).
- steps_per_epoch behavior change in generator training/evaluation methods:
  - If specified, the value is now used as-is (previously, in the case of a Sequence, the specified value was overridden by the Sequence length).
  - If unspecified and if the data passed is a Sequence, we set it to the Sequence length.
- Allow workers=0 in generator training/evaluation methods (will run the generator in the main process, in a blocking way).
- Add an interpolation argument in ImageDataGenerator.flow_from_directory, allowing a custom interpolation method for image resizing.
- Allow the gpus argument in multi_gpu_model to be a list of specific GPU ids.

Breaking changes:

- The steps_per_epoch behavior change (described above) may affect some users.

Thanks to our 26 contributors whose commits are featured in this release:
@Alex1729, @alsrgv, @apisarek, @asos-saul, @athundt, @cherryunix, @dansbecker, @datumbox, @de-vri-es, @drauh, @evhub, @fchollet, @heath730, @hgaiser, @icyblade, @jjallaire, @knaveofdiamonds, @lance6716, @luoch, @mjacquem1, @myutwo150, @ozabluda, @raviksharma, @rh314, @yang-zhang, @zach-nervana
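The steps_per_epoch resolution rule described above can be sketched as a hypothetical helper (not the Keras implementation):

```python
# Resolution rule: an explicit value now wins; otherwise a Sequence
# defaults to its own length; plain generators must specify a value.
def resolve_steps(steps_per_epoch=None, sequence_len=None):
    if steps_per_epoch is not None:
        return steps_per_epoch  # no longer overridden by the Sequence length
    if sequence_len is not None:
        return sequence_len     # Sequence: default to its length
    raise ValueError("steps_per_epoch is required for plain generators")

print(resolve_steps(steps_per_epoch=50, sequence_len=100))  # 50
print(resolve_steps(sequence_len=100))                      # 100
```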
Published by fchollet almost 7 years ago
This release amends release 2.1.0 to include a fix for an erroneous breaking change introduced in #8419.
Published by fchollet almost 7 years ago
This is a small release that fixes outstanding bugs that were reported since the previous release.
API changes:

- Add go_backwards support to cuDNN RNNs (enables the Bidirectional wrapper on cuDNN RNNs).
- Add the ability to pass fetches to K.Function() with the TensorFlow backend.
- Add steps_per_epoch and validation_steps arguments in Sequential.fit() (to sync it with Model.fit()).

Breaking changes: none.
@Dref360, @LawnboyMax, @anj-s, @bzamecnik, @datumbox, @diogoff, @farizrahman4u, @fchollet, @frexvahi, @jjallaire, @nsuh, @ozabluda, @roatienza, @yakigac
Published by fchollet almost 7 years ago
This release introduces a new RNN base class, CuDNNLSTM and CuDNNGRU layers backed by NVIDIA's cuDNN library for fast GPU training and inference, a constants argument in RNN's call method (making RNN attention easier to implement), and easy multi-GPU data parallelism via keras.utils.multi_gpu_model.

API changes:

- New dataset: keras.datasets.fashion_mnist.load_data().
- New Minimum merge layer, as keras.layers.Minimum (class) and keras.layers.minimum(inputs) (function).
- Add InceptionResNetV2 to keras.applications.
- Support bool variables in the TensorFlow backend.
- Add dilation to SeparableConv2D.
- Add support for dynamic noise_shape in Dropout.
- Add the keras.layers.RNN() base class for batch-level RNNs (used to implement custom RNN layers from a cell class).
- Add the keras.layers.StackedRNNCells() layer wrapper, used to stack a list of RNN cells into a single cell.
- Add CuDNNLSTM and CuDNNGRU layers.
- Deprecate implementation=0 for RNN layers.
- Updates to keras.preprocessing.image.load_img().
- Add keras.utils.multi_gpu_model for easy multi-GPU data parallelism.
- Add the constants argument in RNN's call method, used to pass a list of constant tensors to the underlying RNN cell.

Breaking changes:

- A change in keras.losses.cosine_proximity results in a different (correct) scaling behavior.
- A change in ImageDataGenerator results in a different normalization behavior.

Thanks to our 59 contributors whose commits are featured in this release!
@Alok, @Danielhiversen, @Dref360, @HelgeS, @JakeBecker, @MPiecuch, @MartinXPN, @RitwikGupta, @TimZaman, @adammenges, @aeftimia, @ahojnnes, @akshaychawla, @alanyee, @aldenks, @andhus, @apbard, @aronj, @bangbangbear, @bchu, @bdwyer2, @bzamecnik, @cclauss, @colllin, @datumbox, @deltheil, @dhaval067, @durana, @ericwu09, @facaiy, @farizrahman4u, @fchollet, @flomlo, @fran6co, @grzesir, @hgaiser, @icyblade, @jsaporta, @julienr, @jussihuotari, @kashif, @lucashu1, @mangerlahn, @myutwo150, @nicolewhite, @noahstier, @nzw0301, @olalonde, @ozabluda, @patrikerdes, @podhrmic, @qin, @raelg, @roatienza, @shadiakiki1986, @smgt, @souptc, @taehoonlee, @y0z
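The cell/RNN split behind the new RNN base class can be sketched without any framework (the cell here is a hypothetical toy, not a Keras cell): the wrapper loops a step function over the time dimension, carrying state between steps.

```python
# The "cell" computes one timestep; the RNN wrapper handles the loop.
def simple_cell(x_t, state):
    # Toy cell: new state is a decayed sum of state and input.
    new_state = 0.5 * state + x_t
    return new_state, new_state  # (output, next state)

def rnn(cell, inputs, initial_state=0.0):
    state = initial_state
    outputs = []
    for x_t in inputs:
        out, state = cell(x_t, state)
        outputs.append(out)
    return outputs, state

outputs, final_state = rnn(simple_cell, [1.0, 0.0, 2.0])
print(outputs)  # [1.0, 0.5, 2.25]
```

Stacking cells (as StackedRNNCells does) amounts to composing several such step functions into one.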
Published by fchollet about 7 years ago
The primary purpose of this release is to address an incompatibility between Keras 2.0.7 and the next version of TensorFlow (1.4). TensorFlow 1.4 isn't due for a while yet, but the sooner the PyPI release includes the fix, the fewer people will be affected when upgrading to the next TensorFlow version once it is released.
No API changes for this release. A few bug fixes.
Published by fchollet about 7 years ago
API changes:

- New clone_model method, enabling you to construct a new model given an existing model to use as a template. Works even in a TensorFlow graph different from that of the original model.
- Add target_tensors argument in compile, enabling the use of custom tensors or placeholders as model targets.
- Add steps_per_epoch argument in fit, enabling training a model from data tensors in a way that is consistent with training from Numpy arrays.
- Add a steps argument in predict and evaluate.
- Add Subtract merge layer, and associated layer function subtract.
- Add weighted_metrics argument in compile to specify metric functions meant to take into account sample_weight or class_weight.
- Make the stop_gradients backend function consistent across backends.
- Allow dynamic shapes in the repeat_elements backend function.

Breaking changes:

- The backend functions categorical_crossentropy, sparse_categorical_crossentropy, and binary_crossentropy had the order of their positional arguments (y_true, y_pred) inverted. This change does not affect the losses API. This change was done to achieve API consistency between the losses API and the backend API.
- Remove the constraints attribute on layers and models (not expected to affect any user).

Thanks to our 47 contributors whose commits are featured in this release!
@5ke, @Alok, @Danielhiversen, @Dref360, @NeilRon, @abnera, @acburigo, @airalcorn2, @angeloskath, @athundt, @brettkoonce, @cclauss, @denfromufa, @enkait, @erg, @ericwu09, @farizrahman4u, @fchollet, @georgwiese, @ghisvail, @gokceneraslan, @hgaiser, @inexxt, @joeyearsley, @jorgecarleitao, @kennyjacob, @keunwoochoi, @krizp, @lukedeo, @milani, @n17r4m, @nicolewhite, @nigeljyng, @nyghtowl, @nzw0301, @rapatel0, @souptc, @srinivasreddy, @staticfloat, @taehoonlee, @td2014, @titu1994, @tleeuwenburg, @udibr, @waleedka, @wassname, @yashk2810
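The argument-order inversion above matters because crossentropy is not symmetric in its arguments, as a quick framework-free illustration shows (simplified scalar version, not the backend implementation):

```python
import math

# Scalar binary crossentropy; clipping avoids log(0).
def binary_crossentropy(y_true, y_pred, eps=1e-7):
    p = min(max(y_pred, eps), 1 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

a = binary_crossentropy(1.0, 0.9)  # correct order: small loss
b = binary_crossentropy(0.9, 1.0)  # swapped arguments: very different value
print(a, b)
```

So code that called the backend functions positionally had to be updated when the order changed.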
Published by fchollet over 7 years ago
This release improves the generator methods (predict_generator, fit_generator, evaluate_generator) and adds data enqueuing utilities. It also adds a new Conv3DTranspose layer, a new MobileNet application, and support for self-normalizing networks.

API changes:

- Add the selu activation function, AlphaDropout layer, and lecun_normal initializer (for self-normalizing networks).
- Add Sequence, SequenceEnqueuer, GeneratorEnqueuer to utils.
- Deprecate pickle_safe (replaced with use_multiprocessing) and max_q_size (replaced with max_queue_size).
- Add MobileNet to the applications module.
- Add Conv3DTranspose layer.
- Allow a custom print function in the summary method (argument print_fn).