keras

Deep Learning for humans

APACHE-2.0 License


keras - Keras 2.4.0

Published by fchollet over 4 years ago

As previously announced, we have discontinued multi-backend Keras to refocus exclusively on the TensorFlow implementation of Keras.

In the future, we will develop the TensorFlow implementation of Keras in the present repo, at keras-team/keras. For the time being, it is being developed in tensorflow/tensorflow and distributed as tensorflow.keras. Once development moves here, the keras package on PyPI will be the same as tf.keras.

This release (2.4.0) simply redirects all APIs in the standalone keras package to point to tf.keras. This helps address user confusion regarding differences and incompatibilities between tf.keras and the standalone keras package. There is now only one Keras: tf.keras.

  • Note that this release may be breaking for some workflows when going from Keras 2.3.1 to 2.4.0. Test before upgrading.
  • Note that we still recommend that you import Keras as from tensorflow import keras, rather than import keras, for the time being.
keras - Keras 2.3.1

Published by fchollet about 5 years ago

Keras 2.3.1 is a minor bug-fix release. In particular, it fixes an issue with using Keras models across multiple threads.

Changes

  • Bug fixes
  • Documentation fixes
  • No API changes
  • No breaking changes
keras - Keras 2.3.0

Published by fchollet about 5 years ago

Keras 2.3.0 is the first release of multi-backend Keras that supports TensorFlow 2.0. It maintains compatibility with TensorFlow 1.14 and 1.13, as well as Theano and CNTK.

This release brings the API in sync with the tf.keras API as of TensorFlow 2.0. Note, however, that it does not support most TensorFlow 2.0 features, in particular eager execution. If you need these features, use tf.keras.

This is also the last major release of multi-backend Keras. Going forward, we recommend that users consider switching their Keras code to tf.keras in TensorFlow 2.0. It implements the same Keras 2.3.0 API (so switching should be as easy as changing the Keras import statements), but it has many advantages for TensorFlow users, such as support for eager execution, distribution, TPU training, and generally far better integration between low-level TensorFlow and high-level concepts like Layer and Model. It is also better maintained.

Development will focus on tf.keras going forward. We will keep maintaining multi-backend Keras over the next 6 months, but we will only be merging bug fixes. API changes will not be ported.

API changes

  • Add size(x) to backend API.
  • add_metric method added to Layer / Model (used in a similar way as add_loss, but for metrics), as well as the metrics property.
  • Variables set as attributes of a Layer are now tracked in layer.weights (including layer.trainable_weights or layer.non_trainable_weights as appropriate).
  • Layers set as attributes of a Layer are now tracked (so the weights/metrics/losses/etc of a sublayer are tracked by parent layers). This behavior already existed for Model specifically and is now extended to all Layer subclasses.
  • Introduce class-based losses (inheriting from Loss base class). This enables losses to be parameterized via constructor arguments. Loss classes added:
    • MeanSquaredError
    • MeanAbsoluteError
    • MeanAbsolutePercentageError
    • MeanSquaredLogarithmicError
    • BinaryCrossentropy
    • CategoricalCrossentropy
    • SparseCategoricalCrossentropy
    • Hinge
    • SquaredHinge
    • CategoricalHinge
    • Poisson
    • LogCosh
    • KLDivergence
    • Huber
  • Introduce class-based metrics (inheriting from Metric base class). This enables metrics to be stateful (e.g. as required to support AUC) and to be parameterized via constructor arguments. Metric classes added:
    • Accuracy
    • MeanSquaredError
    • Hinge
    • CategoricalHinge
    • SquaredHinge
    • FalsePositives
    • TruePositives
    • FalseNegatives
    • TrueNegatives
    • BinaryAccuracy
    • CategoricalAccuracy
    • TopKCategoricalAccuracy
    • LogCoshError
    • Poisson
    • KLDivergence
    • CosineSimilarity
    • MeanAbsoluteError
    • MeanAbsolutePercentageError
    • MeanSquaredLogarithmicError
    • RootMeanSquaredError
    • BinaryCrossentropy
    • CategoricalCrossentropy
    • Precision
    • Recall
    • AUC
    • SparseCategoricalAccuracy
    • SparseTopKCategoricalAccuracy
    • SparseCategoricalCrossentropy
  • Add reset_metrics argument to train_on_batch and test_on_batch. Set this to False to maintain metric state across different batches when writing lower-level training/evaluation loops. If True (the default), the metric value reported as output of the method call will be the value for the current batch only.
  • Add model.reset_metrics() method to Model. Use this at the start of an epoch to clear metric state when writing lower-level training/evaluation loops.
  • Rename lr to learning_rate for all optimizers.
  • Deprecate argument decay for all optimizers. For learning rate decay, use LearningRateSchedule objects in tf.keras.
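
The class-based losses introduced above are parameterized through their constructors. As a framework-free sketch of that pattern (plain NumPy for illustration, not the actual Keras implementation), here is a Huber-style loss where delta marks the quadratic/linear crossover:

```python
import numpy as np

class Loss:
    """Minimal stand-in for the Loss base class (illustration only)."""
    def __call__(self, y_true, y_pred):
        raise NotImplementedError

class Huber(Loss):
    """Huber loss parameterized via its constructor, mirroring the
    class-based-loss pattern described above."""
    def __init__(self, delta=1.0):
        self.delta = delta

    def __call__(self, y_true, y_pred):
        err = np.abs(np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float))
        quadratic = np.minimum(err, self.delta)   # quadratic region
        linear = err - quadratic                  # linear tail beyond delta
        return np.mean(0.5 * quadratic ** 2 + self.delta * linear)

loss = Huber(delta=1.0)
print(loss([0.0, 0.0], [0.5, 2.0]))  # 0.8125
```

The constructor argument is the point of the design: the same callable object can be passed to compile() while carrying its own configuration.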

Breaking changes

  • TensorBoard callback:
    • batch_size argument is deprecated (ignored) when used with TF 2.0
    • write_grads is deprecated (ignored) when used with TF 2.0
    • embeddings_freq, embeddings_layer_names, embeddings_metadata, embeddings_data are deprecated (ignored) when used with TF 2.0
  • Change loss aggregation mechanism to sum over batch size. This may change reported loss values if you were using sample weighting or class weighting. You can achieve the old behavior by making sure your sample weights sum to 1 for each batch.
  • Metrics and losses are now reported under the exact name specified by the user (e.g. if you pass metrics=['acc'], your metric will be reported under the string "acc", not "accuracy"; conversely, metrics=['accuracy'] will be reported under the string "accuracy").
  • Change default recurrent activation to sigmoid (from hard_sigmoid) in all RNN layers.
keras - Keras 2.2.5

Published by fchollet about 5 years ago

Keras 2.2.5 is the last release of Keras that implements the 2.2.* API. It is the last release to only support TensorFlow 1 (as well as Theano and CNTK).

The next release will be 2.3.0, which makes significant API changes and adds support for TensorFlow 2.0. The 2.3.0 release will be the last major release of multi-backend Keras. Multi-backend Keras is superseded by tf.keras.

At this time, we recommend that Keras users who use multi-backend Keras with the TensorFlow backend switch to tf.keras in TensorFlow 2.0. tf.keras is better maintained and has better integration with TensorFlow features.

API Changes

  • Add new Applications: ResNet101, ResNet152, ResNet50V2, ResNet101V2, ResNet152V2.
  • Callbacks: enable callbacks to be passed in evaluate and predict.
    • Add callbacks argument (list of callback instances) in evaluate and predict.
    • Add callback methods on_train_batch_begin, on_train_batch_end, on_test_batch_begin, on_test_batch_end, on_predict_batch_begin, on_predict_batch_end, as well as on_test_begin, on_test_end, on_predict_begin, on_predict_end. Methods on_batch_begin and on_batch_end are now aliases for on_train_batch_begin and on_train_batch_end.
  • Allow file pointers in save_model and load_model (in place of the filepath)
  • Add name argument in Sequential constructor
  • Add validation_freq argument in fit, controlling the frequency of validation (e.g. setting validation_freq=3 would run validation every 3 epochs)
  • Allow Python generators (or Keras Sequence objects) to be passed in fit, evaluate, and predict, instead of having to use *_generator methods.
    • Add generator-related arguments max_queue_size, workers, use_multiprocessing to these methods.
  • Add dilation_rate argument in layer DepthwiseConv2D.
  • MaxNorm constraint: rename argument m to max_value.
  • Add dtype argument in base layer (default dtype for layer's weights).
  • Add Google Cloud Storage support for model.save_weights and model.load_weights.
  • Add JSON-serialization to the Tokenizer class.
  • Add H5Dict and model_to_dot to utils.
  • Allow default Keras path to be specified at startup via environment variable KERAS_HOME.
  • Add arguments expand_nested, dpi to plot_model.
  • Add update_sub, stack, cumsum, cumprod, foldl, foldr to CNTK backend
  • Add merge_repeated argument to ctc_decode in TensorFlow backend
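
The new evaluate/predict callback hooks can be pictured with a framework-free sketch (plain Python with hypothetical names, not the Keras implementation) showing when each method fires during an evaluation loop:

```python
class Callback:
    """Minimal callback base (illustration; not the Keras Callback class)."""
    def on_test_begin(self, logs=None): pass
    def on_test_batch_begin(self, batch, logs=None): pass
    def on_test_batch_end(self, batch, logs=None): pass
    def on_test_end(self, logs=None): pass

class BatchCounter(Callback):
    """Hypothetical callback that counts evaluated batches."""
    def __init__(self):
        self.batches = 0
    def on_test_batch_end(self, batch, logs=None):
        self.batches += 1

def evaluate(batches, callbacks=()):
    """Toy evaluate loop showing the hook ordering described above."""
    results = []
    for cb in callbacks:
        cb.on_test_begin()
    for i, batch in enumerate(batches):
        for cb in callbacks:
            cb.on_test_batch_begin(i)
        results.append(sum(batch) / len(batch))  # stand-in "metric"
        for cb in callbacks:
            cb.on_test_batch_end(i)
    for cb in callbacks:
        cb.on_test_end()
    return results

counter = BatchCounter()
evaluate([[1, 2], [3, 4], [5, 6]], callbacks=[counter])
print(counter.batches)  # 3
```

The train- and predict-side hooks follow the same pattern around fit and predict loops.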

Thanks to the 89 committers who contributed code to this release!

keras - Keras 2.2.4

Published by fchollet about 6 years ago

This is a bugfix release, addressing two issues:

  • Ability to save a model when a file with the same name already exists.
  • Issue with loading legacy config files for the Sequential model.

See here for the changelog since 2.2.2.

keras - Keras 2.2.3

Published by fchollet about 6 years ago

Areas of improvement

  • API completeness & usability improvements
  • Bug fixes
  • Documentation improvements

API changes

  • Keras models can now be safely pickled.
  • Consolidate the functionality of the activation layers ThresholdedReLU and LeakyReLU into the ReLU layer. As a result, the ReLU layer now takes new arguments negative_slope and threshold, and the relu function in the backend takes a new threshold argument.
  • Add update_freq argument in TensorBoard callback, controlling how often to write TensorBoard logs.
  • Add the exponential function to keras.activations.
  • Add data_format argument in all 4 Pooling1D layers.
  • Add interpolation argument in UpSampling2D layer and in resize_images backend function, supporting modes "nearest" (previous behavior, and new default) and "bilinear" (new).
  • Add dilation_rate argument in Conv2DTranspose layer and in conv2d_transpose backend function.
  • The LearningRateScheduler now receives the lr key as part of the logs argument in on_epoch_end (current value of the learning rate).
  • Make GlobalAveragePooling1D layer support masking.
  • The filepath argument of save_model and model.save() can now be a h5py.Group instance.
  • Add argument restore_best_weights to EarlyStopping callback (optionally reverts to the weights that obtained the highest monitored score value).
  • Add dtype argument to keras.utils.to_categorical.
  • Support run_options and run_metadata as optional session arguments in model.compile() for the TensorFlow backend.
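
To illustrate the negative_slope/threshold semantics of the consolidated ReLU described above, here is a NumPy sketch (assumed semantics for illustration, not the backend source): identity above the threshold, a leaky slope below it, optionally capped at max_value:

```python
import numpy as np

def relu(x, negative_slope=0.0, threshold=0.0, max_value=None):
    """Sketch of a generalized relu: x for x >= threshold,
    negative_slope * (x - threshold) below it, clipped at max_value."""
    x = np.asarray(x, dtype=float)
    out = np.where(x >= threshold, x, negative_slope * (x - threshold))
    if max_value is not None:
        out = np.minimum(out, max_value)
    return out

print(relu([-2.0, 0.5, 3.0], negative_slope=0.1, threshold=1.0))
```

With negative_slope=0 and threshold>0 this recovers ThresholdedReLU-like behavior; with threshold=0 and negative_slope>0 it recovers LeakyReLU-like behavior.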

Breaking changes

  • Modify the return value of Sequential.get_config(). Previously, the return value was a list of the config dictionaries of the layers of the model. Now, the return value is a dictionary with keys layers, name, and an optional key build_input_shape. The old config is equivalent to new_config['layers']. This makes the output of get_config consistent across all model classes.

Credits

Thanks to our 38 contributors whose commits are featured in this release:

@BertrandDechoux, @ChrisGll, @Dref360, @JamesHinshelwood, @MarcoAndreaBuchmann, @ageron, @alfasst, @blue-atom, @chasebrignac, @cshubhamrao, @danFromTelAviv, @datumbox, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @hadifar, @heytitle, @hsgkim, @jankrepl, @joelthchao, @knightXun, @kouml, @linjinjin123, @lvapeab, @nikoladze, @ozabluda, @qlzh727, @roywei, @rvinas, @sriyogesh94, @tacaswell, @taehoonlee, @tedyu, @xuhdev, @yanboliang, @yongzx, @yuanxiaosc

keras - Keras 2.2.2

Published by fchollet about 6 years ago

This is a bugfix release, fixing a significant bug in multi_gpu_model.

For changes since version 2.2.0, see release notes for Keras 2.2.1.

keras - Keras 2.2.1

Published by fchollet about 6 years ago

Areas of improvement

  • Bug fixes
  • Performance improvements
  • Documentation improvements

API changes

  • Add output_padding argument in Conv2DTranspose (to override default padding behavior).
  • Enable automatic shape inference when using Lambda layers with the CNTK backend.

Breaking changes

No breaking changes recorded.

Credits

Thanks to our 33 contributors whose commits are featured in this release:

@Ajk4, @Anner-deJong, @Atcold, @Dref360, @EyeBool, @ageron, @briannemsick, @cclauss, @davidtvs, @dstine, @eTomate, @ebatuhankaynak, @eliberis, @farizrahman4u, @fchollet, @fuzzythecat, @gabrieldemarmiesse, @jlopezpena, @kamil-kaczmarek, @kbattocchi, @kmader, @kvechera, @maxpumperla, @mkaze, @pavithrasv, @rvinas, @sachinruk, @seriousmac, @soumyac1999, @taehoonlee, @yanboliang, @yongzx, @yuyang-huang

keras - Keras 2.2.0

Published by fchollet over 6 years ago

Areas of improvement

  • New model definition API: Model subclassing.
  • New input mode: ability to call models on TensorFlow tensors directly (TensorFlow backend only).
  • Improve feature coverage of Keras with the Theano and CNTK backends.
  • Bug fixes and performance improvements.
  • Large refactors improving code structure, code health, and reducing test time. In particular:
    • The Keras engine now follows a much more modular structure.
    • The Sequential model is now a plain subclass of Model.
    • The modules applications and preprocessing are now externalized to their own repositories (keras-applications and keras-preprocessing).

API changes

  • Add Model subclassing API (details below).
  • Allow symbolic tensors to be fed to models, with TensorFlow backend (details below).
  • Enable CNTK and Theano support for layers SeparableConv1D, SeparableConv2D, as well as backend methods separable_conv1d and separable_conv2d (previously only available for TensorFlow).
  • Enable CNTK and Theano support for applications Xception and MobileNet (previously only available for TensorFlow).
  • Add MobileNetV2 application (available for all backends).
  • Enable loading external (non built-in) backends by changing your ~/.keras.json configuration file (e.g. PlaidML backend).
  • Add sample_weight in ImageDataGenerator.
  • Add preprocessing.image.save_img utility to write images to disk.
  • Default Flatten layer's data_format argument to None (which defaults to global Keras config).
  • Sequential is now a plain subclass of Model. The attribute sequential.model is deprecated.
  • Add baseline argument in EarlyStopping (stop training if a given baseline isn't reached).
  • Add data_format argument to Conv1D.
  • Make the model returned by multi_gpu_model serializable.
  • Support input masking in TimeDistributed layer.
  • Add an advanced_activation layer ReLU, making the ReLU activation easier to configure while retaining easy serialization capabilities.
  • Add axis=-1 argument in backend crossentropy functions specifying the class prediction axis in the input tensor.

New model definition API: Model subclassing

In addition to the Sequential API and the functional Model API, you may now define models by subclassing the Model class and writing your own forward pass in the call method:

import keras

class SimpleMLP(keras.Model):

    def __init__(self, use_bn=False, use_dp=False, num_classes=10):
        super(SimpleMLP, self).__init__(name='mlp')
        self.use_bn = use_bn
        self.use_dp = use_dp
        self.num_classes = num_classes

        self.dense1 = keras.layers.Dense(32, activation='relu')
        self.dense2 = keras.layers.Dense(num_classes, activation='softmax')
        if self.use_dp:
            self.dp = keras.layers.Dropout(0.5)
        if self.use_bn:
            self.bn = keras.layers.BatchNormalization(axis=-1)

    def call(self, inputs):
        x = self.dense1(inputs)
        if self.use_dp:
            x = self.dp(x)
        if self.use_bn:
            x = self.bn(x)
        return self.dense2(x)

model = SimpleMLP()
model.compile(...)
model.fit(...)

Layers are defined in __init__(self, ...), and the forward pass is specified in call(self, inputs). In call, you may specify custom losses by calling self.add_loss(loss_tensor) (like you would in a custom layer).

New input mode: symbolic TensorFlow tensors

With Keras 2.2.0 and TensorFlow 1.8 or higher, you may fit, evaluate and predict using symbolic TensorFlow tensors (that are expected to yield data indefinitely). The API is similar to the one in use in fit_generator and other generator methods:

iterator = training_dataset.make_one_shot_iterator()
x, y = iterator.get_next()

model.fit(x, y, steps_per_epoch=100, epochs=10)

iterator = validation_dataset.make_one_shot_iterator()
x, y = iterator.get_next()
model.evaluate(x, y, steps=50)

This is achieved by dynamically rewiring the TensorFlow graph to feed the input tensors to the existing model placeholders. There is no performance loss compared to building your model on top of the input tensors in the first place.

Breaking changes

  • Remove legacy Merge layers and associated functionality (remnant of Keras 0), which were deprecated in May 2016, with full removal initially scheduled for August 2017. Models from the Keras 0 API using these layers cannot be loaded with Keras 2.2.0 and above.
  • The truncated_normal base initializer now returns values that are scaled by ~0.9 (resulting in correct variance value after truncation). This has a small chance of affecting initial convergence behavior on some models.

Credits

Thanks to our 46 contributors whose commits are featured in this release:

@ASvyatkovskiy, @AmirAlavi, @Anirudh-Swaminathan, @DavidAriel, @Dref360, @JonathanCMitchell, @KuzMenachem, @PeterChe1990, @Saharkakavand, @StefanoCappellini, @ageron, @askskro, @bileschi, @bonlime, @bottydim, @brge17, @briannemsick, @bzamecnik, @christian-lanius, @clemens-tolboom, @dschwertfeger, @dynamicwebpaige, @farizrahman4u, @fchollet, @fuzzythecat, @ghostplant, @giuscri, @huyu398, @jnphilipp, @masstomato, @morenoh149, @mrTsjolder, @nittanycolonial, @r-kellerm, @reidjohnson, @roatienza, @sbebo, @stevemurr, @taehoonlee, @tiferet, @tkoivisto, @tzerrell, @vkk800, @wangkechn, @wouterdobbels, @zwang36wang

keras - Keras 2.1.6

Published by fchollet over 6 years ago

Areas of improvement

  • Bug fixes
  • Documentation improvements
  • Minor usability improvements

API changes

  • In callback ReduceLROnPlateau, rename epsilon argument to min_delta (backwards-compatible).
  • In callback RemoteMonitor, add argument send_as_json.
  • In backend softmax function, add argument axis.
  • In Flatten layer, add argument data_format.
  • In save_model (Model.save) and load_model functions, allow the filepath argument to be a h5py.File object.
  • In Model.evaluate_generator, add verbose argument.
  • In Bidirectional wrapper layer, add constants argument.
  • In multi_gpu_model function, add arguments cpu_merge and cpu_relocation (controlling whether to force the template model's weights to be on CPU, and whether to operate merge operations on CPU or GPU).
  • In ImageDataGenerator, allow argument width_shift_range to be int or 1D array-like.
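
As an illustration of what min_delta means in ReduceLROnPlateau, here is a toy reimplementation of the plateau logic (simplified sketch, not the real callback, which has further options such as cooldown): if the monitored loss fails to improve by more than min_delta for patience epochs, the learning rate is multiplied by factor:

```python
class PlateauReducer:
    """Toy sketch of plateau-based learning-rate reduction."""
    def __init__(self, lr, factor=0.5, patience=2, min_delta=1e-4):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.wait = 0

    def on_epoch_end(self, loss):
        if loss < self.best - self.min_delta:   # meaningful improvement
            self.best = loss
            self.wait = 0
        else:                                   # plateau epoch
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor
                self.wait = 0
        return self.lr

r = PlateauReducer(lr=0.1, patience=2)
for loss in [1.0, 0.9, 0.9, 0.9]:
    r.on_epoch_end(loss)
print(r.lr)  # 0.05: two epochs without improvement halved the rate
```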

Breaking changes

This release does not include any known breaking changes.

Credits

Thanks to our 37 contributors whose commits are featured in this release:

@Dref360, @FirefoxMetzger, @Naereen, @NiharG15, @StefanoCappellini, @WindQAQ, @dmadeka, @edrogers, @eltronix, @farizrahman4u, @fchollet, @gabrieldemarmiesse, @ghostplant, @jedrekfulara, @jlherren, @joeyearsley, @johanahlqvist, @johnyf, @jsaporta, @kalkun, @lucasdavid, @masstomato, @mrlzla, @myutwo150, @nisargjhaveri, @obi1kenobi, @olegantonyan, @ozabluda, @pasky, @planck35, @sotlampr, @souptc, @srjoglekar246, @stamate, @taehoonlee, @vkk800, @xuhdev

keras - Keras 2.1.5

Published by fchollet over 6 years ago

Areas of improvement

  • Bug fixes.
  • New APIs: sequence generation API TimeseriesGenerator, and new layer DepthwiseConv2D.
  • Unit tests / CI improvements.
  • Documentation improvements.

API changes

  • Add new sequence generation API keras.preprocessing.sequence.TimeseriesGenerator.
  • Add new convolutional layer keras.layers.DepthwiseConv2D.
  • Allow weights from keras.layers.CuDNNLSTM to be loaded into a keras.layers.LSTM layer (e.g. for inference on CPU).
  • Add brightness_range data augmentation argument in keras.preprocessing.image.ImageDataGenerator.
  • Add validation_split API in keras.preprocessing.image.ImageDataGenerator. You can pass validation_split to the constructor (float), then select between training/validation subsets by passing the argument subset='validation' or subset='training' to methods flow and flow_from_directory.

Breaking changes

  • As a side effect of a refactor of ConvLSTM2D to a modular implementation, recurrent dropout support in Theano has been dropped for this layer.

Credits

Thanks to our 28 contributors whose commits are featured in this release:

@DomHudson, @Dref360, @VitamintK, @abrad1212, @ahundt, @bojone, @brainnoise, @bzamecnik, @caisq, @cbensimon, @davinnovation, @farizrahman4u, @fchollet, @gabrieldemarmiesse, @khosravipasha, @ksindi, @lenjoy, @masstomato, @mewwts, @ozabluda, @paulpister, @sandpiturtle, @saralajew, @srjoglekar246, @stefangeneralao, @taehoonlee, @tiangolo, @treszkai

keras - Keras 2.1.4

Published by fchollet over 6 years ago

Areas of improvement

  • Bug fixes
  • Performance improvements
  • Improvements to example scripts

API changes

  • Allow for stateful metrics in model.compile(..., metrics=[...]). A stateful metric inherits from Layer, and implements __call__ and reset_states.
  • Support constants argument in StackedRNNCells.
  • Enable some TensorBoard features in the TensorBoard callback (loss and metrics plotting) with non-TensorFlow backends.
  • Add reshape argument in model.load_weights(), to optionally reshape weights being loaded to the size of the target weights in the model considered.
  • Add tif to supported formats in ImageDataGenerator.
  • Allow auto-GPU selection in multi_gpu_model() (set gpus=None).
  • In LearningRateScheduler callback, the scheduling function now takes an argument: lr, the current learning rate.
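
The stateful-metric contract above (a callable that accumulates state across batches and exposes reset_states) can be sketched framework-free; this running accuracy is an illustration in plain Python, not the Keras Layer-based API:

```python
class RunningAccuracy:
    """Sketch of a stateful metric: accumulates correct/total counts
    across batches and can be reset between epochs."""
    def __init__(self):
        self.correct = 0
        self.total = 0

    def __call__(self, y_true, y_pred):
        # Accumulate state, then report the running (not per-batch) value.
        self.correct += sum(t == p for t, p in zip(y_true, y_pred))
        self.total += len(y_true)
        return self.correct / self.total

    def reset_states(self):
        self.correct = 0
        self.total = 0

acc = RunningAccuracy()
acc([1, 0], [1, 1])         # batch 1: 1/2 correct
print(acc([1, 1], [1, 1]))  # cumulative: 3/4 = 0.75
```

The state is what lets a single number summarize all batches seen so far, rather than the last batch only.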

Breaking changes

  • In ImageDataGenerator, change default interpolation of image transforms from nearest to bilinear. This should probably not break any users, but it is a change of behavior.

Credits

Thanks to our 37 contributors whose commits are featured in this release:

@DalilaSal, @Dref360, @GalaxyDream, @GarrisonJ, @Max-Pol, @May4m, @MiliasV, @MrMYHuang, @N-McA, @Vijayabhaskar96, @abrad1212, @ahundt, @angeloskath, @bbabenko, @bojone, @brainnoise, @bzamecnik, @caisq, @cclauss, @dsadulla, @fchollet, @gabrieldemarmiesse, @ghostplant, @gorogoroyasu, @icyblade, @kapsl, @kevinbache, @mendesmiguel, @mikesol, @myutwo150, @ozabluda, @sadreamer, @simra, @taehoonlee, @veniversum, @yongtang, @zhangwj618

keras - Keras 2.1.3

Published by fchollet almost 7 years ago

Areas of improvement

  • Performance improvements (esp. convnets with TensorFlow backend).
  • Usability improvements.
  • Docs & docstrings improvements.
  • New models in the applications module.
  • Bug fixes.

API changes

  • trainable attribute in BatchNormalization now disables the updates of the batch statistics (i.e. if trainable == False the layer will now run 100% in inference mode).
  • Add amsgrad argument in Adam optimizer.
  • Add new applications: NASNetMobile, NASNetLarge, DenseNet121, DenseNet169, DenseNet201.
  • Add Softmax layer (removing need to use a Lambda layer in order to specify the axis argument).
  • Add SeparableConv1D layer.
  • In preprocessing.image.ImageDataGenerator, allow width_shift_range and height_shift_range to take integer values (absolute number of pixels)
  • Support return_state in Bidirectional applied to RNNs (return_state should be set on the child layer).
  • The string values "crossentropy" and "ce" are now allowed in the metrics argument (in model.compile()), and are routed to either categorical_crossentropy or binary_crossentropy as needed.
  • Allow steps argument in predict_* methods on the Sequential model.
  • Add oov_token argument in preprocessing.text.Tokenizer.

Breaking changes

  • In preprocessing.image.ImageDataGenerator, shear_range has been switched to use degrees rather than radians (for consistency). This should not actually break anything (neither training nor inference), but keep this change in mind in case you see any issues with regard to your image data augmentation process.

Credits

Thanks to our 45 contributors whose commits are featured in this release:

@Dref360, @OliPhilip, @TimZaman, @bbabenko, @bdwyer2, @berkatmaca, @caisq, @decrispell, @dmaniry, @fchollet, @fgaim, @gabrieldemarmiesse, @gklambauer, @hgaiser, @hlnull, @icyblade, @jgrnt, @kashif, @kouml, @lutzroeder, @m-mohsen, @mab4058, @manashty, @masstomato, @mihirparadkar, @myutwo150, @nickbabcock, @novotnj3, @obsproth, @ozabluda, @philferriere, @piperchester, @pstjohn, @roatienza, @souptc, @spiros, @srs70187, @sumitgouthaman, @taehoonlee, @tigerneil, @titu1994, @tobycheese, @vitaly-krumins, @yang-zhang, @ziky90

keras - Keras 2.1.2

Published by fchollet almost 7 years ago

Areas of improvement

  • Bug fixes and performance improvements.
  • API improvements in Keras applications, generator methods.

API changes

  • Make preprocess_input in all Keras applications compatible with both Numpy arrays and symbolic tensors (previously only supported Numpy arrays).
  • Allow the weights argument in all Keras applications to accept the path to a custom weights file to load (previously only supported the built-in imagenet weights file).
  • steps_per_epoch behavior change in generator training/evaluation methods:
    • If specified, the specified value will be used (previously, for a generator of type Sequence, the specified value was overridden by the Sequence length)
    • If unspecified and if the generator passed is a Sequence, we set it to the Sequence length.
  • Allow workers=0 in generator training/evaluation methods (will run the generator in the main process, in a blocking way).
  • Add interpolation argument in ImageDataGenerator.flow_from_directory, allowing a custom interpolation method for image resizing.
  • Allow gpus argument in multi_gpu_model to be a list of specific GPU ids.
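
The new steps_per_epoch resolution rule can be sketched as a small helper (illustrative, not Keras code): an explicit value wins; otherwise, if the generator is a Sequence (i.e. has a length), its length is used:

```python
def resolve_steps(steps_per_epoch, generator):
    """Sketch of the resolution rule described above."""
    if steps_per_epoch is not None:
        return steps_per_epoch          # explicit value always wins
    if hasattr(generator, "__len__"):
        return len(generator)           # Sequence-like: use its length
    raise ValueError("steps_per_epoch must be specified for plain generators")

print(resolve_steps(None, [0, 1, 2]))  # 3
print(resolve_steps(5, [0, 1, 2]))     # 5
```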

Breaking changes

  • The change in steps_per_epoch behavior (described above) may affect some users.

Credits

Thanks to our 26 contributors whose commits are featured in this release:

@Alex1729, @alsrgv, @apisarek, @asos-saul, @athundt, @cherryunix, @dansbecker, @datumbox, @de-vri-es, @drauh, @evhub, @fchollet, @heath730, @hgaiser, @icyblade, @jjallaire, @knaveofdiamonds, @lance6716, @luoch, @mjacquem1, @myutwo150, @ozabluda, @raviksharma, @rh314, @yang-zhang, @zach-nervana

keras - Keras 2.1.1

Published by fchollet almost 7 years ago

This release amends release 2.1.0 to include a fix for an erroneous breaking change introduced in #8419.

keras - Keras 2.1.0

Published by fchollet almost 7 years ago

This is a small release that fixes outstanding bugs that were reported since the previous release.

Areas of improvement

  • Bug fixes (in particular, Keras no longer allocates devices at startup time with the TensorFlow backend. This was causing issues with Horovod.)
  • Documentation and docstring improvements.
  • Better CIFAR10 ResNet example script and improvements to example scripts code style.

API changes

  • Add go_backwards to cuDNN RNNs (enables Bidirectional wrapper on cuDNN RNNs).
  • Add ability to pass fetches to K.Function() with the TensorFlow backend.
  • Add steps_per_epoch and validation_steps arguments in Sequential.fit() (to sync it with Model.fit()).

Breaking changes

None.

Credits

Thanks to our 14 contributors whose commits are featured in this release:

@Dref360, @LawnboyMax, @anj-s, @bzamecnik, @datumbox, @diogoff, @farizrahman4u, @fchollet, @frexvahi, @jjallaire, @nsuh, @ozabluda, @roatienza, @yakigac

keras - Keras 2.0.9

Published by fchollet almost 7 years ago

Areas of improvement

  • RNN improvements:
    • Refactor RNN layers to rely on atomic RNN cells. This makes the creation of custom RNNs very simple and user-friendly, via the RNN base class.
    • Add ability to create new RNN cells by stacking a list of cells, allowing for efficient stacked RNNs.
    • Add CuDNNLSTM and CuDNNGRU layers, backed by NVIDIA's cuDNN library for fast GPU training & inference.
    • Add RNN Sequence-to-sequence example script.
    • Add constants argument in RNN's call method, making RNN attention easier to implement.
  • Easier multi-GPU data parallelism via keras.utils.multi_gpu_model.
  • Bug fixes & performance improvements (in particular, native support for NCHW data layout in TensorFlow).
  • Documentation improvements and examples improvements.

API changes

  • Add "fashion mnist" dataset as keras.datasets.fashion_mnist.load_data()
  • Add Minimum merge layer as keras.layers.Minimum (class) and keras.layers.minimum(inputs) (function)
  • Add InceptionResNetV2 to keras.applications.
  • Support bool variables in TensorFlow backend.
  • Add dilation to SeparableConv2D.
  • Add support for dynamic noise_shape in Dropout
  • Add keras.layers.RNN() base class for batch-level RNNs (used to implement custom RNN layers from a cell class).
  • Add keras.layers.StackedRNNCells() layer wrapper, used to stack a list of RNN cells into a single cell.
  • Add CuDNNLSTM and CuDNNGRU layers.
  • Deprecate implementation=0 for RNN layers.
  • The Keras progbar now reports time taken for each past epoch, and average time per step.
  • Add option to specify the resampling method in keras.preprocessing.image.load_img().
  • Add keras.utils.multi_gpu_model for easy multi-GPU data parallelism.
  • Add constants argument in RNN's call method, used to pass a list of constant tensors to the underlying RNN cell.
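
The cell-stacking idea above can be shown with a framework-free sketch (illustrative, not the StackedRNNCells implementation): each cell maps (input, state) to (output, new state), the output of one cell feeds the next, and states are kept per cell:

```python
def stack_cells(cells):
    """Compose a list of RNN cells into one composite cell."""
    def composite(x, states):
        new_states = []
        for cell, s in zip(cells, states):
            x, s = cell(x, s)       # output of each cell feeds the next
            new_states.append(s)
        return x, new_states
    return composite

# Hypothetical toy cell: output and new state are both input + state.
def accumulator_cell(x, state):
    out = x + state
    return out, out

stacked = stack_cells([accumulator_cell, accumulator_cell])
x, states = 1.0, [0.0, 0.0]
for _ in range(3):  # run three timesteps on a constant input of 1.0
    x, states = stacked(1.0, states)
print(states)  # [3.0, 6.0]
```

Stacking at the cell level (rather than layer level) is what makes deep recurrence efficient: one scan over time drives the whole stack.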

Breaking changes

  • Implementation change in keras.losses.cosine_proximity results in a different (correct) scaling behavior.
  • Implementation change for samplewise normalization in ImageDataGenerator results in a different normalization behavior.

Credits

Thanks to our 59 contributors whose commits are featured in this release!

@Alok, @Danielhiversen, @Dref360, @HelgeS, @JakeBecker, @MPiecuch, @MartinXPN, @RitwikGupta, @TimZaman, @adammenges, @aeftimia, @ahojnnes, @akshaychawla, @alanyee, @aldenks, @andhus, @apbard, @aronj, @bangbangbear, @bchu, @bdwyer2, @bzamecnik, @cclauss, @colllin, @datumbox, @deltheil, @dhaval067, @durana, @ericwu09, @facaiy, @farizrahman4u, @fchollet, @flomlo, @fran6co, @grzesir, @hgaiser, @icyblade, @jsaporta, @julienr, @jussihuotari, @kashif, @lucashu1, @mangerlahn, @myutwo150, @nicolewhite, @noahstier, @nzw0301, @olalonde, @ozabluda, @patrikerdes, @podhrmic, @qin, @raelg, @roatienza, @shadiakiki1986, @smgt, @souptc, @taehoonlee, @y0z

keras - Keras 2.0.8

Published by fchollet about 7 years ago

The primary purpose of this release is to address an incompatibility between Keras 2.0.7 and the next version of TensorFlow (1.4). TensorFlow 1.4 isn't due for a while, but the sooner the PyPI release includes the fix, the fewer people will be affected when upgrading to the next TensorFlow version when it gets released.

No API changes for this release. A few bug fixes.

keras - Keras 2.0.7

Published by fchollet about 7 years ago

Areas of improvement

  • Bug fixes.
  • Performance improvements.
  • Documentation improvements.
  • Better support for training models from data tensors in TensorFlow (e.g. Datasets, TFRecords). Add a related example script.
  • Improve TensorBoard UX with better grouping of ops into name scopes.
  • Improve test coverage.

API changes

  • Add clone_model method, making it possible to construct a new model given an existing model to use as a template. Works even in a TensorFlow graph different from that of the original model.
  • Add target_tensors argument in compile, making it possible to use custom tensors or placeholders as model targets.
  • Add steps_per_epoch argument in fit, making it possible to train a model from data tensors in a way that is consistent with training from Numpy arrays.
  • Similarly, add steps argument in predict and evaluate.
  • Add Subtract merge layer, and associated layer function subtract.
  • Add weighted_metrics argument in compile to specify metric functions meant to take into account sample_weight or class_weight.
  • Make the stop_gradients backend function consistent across backends.
  • Allow dynamic shapes in repeat_elements backend function.
  • Enable stateful RNNs with CNTK.

Breaking changes

  • The backend methods categorical_crossentropy, sparse_categorical_crossentropy, binary_crossentropy had the order of their positional arguments (y_true, y_pred) inverted. This change does not affect the losses API. This change was done to achieve API consistency between the losses API and the backend API.
  • Move constraint management to be based on variable attributes. Remove the now-unused constraints attribute on layers and models (not expected to affect any user).

Credits

Thanks to our 47 contributors whose commits are featured in this release!

@5ke, @Alok, @Danielhiversen, @Dref360, @NeilRon, @abnera, @acburigo, @airalcorn2, @angeloskath, @athundt, @brettkoonce, @cclauss, @denfromufa, @enkait, @erg, @ericwu09, @farizrahman4u, @fchollet, @georgwiese, @ghisvail, @gokceneraslan, @hgaiser, @inexxt, @joeyearsley, @jorgecarleitao, @kennyjacob, @keunwoochoi, @krizp, @lukedeo, @milani, @n17r4m, @nicolewhite, @nigeljyng, @nyghtowl, @nzw0301, @rapatel0, @souptc, @srinivasreddy, @staticfloat, @taehoonlee, @td2014, @titu1994, @tleeuwenburg, @udibr, @waleedka, @wassname, @yashk2810

keras - Keras 2.0.6

Published by fchollet over 7 years ago

Areas of improvement

  • Improve generator methods (predict_generator, fit_generator, evaluate_generator) and add data enqueuing utilities.
  • Bug fixes and performance improvements.
  • New features: new Conv3DTranspose layer, new MobileNet application, self-normalizing networks.

API changes

  • Self-normalizing networks: add selu activation function, AlphaDropout layer, lecun_normal initializer.
  • Data enqueuing: add Sequence, SequenceEnqueuer, GeneratorEnqueuer to utils.
  • Generator methods: rename arguments pickle_safe (replaced with use_multiprocessing) and max_q_size (replaced with max_queue_size).
  • Add MobileNet to the applications module.
  • Add Conv3DTranspose layer.
  • Allow custom print functions for model's summary method (argument print_fn).
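
The selu activation has a closed form, so it can be sketched without any framework; the constants below are the fixed values from the SELU paper (Klambauer et al., 2017):

```python
import math

ALPHA = 1.6732632423543772   # SELU alpha constant
SCALE = 1.0507009873554805   # SELU scale (lambda) constant

def selu(x):
    """scale * x for x > 0, scale * alpha * (exp(x) - 1) otherwise."""
    return SCALE * x if x > 0 else SCALE * ALPHA * (math.exp(x) - 1)

print(selu(1.0))   # ≈ 1.0507
print(selu(-1.0))  # ≈ -1.1113
```

Paired with the lecun_normal initializer and AlphaDropout, these constants are what give self-normalizing networks their fixed-point mean/variance behavior.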