chainer

A flexible framework of neural networks for deep learning

MIT License

Downloads
86.6K
Stars
5.9K
Committers
298
chainer - v4.2.0

Published by hvy over 6 years ago

This is the release note of v4.2.0. See here for the complete list of solved issues and merged PRs.

Highlights

  • return_indices option has been added to F.max_pooling_2d and F.max_pooling_nd, so that you can access indices (or indexes) without using MaxPooling2D and MaxPoolingND classes directly.
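In Chainer this looks roughly like `y, indices = F.max_pooling_2d(x, 2, return_indices=True)`. The semantics of the returned indices can be illustrated in plain NumPy; the helper below is our own sketch (not Chainer's implementation, whose exact index format may differ):

```python
import numpy as np

def max_pooling_2d_with_indices(x, ksize):
    """Toy max pooling over a 2-D array that also returns the flat
    index (into x) of each maximum, mirroring the return_indices idea."""
    h, w = x.shape
    oh, ow = h // ksize, w // ksize
    out = np.empty((oh, ow), dtype=x.dtype)
    idx = np.empty((oh, ow), dtype=np.int64)
    for i in range(oh):
        for j in range(ow):
            patch = x[i*ksize:(i+1)*ksize, j*ksize:(j+1)*ksize]
            k = patch.argmax()
            out[i, j] = patch.flat[k]
            # Convert the patch-local argmax back to a flat index into x.
            pi, pj = divmod(k, ksize)
            idx[i, j] = (i*ksize + pi) * w + (j*ksize + pj)
    return out, idx

x = np.array([[1., 2., 3., 4.],
              [5., 6., 7., 8.],
              [9., 10., 11., 12.],
              [13., 14., 15., 16.]])
out, idx = max_pooling_2d_with_indices(x, 2)
```

The indices can then be used, for example, to route gradients in an unpooling step without re-running the pooling forward pass.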

New Features

  • Add more functions with double-backprop support: F.prod (#4826) and F.contrastive (#4861)
  • Support importing Caffe Reshape layer (#4905)

Enhancements

  • Raise an error when CUDA context has been initialized before MultiprocessParallelUpdater is initialized (#4750)
  • Output the direction of differentiation in gradient_check error (#4817)
  • Show progress report to stderr (#4827)
  • Optimize F.bilinear (#4834)
  • Remove redundant device allocation in word2vec example (#4856, thanks @arisliang!)
  • Issue a warning when repeat attribute of some iterators in Evaluator is True (#4865)
  • Improve error message and exception type during MultiprocessParallelUpdater initialization (#4867)
  • Raise an error if neither __call__ nor forward is defined in a link (#4888)
  • Deprecate F.fixed_batch_renormalization (#4940)
  • Add return_indices option to F.max_pooling_2d (#4952)
  • Add return_indices option to F.max_pooling_nd (#4953)
  • Add mask and return_mask option to F.dropout (#4954)
  • Add eps and return_eps option to F.gaussian (#4955)
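The new mask/return_mask option for F.dropout lets a dropout pattern be captured and replayed across calls. A NumPy sketch of the idea (our illustration, not Chainer code):

```python
import numpy as np

def dropout_with_mask(x, ratio=0.5, mask=None):
    """Sketch of the mask/return_mask idea: pass a previously returned
    mask back in to reproduce the exact same dropout pattern."""
    if mask is None:
        rng = np.random.default_rng(0)
        scale = 1.0 / (1.0 - ratio)
        # Keep each element with probability (1 - ratio); rescale survivors.
        mask = (rng.random(x.shape) >= ratio) * scale
    return x * mask, mask

x = np.ones((2, 3), dtype=np.float32)
y1, mask = dropout_with_mask(x, 0.5)
y2, _ = dropout_with_mask(x, 0.5, mask=mask)  # deterministic replay
```

Reusing the mask this way is handy when two forward passes must apply identical noise, e.g. for gradient checking.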

Bug Fixes

  • Select devices appropriately when calling user code in ParallelUpdater (#4842)
  • Fix F.rollaxis backward when axis=0 and start=0 (#4843)
  • Raise an error when devices argument given to MultiprocessParallelUpdater is neither dict nor list (#4847)
  • Fix error message when checking types of grads (#4885, thanks @reo911gt3!)
  • Fix Caffe exporter to export eps of the BatchNormalization layer (#4913)
  • Fix SigmoidCrossEntropy.backward (#4924)
  • Fix ValueError in Caffe exporter on Windows with Python 2.7 (#4929)
  • Remove wrong-length check from GetItem.check_type_forward (#4944)

Documentation

  • Add notes on differences between v1 and later versions to the document of the extract method of vision models (ResNet, VGG16, GoogLeNet) (#4941)
  • Fix some typos in tutorials (#4802, thanks @elvisyjlin!)
  • Update URL in README (#4803)
  • Fix comment in caffe.py (#4815, thanks @poutyface!)
  • Fix a documentation bug in TransformDataset (#4819)
  • Remove Slack archive link from README.md (#4840, thanks @hidetomasuoka!)
  • Improve documentation about wrapper functions (#4850)
  • Document call_for_each_param attribute in optimizer hooks (#4857)
  • Correct docs of batch_{re}normalization about running averages (#4870, thanks @grafi-tt!)
  • Add type-check troubleshooting to FAQ (#4883)
  • Fix gradient_check docstring to enable doctest (#4926)
  • Replace 0xa0 (nbsp) with 0x20 (space) in documentation (#4930)
  • Update documentation to explicitly suggest using ideep4py v1 (#4939)
  • Document performance best practices (#4942)
  • Improve function reference and documentation (#4943)

Installation

  • Fix required version of CuPy (#4830)
  • Remove deprecated imp.load_source in setup.py (#4859, thanks @vilyaair!)

Examples

  • Fix custom loop examples to disable train mode during evaluation (#4804)

Tests

  • Separate MultiprocessParallelUpdater tests (#4801)
  • Fix test warning filter for Theano 1.0.2 which triggers DeprecationWarning (#4820)
  • Fix input checks in F.min and F.max tests (#4841)
  • Set output grad in TestCUDAProfileHook (#4866)
  • Mark multi_gpu tests as gpu tests (#4881)
  • Remove flake8 from AppVeyor test (#4882)
  • Deselect gpu tests instead of skipping them (#4906)
  • Stop using freeze_running_statistics in TestFixedBatchRenormalization (#4908)
  • Add backprop tests (#4916)
  • Tentatively restrict pytest-timeout version to <1.3.0 (#4919)
  • Check docs build in Travis CI (#4931)
  • Delete redundant use of no_grads argument of check_backward from test (#4938)
  • Fix occasional test failure of F.average by changing lower-bound of the sum of weights (#4960)
  • Remove unnecessary tests (#4967)
chainer - v5.0.0b2

Published by niboshi over 6 years ago

This is the release note of v5.0.0b2. See here for the complete list of solved issues and merged PRs.

Highlights

  • New configuration value chainer.config.dtype has been introduced. This configuration can be used to switch your model to run with float16 / float32 / float64 without modifying your code. In this version of Chainer, this configuration is supported by initializers, built-in datasets, and some of the built-in Links. We're going to improve all built-in Links to support this feature towards the final v5 release (#4582).
  • New module chainer.distributions has been introduced (see API Reference). We're going to provide more probability distribution implementations towards the final v5 release (#4678).
  • Some functions (Variable operators and F.matmul) now support NumPy-style broadcasting. We're going to extend broadcasting support to more built-in Functions towards the final v5 release (#4679).
  • L.ConvolutionND and L.DeconvolutionND now support grouped and dilated convolution.
  • Interoperability with Caffe model has been improved for both export (Deconvolution and LeakyReLU functions) and import (Deconvolution and Reshape layer).
  • return_indices option has been added to F.max_pooling_2d and F.max_pooling_nd, so that you can access indices (or indexes) without using the MaxPooling2D and MaxPoolingND classes directly.
  • chainer.datasets.TextDataset has been introduced to reduce host memory usage when loading large text files. See the seq2seq example for example usage.
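The NumPy-style broadcasting that Variable operators and F.matmul now follow can be shown with plain NumPy (this runs without Chainer; whether F.matmul matches numpy.matmul in every corner case is not claimed here):

```python
import numpy as np

# Broadcasting aligns shapes from the right and stretches size-1 axes.
a = np.arange(6, dtype=np.float32).reshape(2, 3)   # shape (2, 3)
b = np.array([10., 20., 30.], dtype=np.float32)    # shape (3,)
c = a + b                                          # -> shape (2, 3)

# matmul broadcasts the leading (batch) dimensions the same way,
# while the trailing two axes follow matrix-multiplication rules.
m = np.ones((4, 1, 2, 3), dtype=np.float32)
n = np.ones((5, 3, 2), dtype=np.float32)
p = m @ n                                          # -> shape (4, 5, 2, 2)
```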

Changes without compatibility

  • Make updaters call new_epoch automatically (#4608)
    • This change should affect only a minority of users (who call Optimizer.new_epoch while using a trainer, or who implement their own updater class). See the Upgrade Guide for details.
  • Fix no_grads option of check_backward (#4654)

New Features

  • Add L.DeformableConvolution2D (#2468)
  • Support skipping padded regions in F.average_pooling_nd (#3486)
  • Support double-backprop in F.contrastive (#3830)
  • Add M-SVAG optimizer (#4473)
  • Add chainer.config.dtype and use it in initializers and dataset loaders. (#4510)
  • Add InverseShift extension (#4565, thanks @jinjiren!)
  • Support grouped and dilated convolutions for ConvolutionND and DeconvolutionND (#4591)
  • Make updaters call new_epoch automatically (#4608)
  • Add group normalization (#4638, thanks @lyakaap!)
  • Support remove/replace child link in ChainList (#4660, thanks @insanity!)
  • Automatic broadcast in basic_math and matmul (#4679)
  • Add normal distribution (#4773)
  • Implement TextDataset to load line-oriented text files (#4782)
  • Add CorrectedMomentumSGD optimizer (#4835)
  • Support importing Caffe Reshape layer (#4875)
  • Add F.diagonal (#4901)
  • Add caffe import for Deconvolution layer; Fix num_output parameter in caffe export of Deconvolution layer (#4936, thanks @tsurumeso!)
  • Add caffe export for LeakyReLU layer (#4937, thanks @tsurumeso!)

Enhancements

  • Issue a warning when repeat attribute of some iterators in Evaluator is True (#3436)
  • Use input_num to remove array creation overhead (#4162)
  • Avoid using retained inputs in F.rsqrt (#4614)
  • Assign user-friendly name for better type check exception message (#4680)
  • Improve error message and exception type during MultiprocessParallelUpdater initialization (#4751)
  • Optimize F.bilinear (#4752)
  • Show progress report to stderr (#4766)
  • Raise an error if neither __call__ nor forward is defined in a link (#4770)
  • Output the direction of differentiation in gradient_check error (#4797)
  • Allow more than 2 axis in F.normalize (#4799)
  • Remove redundant device allocation in word2vec example (#4848, thanks @arisliang!)
  • Deprecate F.fixed_batch_renormalization (#4869)
  • Improve argument processing performance (#4880)
  • Add return_indices option to F.max_pooling_2d (#4890)
  • Add return_indices option to F.max_pooling_nd (#4891)
  • Remove experimental warning (#4896)
  • Add mask and return_mask option to F.dropout (#4907)
  • Add eps and return_eps option to F.gaussian (#4909)
  • Simplify type checks in F.broadcast using type_check.expect_broadcast_shapes (#4947)

Bug Fixes

  • Fix stride in the ResNeXt50 example (#4479, thanks @akitotakeki!)
  • Fix no_grads option of check_backward (#4654)
  • Raise an error when devices argument given to MultiprocessParallelUpdater is neither dict nor list (#4716)
  • Fix a stability issue in LayerNormalization (#4744, thanks @anaruse!)
  • Select devices appropriately when calling user code in ParallelUpdater (#4774)
  • Fix ResNetLayers to not pass-through downsample_fb argument to L.Convolution2D (#4829)
  • Fix F.rollaxis backward when axis=0 and start=0 (#4836)
  • Remove wrong-length check from GetItem.check_type_forward (#4845)
  • Fix error message when checking types of grads (#4879, thanks @reo911gt3!)
  • Fix Caffe exporter to export eps of BatchNormalization layer (#4884)
  • Fix SigmoidCrossEntropy.backward (#4915)
  • Fix ValueError in caffe exporter on Windows with Python 2.7 (#4928)
  • Fix operations in ChainList (#4945)
  • Raise an error on 0-dim input in F.matmul (#4949)

Documentation

  • Improve documentation about wrapper functions (#4052)
  • Add notes on differences between v1 and later versions of the extract method of vision models (ResNet, VGG16, GoogLeNet) (#4675)
  • Improve CNN example docs (#4714)
  • Document the pros and cons of the multi-GPU updaters (#4778)
  • Add type-check troubleshooting to FAQ (#4785)
  • Fix comment in caffe.py (#4808, thanks @poutyface!)
  • Document performance best practices (#4809)
  • Fix a documentation bug in TransformDataset (#4814)
  • Correct docs of batch_{re}normalization about running averages (#4818, thanks @grafi-tt!)
  • Add testing utilities to documentation (#4828)
  • Remove slack archive link from README.md (#4839, thanks @hidetomasuoka!)
  • Document call_for_each_param attribute in optimizer hooks (#4849)
  • Improve function reference and documentation (#4855)
  • Add distribution reference (#4886)
  • Fix gradient_check docstring to enable doctest (#4889)
  • Fix normal distribution docs (#4892)
  • Fix Adam alpha argument explanation (#4895)
  • Improve the documentation of F.moveaxis and discourage F.rollaxis (#4900)
  • Replace 0xa0 (nbsp) with 0x20 (space) in documentation (#4921)
  • Update documentation to explicitly suggest using ideep4py v1 (#4934)
  • Add ZippedImageDataset and MultiZippedImageDataset to documentation (#4959)

Installation

  • Remove deprecated imp.load_source in setup.py (#4846, thanks @vilyaair!)

Examples

  • Add pix2pix example (#4271)
  • Refactor sentiment example (#4694)
  • Add serialization example (#4740)
  • Simplify classification code in MNIST custom loop example (#4837)

Tests

  • Fix occasional test failure of F.average by changing lower-bound of weight.sum (#4771)
  • Add backprop tests (#4779)
  • Fix test warning filter for Theano 1.0.2 which triggers DeprecationWarning (#4810)
  • Test F.layer_normalization with large eps (#4816)
  • Fix input checks in F.min and F.max tests (#4838)
  • Remove flake8 from AppVeyor test (#4852)
  • Set output grad in TestCUDAProfileHook (#4863)
  • Stop using freeze_running_statistics in TestFixedBatchRenormalization (#4868)
  • Improve batch renormalization tests (#4871)
  • Mark multi_gpu tests as gpu tests (#4872)
  • Deselect gpu tests instead of skipping them (#4873)
  • Catch warnings in test_count_params (#4876)
  • Delete redundant use of no_grads argument of check_backward from test (#4914)
  • Check docs build in Travis CI (#4917)
  • Tentatively restrict pytest-timeout version to <1.3.0 (#4918)
  • Fix keyword argument name in TestResNetLayers (#4965)
  • Remove unnecessary tests (#4966)
chainer - v5.0.0b1

Published by hvy over 6 years ago

This is the release note of v5.0.0b1. See here for the complete list of solved issues and merged PRs.

New Features

  • Add order_sampler option to Iterators (#3429)
  • New style prod function (#3764)
  • Moveaxis array operation proposal (#4112, thanks @fukatani!)
  • Add n_batch_axes option to F.linear (#4204)
  • Support Layer-wise Adaptive Rate Scaling (LARS) (#4237, thanks @tohmae!)
  • Sparse matmul support (#4397, thanks @anaruse!)
  • Optimize grouped convolution for intel64 backed environment (#4450, thanks @tkng!)
  • Add ignore_names option to load_npz (#4682)
  • Add support for PolynomialShift to the training extensions (#4693, thanks @tianshilei1992!)

Enhancements

  • Use rsqrt in F.batch_normalization (#4612)
  • Raise an error when CUDA has been initialized before MultiprocessParallelUpdater is initialized (#4717)
  • Fix performance regression in F.bilinear (#4738)
  • Support copying 1-dim array to iDeep (#4746)
  • Optimize Huber loss by simpler calculation (#4775, thanks @grafi-tt!)

Bug Fixes

  • Fix Chain.repeat raising an error (#4649, thanks @mori97!)
  • Fix ChainList.copy not supporting mode argument (#4652)
  • Warn only if the BatchNormalization input tensor is 2-dimensional (#4663)
  • Fix import failure when matplotlib 1.x is installed (#4681)
  • Select current device using grads of outputs in addition to inputs during backward (#4725)
  • Fix dtype bug in F.normalize (#4763)
  • Fix lazy_grad_sum debug mode (#4768)

Documentation

  • Fix heading level of Caffe docs (#4631)
  • Fix typos in seq2seq tutorial document (#4643, thanks @kuni-kuni!)
  • Update URL in README (#4657)
  • Fix F.batch_normalization axis document (#4666)
  • Fix docs and tests of L.BatchNormalization (#4671)
  • Update train_loop.rst (#4700, thanks @arisliang!)
  • Update math formula typo (#4702, thanks @arisliang!)
  • Update word2vec.rst (#4705, thanks @arisliang!)
  • Add example for transpose_sequence (#4719)
  • Update seq2seq.rst (#4721, thanks @arisliang!)
  • Add doc build steps to the contribution guide (#4722)
  • Fix dead link to CuPy installation guide (#4724)
  • Update top sentences in documentation (#4732)
  • Add abs to document of Variable (#4757)
  • Fix some typos in tutorials (#4760, thanks @elvisyjlin!)
  • Sparse matrix documentation enhancements (#4786)

Examples

  • Mini-batch training for recursive neural networks example (#2135)
  • Fix custom loop examples to disable train mode during evaluation (#4568)

Tests

  • Unify TestBatchNormalization and TestBatchNormalizationAxis (#4558)
  • L.BatchNormalization: miscellaneous fixes (#4671)
  • Support skipping tests decorated by condition or parametrize (#4685)
  • Separate MultiprocessParallelUpdater tests (#4726)
  • Fix hacking version (#4731)
  • Fix incorrect attributes and incorrect indentation in pooling tests (#4743)
chainer - v4.1.0

Published by niboshi over 6 years ago

This is the release note of v4.1.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Provide a cleaner way to collect threads and processes in MultiprocessIterator (#4637)
  • Support Layer-wise Adaptive Rate Scaling (LARS) (#4668, thanks @tohmae!)
  • Optimize grouped convolution for intel64 backed environment (#4787, thanks @tkng!)

Enhancements

  • Improve F.rsqrt performance in CPU (#4634)
  • Fix temporary file permission issue in LogReport (#4635)
  • Use rsqrt in F.batch_normalization (#4665)
  • Fix performance regression in F.bilinear (#4762)
  • Optimize Huber loss by simpler calculation (#4788, thanks @grafi-tt!)
  • Support copying 1-dim array to iDeep (#4794)

Bug Fixes

  • Fix Chain.repeat raising an error (#4653, thanks @mori97!)
  • Fix ChainList.copy not supporting mode argument (#4664)
  • Fix import failure when matplotlib 1.x is installed (#4689)
  • Fix lazy_grad_sum debug mode (#4776)
  • Select current device using grads of outputs in addition to inputs during backward (#4780)
  • Fix dtype bug in F.normalize (#4781)

Documentation

  • Add tutorial for Reporter and report (#4632)
  • Rewrite installation guide to align with CuPy's installation guide (#4633)
  • Add higher-order derivative support of Chainer to the comparison table (#4636)
  • Fix typos in seq2seq tutorial document (#4651, thanks @kuni-kuni!)
  • Fix heading level of Caffe docs (#4690)
  • Update word2vec.rst (#4713, thanks @arisliang!)
  • Update math formula typo (#4715, thanks @arisliang!)
  • Fix dead link to CuPy installation guide (#4727)
  • Update seq2seq.rst (#4728, thanks @arisliang!)
  • Update top sentences in documentation (#4734)
  • Add example for transpose_sequence (#4745)
  • Update train_loop.rst (#4759, thanks @arisliang!)
  • Add abs to document of Variable (#4764)
  • Add doc build steps to the contribution guide (#4784)

Tests

  • Support skipping tests decorated by condition or parametrize (#4688)
  • Fix hacking version (#4733)
  • Fix incorrect attributes and incorrect indentation in pooling tests (#4747)
chainer - v4.0.0

Published by kmaehashi over 6 years ago

This is a major release of Chainer v4.0.0. All the updates from the previous major version (v3.5.0) are found in the release notes below:

Summary of v4 update

  • Major Performance Improvements
    • Support iDeep backend for acceleration on Intel CPUs. We observed that GoogLeNet inference with batch size 1 runs 8.9x faster with iDeep than without it (both measured on an Intel(R) Xeon(R) CPU E5-2623 v3 @ 3.00GHz with MKL).
    • Support cuDNN convolution autotuning
  • Better FP16 training support
    • TensorCore
    • Loss scaling
  • Caffe export support (experimental)
  • NCCL2 support
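The loss scaling listed above protects float16 training from gradient underflow: the loss is multiplied by a constant before backprop, and gradients are divided by the same constant before the parameter update. A NumPy sketch of the arithmetic (the scale value is illustrative):

```python
import numpy as np

scale = np.float32(1024.0)
true_grad = 1e-8  # a gradient magnitude float16 cannot represent

# Without scaling, the gradient underflows to zero in float16.
naive = np.float16(true_grad)

# With loss scaling, the loss (and hence every gradient) is computed
# multiplied by `scale`, stored in float16, then divided back out in
# float32 before the parameter update.
scaled = np.float16(true_grad * scale)   # survives as a float16 value
recovered = np.float32(scaled) / scale   # close to the true gradient
```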

See the blog post for the details. Also see the Upgrade Guide for users migrating from Chainer v3 to v4.

Updates from the release candidate are as follows.

New Features

  • Support double-backprop in forget function (#4522)
  • Implement Variable.xp (#4536)
  • Add Sequential class for easy model definition of a single-stream computational graph (#4601)
  • Add extension to kill training when NaN or Inf is detected (FailOnNonNumber) (#4602)
  • Implement interface to print runtime information (#4613)
  • Implement Caffe export (#4621)

Enhancements

  • Support deserializing into array of different dtype (#4529)
  • Support dilated convolution on iDeep backend (#4540, thanks @LuoYuanke!)
  • Optimize depth2space & space2depth funcs by reducing operations (#4590, thanks @ruimashita!)
  • Avoid mutable default arguments (#4600)
  • Improve chainer.utils.argument exception message (#4615)
  • Removed redundant member variables (#4627)

Bug Fixes

  • Remove eps from batch normalization statistics (#4517)
  • Call FunctionHooks in chainer.grad (#4541)
  • Fix group argument of {de,}convolution_2d to keyword-only argument. (#4573)
  • Fix to_intel64 not updating VariableNode.data (#4597)
  • Fix to_intel64 to check if it is suitable for iDeep (#4598)
  • Fix seq2seq example ignoring limitation for target length (#4617)
  • Save snapshot in the OS default permission (#4618, thanks @belltailjp!)
  • Fix docstrings of Sequential class (#4628)

Documentation

  • Document arithmetic special functions (#4515)
  • Fix debug mode documentation (#4519)
  • Add document on using iDeep (#4520)
  • Fix order of items in Reference section (#4526)
  • Improve docs for supported operators in Variable (#4527)
  • Improve docstring of CTC (#4531)
  • Improve description of Extensions (#4561)
  • Fix a typo (#4570)
  • Add upgrade guide for optimizer_hooks namespace (#4571)
  • Fix docs in {convolution,deconvolution}_2d (#4575)
  • Fix typo (#4576, thanks @Hakuyume!)
  • Fix document in EarlyStoppingTrigger (#4584, thanks @mori97!)
  • Add document for chainer.config.lazy_grad_sum (#4585)
  • Improve docstrings about axis of Caffe functions (#4603)
  • Improve docs of get_device_from_array (#4624)
  • Fix docstrings of Sequential (#4628)
  • Add FAQ for MultiprocessIterator + OpenCV problem (#4629)

Installation

  • Use --no-cache-dir in Dockerfile (#4535)
  • Fix installation order of hacking (#4609)

Examples

  • Fix seq2seq example ignoring limitation for target length (#4617)

Tests

  • Travis test against v4 branch (#4518)
  • Fix doctest (#4521)
  • Simplify TestSoftmaxCrossEntropyInvalidReduce in test_softmax_cross_entropy (#4579)
  • Fix installation order of hacking (#4609)
  • Fix and simplify TestForwardConsistency in softmax_cross_entropy (#4626)
chainer - v5.0.0a1

Published by niboshi over 6 years ago

This is the release of v5.0.0a1. See here for the complete list of solved issues and merged PRs.

New Features

  • Add Sequential class for easy model definition of a single-stream computational graph (#2918)
  • Add count_params() method to Link, which makes it easy to count the number of trainable values in a Link (#3101)
  • Implement Caffe export which can export a Chainer model into Caffe protobuf format (#3631)
  • Add more support on double backward: F.forget (#3792)
  • Provide a cleaner way to collect threads and processes in MultiprocessIterator (#4155)
  • Add axis option to batch normalization (#4266, thanks @anaruse!)
  • Add add_extra option to SVHN (#4478, thanks @akitotakeki!)
  • Implement Variable.xp (#4497)
  • Add a new Trainer extension to kill the training when NaN or Inf is detected in the model parameters (FailOnNonNumber) (#4545)
  • Add chainer.print_runtime_info() method to summarize the versions of libraries used in Chainer and CuPy (#4559)

Enhancements

  • Remove submodule aliases (#4378)
  • Avoid mutable default arguments (#4419)
  • Avoid use of from __future__ print_function (#4470)
  • Optimize F.depth2space and F.space2depth by reducing operations (#4482, thanks @ruimashita!)
  • Support deserializing into array of different dtypes (#4511)
  • Fix temporary file permission issue in LogReport (#4528)
  • Support dilated convolution on iDeep backend (#4537, thanks @LuoYuanke!)
  • Improve F.rsqrt performance in CPU (#4538)
  • Improve chainer.utils.argument exception message (#4551)
  • Removed redundant member variables (#4580)

Bug Fixes

  • Save snapshot in the OS default permission (#4461, thanks @belltailjp!)
  • Call FunctionHooks in chainer.grad (#4499)
  • Remove eps from batch normalization statistics (#4505)
  • Change group argument of F.convolution_2d and F.deconvolution_2d to keyword-only argument. (#4564)
  • Fix to_intel64 to check if it is suitable for iDeep (#4577)
  • Fix to_intel64 not updating VariableNode.data (#4592)

Documentation

  • Add higher-order derivative support of Chainer to the comparison table (#3477)
  • Add tutorial: How to use chainer.Reporter (#3688)
  • Improve docstring of F.connectionist_temporal_classification (CTC) (#4309)
  • Add upgrade guide for optimizer_hooks namespace (#4468)
  • Add document on using iDeep (#4477)
  • Fix debug mode documentation (#4492)
  • Fix order of items in reference section (#4493)
  • Add a document of arithmetic special functions (#4495)
  • Improve docs for supported operators in Variable (#4516)
  • Fix docs of convolution_2d and deconvolution_2d (#4539)
  • Add document for the configuration chainer.config.lazy_grad_sum (#4543)
  • Improve description of extensions (#4552)
  • Fix dead link to optimizer hooks (#4563)
  • Fix typos (#4569, #4574, thanks @Hakuyume!)
  • Fix document in EarlyStoppingTrigger (#4578, thanks @mori97!)
  • Add FAQ for MultiprocessIterator + OpenCV problem (#4589)
  • Improve docstrings about axis of Caffe functions (#4599)
  • Improve docs of get_device_from_array (#4604)
  • Fix docstrings of Sequential (#4605)
  • Rewrite installation guide to align with CuPy's installation guide (#4622)

Installation

  • Use --no-cache-dir in Dockerfile (#4532)
  • Fix installation order of hacking (#4606)

Examples

  • End-to-end memory networks example (#4222)
  • Fix seq2seq example ignoring limitation for target length (#4611)

Tests

  • Travis test against v4 branch (#4503)
  • Fix doctest (#4506)
  • Fix and simplify TestForwardConsistency in F.softmax_cross_entropy (#4554)
  • Simplify TestSoftmaxCrossEntropyInvalidReduce in F.test_softmax_cross_entropy (#4555)

Others

  • Let the StaleBot ignore issues labelled with “roadmap” (#4496)
chainer - v4.0.0rc1

Published by beam2d over 6 years ago

This is the release candidate of v4. See here for the complete list of solved issues and merged PRs.

Announcements

  • The master branch has been switched to v5 development. The development of v4 will continue in the v4 branch.
  • The major release of v4 is planned on Apr. 17.

New Features

  • New differentiable functions
    • repeat (#3735)
    • fft, ifft (#4241)
    • local_convolution_2d: 2D convolution with spatially unshared weights (#4073, thanks @mihirparadkar!)
    • swish: a new activation function proposed here (#4262, thanks @mizuno-gsinet!)
    • convolution_{1,3}d, deconvolution_{1,3}d, average_pooling_{1,3}d, max_pooling_{1,3}d, unpooling_{1,3}d (#4025)
      • These are thin wrappers around the corresponding *_nd variants
  • New-style (double-backprop-capable) function support
    • simplified_dropconnect (#3807)
    • zoneout function (#3949)
    • average_pooling_nd, max_pooling_nd, unpooling_nd (#4132)
  • Add testing.patch which calls mock.patch with wraps argument (#3883)
  • Added initial version of AMSGrad (#4032, thanks @kashif!)
  • Added ZippedImageDataset and MultiZippedImageDataset (#4127, thanks @d0i!)
  • Accumulate the gradient to a tuple for lazy add operation (#4136, thanks @LuoYuanke!)
  • Implement time trigger (#4294)
  • Add CuPy support in ParameterStatistics; Add option to skip parameters with NaN values in ParameterStatistics (#4345)
  • Spatial pyramid pooling method specified by string instead of type (#4401)
  • Add n_cell property to NStepRNN family (#4417, thanks @levelfour!)
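The testing.patch helper mentioned above builds on unittest.mock's wraps argument. The underlying behavior can be shown with the standard library alone (our example, not Chainer's API): the patched callable still runs the real implementation, while the mock records calls for assertions.

```python
from unittest import mock

class Adder:
    def add(self, a, b):
        return a + b

obj = Adder()
# With wraps=..., calls pass through to the real method,
# but the mock still records them.
with mock.patch.object(obj, 'add', wraps=obj.add) as spy:
    result = obj.add(2, 3)   # real addition, observed by the spy
```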

Bug Fixes

  • Fix to make it consistent with numpy.split and cupy.split (#4153, thanks @ken-nakanishi!)
  • Fix lstm backward arguments (#4320)
  • Fix BatchMatMulGrad.backward (#4349)
  • Fix test of MultiprocessParallelUpdater (#4368)
  • Fix iDeep import error message not printed as expected (#4381)
  • Fixed a bug that occurs when using iDeep in chainer/link.py (#4382, thanks @ken-nakanishi!)
  • Fix link.to_intel64 for persistent values (#4384)
  • Fix the index page of datasets in documentation (#4407)
  • Fix Variable tests (#4408)
  • Remove Swish function class alias (#4409)
  • Reset Link._device_id to None in to_intel64 (#4436)
  • Fix to disable using iDeep in optimizers if to_intel64 is not called (#4457)

Installation

  • Support using cupy wheels as a dependency (#4256)
  • Prefer pip in installation guide; Add sphinx in docs requirements (#4366)
  • Add cupy-cuda91 wheel dependency (#4392)
  • Use CuPy wheels in Dockerfiles (#4406)
  • Add Dockerfiles for Chainer with iDeep (#4476)

Enhancements

  • Delegate cuDNN convolution operation to CuPy (#3782)
  • Improve extensions part in Trainer tutorial (#3809)
  • Create chainer.optimizer_hooks namespace and move hooks there. (#3977)
  • Add warning for batch normalization when batchsize=1 and train=True (#3996)
  • Optimize rsqrt function with CuPy (#4108)
  • Fix progress bar exceeds 100% (#4152, thanks @ankokumoyashi!)
  • Require "scalar" variables to be 0-dim (#4199)
  • Update examples and tests for chainer.backends.cuda (#4259)
  • Support child-sum TreeLSTM with fewer than 2 children (#4275)
  • Add cuDNN support for clipped_relu (#4307, thanks @tkerola!)
  • Improve error messages of array type check. (#4312, thanks @mizuno-gsinet!)
  • Use the same code for calculating forward and backward probabilities in CTC (#4313)
  • Refactor F.split_axis (#4328)
  • Refactor RNNs (#4341)
  • Detect backward of old-style functions in chainer.grad (#4352)
  • Extract a function that concatenates weight and bias matrices for cuDNN (#4355)
  • Fix numpy dtypes bug in cuda.to_gpu and cuda.copy (#4380, thanks @tkerola!)
  • Improve iDeep import error message for missing shared objects (#4385)
  • Change group argument name of Convolution2D and Deconvolution2D (#4404)
  • Prefer chainer.backends.cuda in recent added codes (#4414)
  • Group imports in the order mentioned in the contribution guide (#4418)
  • Extract stack function from CTC loss (#4430)
  • Change permission of Python files (#4432)
  • Typecheck split_at (#4434, thanks @corochann!)
  • Remove future.types.newint (#4435)
  • Accelerate multi-add for intel64 backend (#4447, thanks @LuoYuanke!)
  • Fix to use CUDNN_BN_MIN_EPSILON (#4466)
  • Deprecate sep argument of PrintHook

Documentation

  • Reorganize tutorial & add new contents (#3241)
  • Update comparison.rst to add tensorboardX as a web interface (#3610, thanks @lanpa!)
  • Document missing environment variables (#3662)
  • Improve extensions part in trainer tutorial (#3809)
  • Update documentation for chainer.backends.cuda (#3981)
  • Fix sphinx build option assignment in Makefile (#4037)
  • Add autosummary check to documents (#4040)
  • Add explanation to the document of concat_examples (#4164)
  • Add link to CuPy upgrade guide (#4188)
  • Updated English for extension documentation (#4215, thanks @rcalland!)
  • Update examples and tests for chainer.backends.cuda (#4259)
  • Prefer pip in installation guide; Add sphinx in docs requirements (#4366)
  • Improve docs of F.get_item (#4375, thanks @naoto0804!)
  • Add F.shift to docs (#4379)
  • Fix typos in unpooling_nd (#4398), in the configuration document (#4474), and in a comment of UpdateRule (#4494)
  • Fix the index page of datasets in documentation (#4407)
  • Fix document in NStepLSTM/GRU/RNN (#4425)
  • Fix docs format in upgrade guide (#4428)
  • Fix Evaluator.device docs (#4437)
  • Add ChainerUI link to comparison table (#4438)
  • Fix underline in docs (#4446)
  • Fix trainer docs (#4464)
  • Fix sectioning of "Datasets" reference (#4486)
  • Fix for optimizer hook namespace change (#4488)
  • Deprecate sep argument of PrintHook (#4471)

Examples

  • Fix some options of MNIST example not working (#3500)
  • Add a tutorial of sequence-to-sequence models (#3984)
  • Update examples and tests for chainer.backends.cuda (#4259)

Tests

  • Add testing.patch which calls mock.patch with wraps argument (#3883)
  • Add test for inconsistent backend input outputs (#4044)
  • Test of RNN in the case when some gradients are None (#4049)
  • Fix test of MultiprocessParallelUpdater (#4368)
  • Revert "Skip MultiprocessParallelUpdater test for timeout" (#4376)
  • Add backward test for old-style scalar variable (#4416)
  • Reduce UserWarning (#4483)
  • Fix doctest failure in chainer.dataset.concat_examples (#4485)
  • Avoid running TestSplitAxis tests on NumPy 1.10 (#4500)
  • Add test cases for various shapes as function input (#3744)
  • Add Codecov.io configuration (#4402)

Others

  • Fix autopep8 config (#4388)
  • Add license to source distribution (#4451)
chainer - v3.5.0

Published by kmaehashi over 6 years ago

This is the release note of v3.5.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Add more Functions with double-backprop support: simplified_dropconnect function (#4403), zoneout (#4423), average_pooling_nd, max_pooling_nd, unpooling_nd (#4405)

Enhancements

  • Accept NumPy ints as a device ID in cuda.to_gpu and cuda.copy (#4386)
  • Raise warning for batch normalization when batchsize=1 and train=True (#4433)
  • Add typechecking to split_at (#4441)
  • Update examples and tests for chainer.backends.cuda (#4463)

Bug Fixes

  • Improve ELU.backward (#4351)
  • Fix crash bug when a child is reporting a value when using MultiprocessParallelUpdater (#4367)
  • Fix BatchMatMulGrad.backward (#4393)

Installation

  • Use pip in installation guide (#4396)
  • Add sphinx in docs requirements (#4396)

Documentation

  • Fix sphinx build option assignment in Makefile (#4387)
  • Add autosummary check to documents (#4399)
  • Fix typo in F.unpooling_nd documentation (#4411)
  • Improve F.get_item documentation (#4415, thanks @naoto0804!)
  • Update the comparison with other frameworks on web interfaces (#4439, thanks @lanpa!)
  • Fix markup in the upgrade guide (#4440)
  • Fix Evaluator.device documentation (#4442)
  • Fix English wording for extension documentation (#4444)
  • Add ChainerUI link to comparison table (#4445)
  • Add link to CuPy upgrade guide (#4462)
  • Document missing environment variables (#4465)
  • Fix document in NStepLSTM/GRU/RNN (#4490)

Examples

  • Fix some options of MNIST example not working (#4390)

Tests

  • Test of RNN in the case when some gradients are None (#4420)
  • Add test cases for various shapes as function input (#4389)
  • Fix autopep8 config (#4391)
  • Add Codecov.io configuration (#4410)

Others

  • Add license to source distribution (#4452)
chainer - v4.0.0b4

Published by hvy over 6 years ago

This is the release of v4.0.0b4. See here for the complete list of solved issues and merged PRs.

Highlights

  • This release provides experimental support for iDeep, which accelerates DNN computations on Intel CPUs. To run your Chainer code with iDeep enabled: install iDeep with pip install ideep4py, set the environment variable CHAINER_USE_IDEEP="auto" (e.g. export CHAINER_USE_IDEEP="auto"), add model.to_intel64() to your code (where model is a Chain object; this step is only needed if you use one of the supported optimizers), and run the code in CPU mode. Currently the following functions and optimizers are supported:
    • Functions: F.relu, F.linear, F.local_response_normalization, F.batch_normalization, F.split_axis, F.average_pooling_2d, F.lstm, F.tree_lstm, F.convolution_2d, F.deconvolution_2d, F.max_pooling_2d, F.dropout, F.concat
    • Optimizers: optimizers.SGD, optimizers.MomentumSGD
  • Starting from v4.0.0b4, CuPy provides wheel packages. Note that the usual source-based installation is still used if you simply upgrade Chainer; see the CuPy v4.0.0b4 Release Notes for how to upgrade CuPy with a wheel package.
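The iDeep setup steps above can be sketched as follows. The chainer-specific calls are shown as comments because they require chainer and ideep4py to be installed, and TrainingChain is a hypothetical model class:

```python
import os

# "auto" enables iDeep only when ideep4py is importable; the variable
# must be set before chainer reads its configuration.
os.environ["CHAINER_USE_IDEEP"] = "auto"

# import chainer
# model = TrainingChain()   # hypothetical Chain subclass
# model.to_intel64()        # convert parameters for iDeep (only needed
#                           # when using a supported optimizer)
```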

Changes without Compatibility

Please see the Upgrade Guide for details.

  • Avoid keeping reference to function nodes in CupyMemoryProfileHook (#4300)
  • Avoid keeping reference to function nodes in TimerHook (#4323)

New Features

  • Variable statistics plot report (#3601)
  • New functions
    • erf (#3846)
    • floordiv and rfloordiv (#3967)
    • tensordot (#4253, thanks @anaruse!)
  • Deprecated array util (#3980)
  • New style ConvolutionND/DeconvolutionND (#4110)
  • Add support for post-update hook registration to optimizers (#4159, thanks @tkerola!)
  • iDeep support (#4276, #4277, #4278, #4279, #4280, #4281, #4282, #4283, #4284, #4285, #4286, #4288, #4289)

Improvements

  • Simplify CTC code (#3842)
  • Refactor RNNs (#4109)
  • Fix document about links in chainer.functions (#4192)
  • Optimize stack (#4203)
  • Raise friendly error for multi-gpu doctest failures (#4217)
  • Skip unnecessary backward computation in matmul (#4243, thanks @anaruse!)
  • Improve CTC loss (#4252)
  • Optimize CTC backward (#4258)
  • Use variable.array instead of variable.data in reporter.py (#4260, thanks @crcrpar!)
  • Add cudnn mode selection in softmax (#4261, thanks @anaruse!)
  • Fix creation method of identity matrix (#4263)
  • Enable opt = optimizers.SGD().setup() syntax (#4290)
  • Avoid keeping reference to function nodes in CupyMemoryProfileHook (#4300)
  • Avoid keeping reference to function nodes in TimerHook (#4323)
  • Fix some inconsistency in gradient variable names (#4331, thanks @mizuno-gsinet!)
  • Make ideep disabled by default (#4337)
  • Remove redundant stack (#4338)
  • Remove input type consistency checks (#4339)
  • Accept concat array in NStepRNNs (#4344)
  • Prefer data type objects over character codes in NumPy (#4350)
  • Support cupyx namespace (#4363)

Bug Fixes

  • Fix crash bug when a child is reporting a value when using MultiprocessParallelUpdater (#3402, thanks @tkerola!)
  • Simplify CTC code (#3842)
  • Fix backward_accumulate to accept only tuples (#4186)
  • Fix backward of NormalizeL2 (#4190)
  • Fix backward of BatchRenormalizationFunction and add tests (#4191)
  • Improve ELU.backward (#4347)
  • Cast to int type in split_axis (#4348)
  • Skip multiprocess_parallel_updater test for timeout (#4370)

Documentation

  • Fix casing of header in README (#3816)
  • Deepcopy to Link.copy (#4066)
  • Update example list (#4150)
  • Fixing a minor notation error on the description of n_step_bigru (#4157, thanks @Yuichiroh!)
  • Update pydoc for Optimizer setup syntax sugar (#4167)
  • Add Slack Chat link to docs and issue template (#4178)
  • Fix document about links in chainer.functions (#4192)
  • Update documentation and examples (#4208)
  • Fix description of built-in batch converters (#4236)
  • Add missing extension doc ref (#4245)
  • Fix typo: backporp -> backprop (#4291)
  • Fix typo: word2vec example (#4305, thanks @koki0702!)
  • Fix to pass doctest around optimizer.setup (#4306)
  • Apply https in supported websites in the document (#4315)
  • Fix invisible indentation with tabs in the document (#4316)
  • Use version in external URL (#4317)
  • Fix docstring (#4322)
  • Add function hook changes to upgrade guide (#4324)
  • Fix error messages for incompatible arrays (#4346)
  • Prefer data type objects over character codes in NumPy (#4350)

Examples

  • Update documentation and examples (#4208)
  • Fix unused argument (--out) of seq2seq example (#4238, thanks @okayu9!)
  • Fix typo in example code (#4239, thanks @rcalland!)

Tests

  • Fix initialization of random vectors in the test for RNNs (#4048)
  • Raise friendly error for multi-gpu doctest failures (#4217)
  • Refactor test_batch_normalization (#4224)
  • Refactor test_dropout (#4225)
  • Refactor test_concat (#4226)
  • Refactor test_split_axis (#4227)
  • Refactor test_lstm (#4228)
  • Refactor test_local_response_normalization (#4229)
  • Refactor test_max_pooling_2d (#4233)
  • Refactor test_average_pooling_2d (#4234)
  • Avoid mutable default argument (#4359)
  • Skip multiprocess_parallel_updater test for timeout (#4370)
chainer - v3.4.0

Published by mitmul over 6 years ago

This is the release note of v3.4.0. See here for the complete list of solved issues and merged PRs.

Enhancement

  • Raise appropriate error when Adam.lr is evaluated before updating starts (#4207)
  • Remove redundant stack (#4340)

Bug fixes

  • Serialize t of UpdateRule (#4184, #4214)
  • Fix a corner-case bug in gradient_check (#4202)
  • Fix backward of NormalizeL2 (#4268)
  • Fix the lack of type checkings in CTC loss (#4273)
  • Fix backward of BatchRenormalizationFunction and add tests (#4293)
  • Fix backward_accumulate to accept only tuples (#4334)

Examples

  • Fix typo in example code (#4244, thanks @rcalland!)
  • Fix unused argument (--out) of seq2seq example (#4297, thanks @okayu9!)
  • Update example list (#4254)

Documentation

  • Improve documentation of gru (#3928)
  • Fix the docstring of Variable.backward() (#4196)
  • Prefix class name to attributes in pydoc (#4209)
  • Add Slack Chat link to docs and issue template (#4255)
  • Fix document about links in chainer.functions (#4269)
  • Deep copy to Link.copy (#4295)
  • Fix notation error (#4321, thanks @Yuichiroh!)
  • Use version in external URL (#4327)
  • Fix docstring (#4330)
  • Update code in tutorials to use init_scope() in the model definition (#4231)

Tests

  • Fix trigger test imports (#4213)
  • Raise friendly error for multi-GPU doctest failures (#4232)
  • Fix initialization of random vectors in the test for RNNs (#4335)
  • Avoid mutable default argument (#4361)
chainer - v4.0.0b3

Published by kmaehashi over 6 years ago

This is the release of v4.0.0b3. See here for the complete list of solved issues and merged PRs.

Highlights

  • Adam optimizer has been updated to support AdamW. See #4050 for details.
  • Shift function has been added. See #4041 for details.
  • Loss scaling option has been added to Variable.backward. You can also use it via Updaters.
  • Official Docker images now use CUDA 8.0. See #3902 for details.
  • A method for setting up an optimizer (introduced in the previous beta) has been reverted to avoid breaking backward compatibility; instead, we have introduced simpler syntactic sugar. See #4141 for details.
  • Upgrade Guide for Chainer v2 and v3 users has been added.
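The syntactic sugar from #4141 relies on setup returning the optimizer itself. A minimal sketch of the pattern (not Chainer's actual Optimizer class):

```python
# Illustrative "setup returns self" pattern; class and attribute names
# here are simplified stand-ins, not Chainer's real API surface.
class Optimizer:
    def setup(self, link):
        self.target = link
        return self  # returning self enables method chaining

# one-liner construction-and-setup, mirroring
# optimizer = chainer.optimizers.SGD().setup(model)
opt = Optimizer().setup("model")
```

The traditional two-step call (construct, then `optimizer.setup(model)`) keeps working unchanged, since the return value can simply be ignored.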

Changes without Compatibility

  • Fix Dockerfile to be able to train MNIST with GPU (#3902, thanks @mrteera!)
  • Optimizer.setup returns self to enable method chaining (#4141)

New Features

  • Loss Scaling (#3544, thanks @anaruse!)
  • Add shift function (#4041)
  • Add AdamW optimizer (#4050)
  • Optimizer.setup returns self to enable method chaining (#4141)
  • Add more Functions with double-backprop support: matmul (#3768), huber_loss (#3867)
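For reference, the AdamW idea added in #4050 decouples weight decay from the gradient-based Adam update. A minimal NumPy sketch of the general algorithm, not Chainer's exact implementation:

```python
import numpy as np

def adamw_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, wd=1e-2):
    # standard Adam moment updates
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    mhat = m / (1 - b1 ** t)   # bias correction
    vhat = v / (1 - b2 ** t)
    # weight decay is applied directly to w, decoupled from the Adam term
    w = w - lr * (mhat / (np.sqrt(vhat) + eps) + wd * w)
    return w, m, v

w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
w, m, v = adamw_step(w, np.full(3, 0.5), m, v, t=1)
```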

Improvements

  • Move StandardUpdater and ParallelUpdater under chainer.training.updaters namespace (#3037)
  • Raise RuntimeError when Adam.lr is evaluated before updating starts (#3931)
  • Minor change to enable using Chainer with PyPy (#4072, thanks @ljmanso!)
  • Add get_training_length to IntervalTrigger (#4079, thanks @himkt!)
  • Reduce transpose operations in F.linear (#4093, thanks @jzhoulon!)
  • Remove an empty retain in F.identity (#4154)
  • Write comment about direction normalization in check_backward (#4156)
  • Fix GoogLeNet to define the network using init_scope (#4171)
  • Use log1p in CTC (F.connectionist_temporal_classification) for stable computation (#4194)
  • Optimize F.separate (#4195)
  • Optimize CTC (F.connectionist_temporal_classification) (#4201)

Bug Fixes

  • Update VariableNode.data if new data is assigned (#3869)
  • Add serialize to Summary (#4005, thanks @Hakuyume!)
  • Fix a corner-case bug in gradient_check (#4015)
  • Serialize t of UpdateRule (#4026)
  • Fix dilated convolutions to work with TensorCore in cuDNN7 (#4064, thanks @anaruse!)
  • Fix inconsistency on transferring copied link with uninitialized variables to GPU. (#4075)
  • Fix GradientMethod not to raise AttributeError caused by new optimizer setup (#4077)
  • Avoid using np.stack in grouped convolution/deconvolution in CPU mode (#4085)
  • Avoid using np.stack in examples (#4087)
  • Fix seq2seq example not working with GPU (#4117)
  • Change the size of out1 in inc4c and inc4d (#4121, thanks @takaaki82!)
  • Call iterators' finalizers in Evaluator's finalizer (#4145)
  • Support Caffe global pooling functions (#4161)

Examples

  • Add a text classification example (#3029)
  • Update the VAE examples (#4009)
  • Add image captioning example using MSCOCO (#4076)

Documentation

  • Fix the docstring of Variable.backward() (#3496)
  • Clarify color order of arguments in the documentation of vision models (GoogleNet, ResNet, VGG) (#3760, thanks @belltailjp!)
  • Fix incorrect use of single backslashes in docstrings (#3948)
  • Improve docs of huber_loss (#3950)
  • Fix typo in trainer tutorial (#4095)
  • Add undocumented functions and links to the document (#4101)
  • Fix typos in docs (#4126)
  • Add upgrade guides for v3 and v4 (#4151)
  • Fix docs to use NumPy 1.14 textual representation (#4177)
  • Prefix class name to attributes in pydoc (#4206)

Installation

  • Fix Dockerfile to be able to train MNIST with GPU (#3902, thanks @mrteera!)
  • Unify requirements (#3951)

Tests

  • Fix a corner-case bug in gradient_check (#4015)
  • Simplify RNN tests that check if cuDNN is called (#4047)
  • Fix test_init_docstring to use importlib to find package (#4091)
  • Fix trigger tests to run independently (#4103)
  • Separate unit test for NumPy 1.14 array textual representation (#4172)
chainer - v3.3.0

Published by bkvogel over 6 years ago

This is the release of v3.3.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Add more Functions with double-backprop support: bilinear (#4023), Gaussian (#4024), det and batch_det (#4057), matmul (#4168), huber_loss (#4198)

Improvements

  • Backends subpackage (#4021)
    • This moves chainer.cuda to chainer.backends.cuda. However, note that chainer.cuda is still available as well.
  • Fix cuda import path (#4042)
  • Avoid unnecessary hasattr (#4060)
  • Fix cuda import to use chainer.backends (#4099)
  • Minor change to enable using Chainer with PyPy (#4104, thanks @ljmanso!)
  • Reduce transpose in F.linear (#4146, thanks @jzhoulon!)
  • Write comment about direction normalization in gradient_check.check_backward (#4158)
  • Remove empty retain (#4181)
  • Use log1p in ctc (F.connectionist_temporal_classification) (#4197)
  • Fix GoogLeNet to define network using init_scope (#4210)
  • Optimize ctc (#4212)

Bug fixes

  • Fix debug_print() with empty Variable (#4063)
  • Avoid using np.stack in examples (#4090)
  • Update VariableNode.data if new data is assigned (#4100)
  • Fix seq2seq example not working with GPU (#4118)
  • Add serialize to Summary (#4119, thanks @Hakuyume!)
  • Change the size of out1 in inc4c and inc4d (#4140, thanks @takaaki82!)
  • Call iterators' finalizers in Evaluator's finalizer (#4148)
  • Fix inconsistency on transferring copied link with uninitialized variables to GPU. (#4193)

Documentation

  • Fix the example in the document of Reporter (#4058)
  • Fix incorrect use of single backslashes in docstrings (#4088)
  • Fix typo in trainer tutorial (#4098)
  • Clarify color order of arguments in ImageNets documentation (#4102, thanks @belltailjp!)
  • Fix undocumented functions and links (#4128)
  • Fix typos in docs (#4160)
  • Add upgrade guide for v3 and v4 (#4176)
  • Fix docs to use NumPy 1.14 textual representation (#4183)

Examples

  • Update VAE examples (#4138)

Installation

  • Unify requirements (#4125)

Tests

  • Simplify RNN tests that check if cuDNN is called (#4086)
  • Fix test_init_docstring to use importlib to find package (#4094)
  • Separate unit test for NumPy 1.14 array textual representation (#4173)
chainer - v4.0.0b2

Published by niboshi almost 7 years ago

This is the release of v4.0.0b2. See here for the complete list of solved issues and merged PRs.

  • In this release, you can set up an optimizer with a simpler syntax.
    In previous versions, the code would be written as

    optimizer = chainer.optimizers.SGD()
    optimizer.setup(model)
    

    We now also allow it to be written more concisely as

    optimizer = chainer.optimizers.SGD(link=model)
    

    The link argument should be specified as a keyword argument; otherwise, some optimizers could wrongly interpret it as a hyperparameter (e.g. lr). Passing it as a keyword argument will be enforced from the next release.

  • We introduced a check for mixed use of CuPy and NumPy arrays in the outputs returned from functions. Although such mixing has always been forbidden, functions that did it may have worked without any errors; with the introduction of this check, those functions can now raise errors.
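The mixed-backend check can be pictured as follows; the function name and error message are illustrative, not Chainer's internal API:

```python
import numpy as np

def check_same_backend(arrays):
    # All output arrays must come from the same top-level array module
    # (e.g. all numpy or all cupy).
    modules = {type(a).__module__.split(".")[0] for a in arrays}
    if len(modules) > 1:
        raise TypeError("mixed array backends in outputs: %r" % sorted(modules))

check_same_backend([np.zeros(2), np.ones(3)])  # fine: both NumPy
```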

Known Issues

  • Grouped convolution/deconvolution does not work in CPU mode with NumPy 1.9 (#4081). This issue is planned to be resolved in the next release.

Changes without compatibility

  • Check for mixed use of CuPy/NumPy ndarrays in functions (#4029)

New Features

  • Overlap data transfer and GPU kernels (#3336, thanks @anaruse!)
  • Add early stopping (#3351, thanks @himkt!)
  • Enable optimizer model setup with instantiation (#3488)
  • Grouped convolution (#3494, thanks @anaruse!)
  • Add extensions as a trainer argument (#3528, thanks @neka-nat!)
  • Support parameter update in FP32 (#3708, thanks @anaruse!)
  • sign function (#3678)
  • Add more Functions with double-backprop support: maximum (#3533), im2col (#3587), batch_l2_norm_squared (#3642), expm1 function (#3644), linear_interpolate (#3663), mean_absolute_error (#3672), squared_error (#3691), sigmoid_cross_entropy (#3705), absolute_error (#3707), Gaussian (#3759), det, batch_det (#3767), cross_covariance (#3866), normalize (#3870), bilinear (#3917), negative_sampling (#3992)
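The early stopping added in #3351 halts training once a monitored metric stops improving for a given patience. A toy sketch of the idea (this is not the API of Chainer's trigger):

```python
def early_stop_index(losses, patience=3):
    # Return the index at which training would stop, or None if the
    # patience threshold is never exhausted.
    best, wait = float("inf"), 0
    for i, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0   # improvement: reset the counter
        else:
            wait += 1
            if wait >= patience:
                return i           # no improvement for `patience` evaluations
    return None
```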

Improvements

  • User-friendly error checks for pooling input (#3555)
  • Verbose error messages in gradient check (#3833)
  • Verbose error messages in basic_math (#3839)
  • Refactor (de)convolution_2d (#3848)
  • Allow to_cpu and to_gpu to accept list, tuple and None (#3850)
  • Move should_use_cudnn and should_use_cudnn_tensor_core to chainer.cuda (#3851)
  • Skip unnecessary array util ops (#3932)
  • Avoid unnecessary hasattr (#3952)
  • Add chainer.backends subpackage (#3974)
  • Fix cuda import path (#4036)
  • Remove an unused function (#4051)

Bug Fixes

  • Fix backprop dim for unused lstm states on gpu (#3042, thanks @andreasgrv!)
  • Forget inputs as Variables (#3788)
  • Skip cuDNN in deconvolution_2d if dilate != 1 and deterministic (#3875)
  • Fix debug_print() with empty Variable (#4018)
  • Avoid mixing cupy.ndarray and numpy.ndarray in n_step_xxx links (#4030)
  • Fix F.convolution_2d and F.deconvolution_2d to work without cuDNN (#4062)
  • Fix test failure with cuDNN v6 (#4078)

Examples

  • Add --noplot option in MNIST example (#3925)

Documentation

  • Add word2vec tutorial (#3040)
  • Add ptb tutorial (#3073)
  • Replace array in type lists with numpy or cupy ndarray (#3259)
  • Improve documents of the debug mode (#3347)
  • Function references in docs to point to FunctionNode (#3626)
  • Fix the example in the documentation of Reporter (#3795)
  • Improve documentation of GRU (#3858)
  • Fix documentation in n_step_gru, n_step_bigru, n_step_bilstm, n_step_rnn and n_step_birnn (#3859)
  • Fix CuPy requirement version (#3899)
  • Add expm1 to the documentation. (#3900)
  • Fix a formula in the tutorial (#3909, thanks @keisuke-nakata!)
  • Fix typo (#3914, thanks @okayu9!)
  • Fix example code in the trainer tutorial (#3926, thanks @keisuke-nakata!)
  • Fix doctest in the trainer tutorial (#3942)
  • Fix GlorotUniform documentation (#3953, thanks @F-Tag!)
  • Add StatefulZoneoutLSTM to documentation (#3957)
  • Small fix for the seq2seq example (#3964)
  • Add ConcatWithAsyncTransfer to the reference manual (#3975, #3979)
  • Fix a code fragment in contribution guide (#3982, thanks @anaruse!)
  • Fix documentations of negative sampling (#3988)
  • Add dilate argument to documentation (#4011)
  • Fix broken link in chainer.function.pad documentation (#4028)

Tests

  • Add ability to check non-differentiable inputs in gradient_check.numerical_grad (#3551, #4003)
  • Refactor unit tests for various backend configuration (#3862)
  • Catch all exceptions in parameterized test (#3876)
  • Avoid NaN in test of F.classification_summary (#3927)
  • Avoid NaN error in test_pad_sequence in debug mode (#3946)
  • Show original traceback in testing.parameterized (#3954)
  • Fix macOS test in Travis (#3990)
  • Simplify to_gpu in RNN tests (#4046)
  • Adjust numerical tolerances: convolution_nd (#3910), im2col (#3933), triplet (#3939), linear_interpolate (#3944)
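Several entries above touch gradient_check, which compares analytic gradients against central finite differences. A bare-bones NumPy sketch of the numerical side (Chainer's gradient_check.numerical_grad is considerably more general):

```python
import numpy as np

def numerical_grad(f, x, eps=1e-6):
    # Central differences, one coordinate at a time.
    grad = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d.flat[i] = eps
        grad.flat[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return grad

x = np.array([1.0, 2.0])
g = numerical_grad(lambda v: (v ** 2).sum(), x)  # analytic gradient is 2x
```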
chainer - v3.2.0

Published by kmaehashi almost 7 years ago

This is the release of v3.2.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Add get_variable_or_none to improve backward performance (#3843)
  • Add sign function (#3911)
  • Functions with double-backprop support: vstack (#3824), expm1 (#3901), batch_l2_norm_squared (#3904), maximum (#3905), mean_absolute_error (#3906), squared_error (#3907), im2col (#3920), linear_interpolate (#3923), normalize (#3961), cross_covariance (#3962), absolute_error (#3995), sigmoid_cross_entropy (#4007), copy (#4019)

Improvements

  • Make caffe model loading faster (~2.4 times on a benchmark) (#3779, thanks @grafi-tt!)
  • Use stderr for Downloading... message (#3891)
  • Verbose error message in basic_math (#3934)
  • Verbose error in gradient check (#3943)
  • Skip unnecessary array util ops (#3987)
  • Move should_use_cudnn and should_use_cudnn_tensor_core to chainer.cuda (#3989)
  • Fix cudnn path (#3999)
  • Allow list, tuple and None in to_cpu and to_gpu (#4000)
  • User-friendly error checks for pooling input (#4001)
  • Show original traceback in testing.parameterized (#4020)

Bug Fixes

  • Small fix for the example of seq2seq (#3971)
  • Forget inputs as Variables (#4022)
  • Avoid mixing cupy.ndarray and numpy.ndarray in n_step_xxx links (#4034)
  • Fix backprop dim for unused lstm states on gpu (#4043, thanks @andreasgrv!)

Documentation

  • Fix sigmoid_cross_entropy doc about t (#3884)
  • Add expm1 to the document. (#3912)
  • Replace array in type lists with numpy or cupy ndarray (#3913)
  • Fix typo (#3918, thanks @okayu9!)
  • Fix Link tutorial’s linear layer document math (#3919, thanks @keisuke-nakata!)
  • Fix trainer tutorial evaluate example code (#3941, thanks @keisuke-nakata!)
  • Fix GlorotUniform doc (#3956, thanks @F-Tag!)
  • Fix docs in n_step_xxx (#3958)
  • Add StatefulZoneoutLSTM to docs (#3960)
  • Small fix for the example of seq2seq (#3971)
  • Improve debug mode documentation (#3973)
  • Fix minor document bug (#3985, thanks @anaruse!)
  • Fix documentations of negative sampling (#3993)
  • Fix doctest in trainer tutorial (#3998)
  • Fix some markup (#4008)
  • Fix broken link in chainer.function.pad document (#4033)
  • Function references in docs to point to FunctionNode (#4039)

Examples

  • Add --noplot option in MNIST example (#3930)

Tests

  • Fix test instability for trigonometric functions (#3894)
  • Adjust convolution_nd testing tolerance. (#3916)
  • Adjust tolerances for im2col tests (#3935)
  • Adjust tolerance for triplet loss function test (#3945)
  • Adjust tolerance for F.linear_interpolate double backward tests (#3947)
  • Useful error message for parameterized tests (#3959)
  • Catch all exceptions in parameterized test (#3963)
  • Avoid NaN in test of F.classification_summary (#3970)
  • Fix macOS test in Travis (#3991)
  • Avoid NaN error in test_pad_sequence in debug mode (#3997)
  • Show original traceback in testing.parameterized (#4020)
  • Refactor unit tests for various backend configuration (#4035)
  • Simplify to_gpu in RNN tests (#4053)
chainer - v3.1.0

Published by gwtnb almost 7 years ago

This is a minor release. See here for the complete list of solved issues and merged PRs.

Spotlight features

  • A lot of new double-backpropable functions have been added.
  • Autotuner for cuDNN convolution functions is now available. Just add the single line chainer.global_config.autotune = True to optimize your ConvNets.

New Features

  • New functions: F.fix (#3834)
  • Functions with double-backprop support: where (#3505), softplus (#3593), clipped_relu (#3594), broadcast (#3650), hstack (#3666), dstack (#3890), square (#3681), ELU (#3730), minmax (#3732), log, log2, log10 (#3733), abs (#3734), sqrt (#3738), inv, batch_inv (#3743), div (#3750), pow (#3804), clip (#3805), resize_images (#3806), PReLU (#3814), minimum (#3815), triplet (#3817), floor (#3819), squared_difference (#3823), fliplr (#3827), flipud (#3828), fmod (#3834), pad_sequence (#3835), log1p (#3847), hard_sigmoid (#3849), CReLU (#3852), rdiv (#3857), ceil (#3860), logsumexp (#3877), cosh, sinh (#3879), depth2space, space2depth (#3880), sin, cos, tan, arcsin, arccos, arctan, arctan2 (#3881), tile (#3825), pad (#3855)
  • cuDNN Convolution functions Autotuner (#3841)
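Double-backprop support means a gradient can itself be differentiated. Numerically, a second derivative can be sanity-checked with a central second difference; a quick illustration in plain Python, unrelated to Chainer's implementation:

```python
def second_diff(f, x, h=1e-4):
    # f''(x) ~= (f(x + h) - 2 f(x) + f(x - h)) / h^2
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# d^2/dx^2 of x^3 is 6x, so the value at x = 2 should be close to 12
second_diff(lambda v: v ** 3, 2.0)
```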

Improvements

  • Relax int type restriction (#3700)
  • Allow to_gpu and to_cpu to accept NumPy scalars (#3748)
  • Support file-like object in npz serializer (#3758, #3882)
  • Avoid zero-division warning in F.r2_score (#3777)
  • Raise user-friendly error when FunctionNode is used like Function object (#3780)
  • fix F.inv to raise exception when input has singular matrices (#3784)
  • Remove unnecessary branch in minimum forward. (#3836)
  • Check too small eps in Adam and RMSprop optimizers (#3783)

Bug fixes

  • Prevent ZeroDivisionError in softmax_cross_entropy when input size is 0 (#3656, thanks @knorth55!)
  • Fix xxx_pooling_nd causes CUDNN_STATUS_NOT_SUPPORTED for dims > 3 (#3722)
  • Fix LSTM bias initialization (#3731)
  • Fix the problem with resuming training when switching the freezing layers (#3800, thanks @jinjiren!)
  • Avoid zero division error in linear init call (#3885)

Documents

  • Add tutorials
    • Trainer tutorial (#3803)
    • Trainer extensions tutorial (#3646)
  • Improve TupleDataset documentation (#3438)
  • Documentation fix in FunctionNode (#3444)
  • Improve docs of n_step_lstm (#3471)
  • Fix dead links to modules in tutorial (#3501)
  • Improve doc of sum (#3502, thanks @akitotakeki!)
  • Add get_conv_outsize and get_deconv_outsize to doc (#3597)
  • Improve docs of huber_loss (#3605, thanks @naoto0804!)
  • Improve docs of sigmoid_cross_entropy (#3606, thanks @naoto0804!)
  • Improve docs of contrastive and triplet (#3607, thanks @naoto0804!)
  • Fix documentation error in Function (#3637)
  • Add experimental warning in docstring (#3648)
  • Add a note to the doc of Evaluator.evaluate (#3667)
  • Fix CuPy intersphinx mapping (#3687)
  • Document get_svhn (#3690)
  • Fix CuPy overview link not working (#3695)
  • Add CUDAProfileHook and CupyMemoryProfileHook to the reference (#3709, thanks @ronekko!)
  • Fix split_axis documentation (#3712)
  • Improve doc of context managers (#3719)
  • Improve doc of configuration flags (#3720)
  • Fix contribution guide for test framework change (#3726)
  • Fix case in doc (#3749)
  • Fix doc in Forget (#3773)
  • Improve docs of F.forget (#3791)
  • Document initializer criteria (#3801)
  • Sort out navigation menu (#3812)
  • Fix doctest failure in trainer tutorial (#3888)
  • Fix typo (#3635, #3638, #3639)
  • Fix doctest (#3647, #3651)

Tests

  • Move to PyTest
    • Replace the test framework with PyTest (#3591, #3602, #3694)
    • Configure AppVeyor to use PyTest (#3596)
    • Use pytest-warnings to set warnings configuration (#3778)
    • Remove nose dependency in tests (#3724)
  • Use Python 3.4.4 on Travis OSX Python 3.4 case (#3629)
  • Fix test_init_docstring (#3636)
  • Fix math function testing helper to support new style functions. (#3665)
  • Run OS X test only on master/stable branch to avoid delay (#3676)
  • Always cast all inputs to given dtype in gradient check (#3679)
  • Fix decorators to allow users to filter test cases by number of GPUs (#3683)
  • Fix to skip GPU tests on AppVeyor (#3693)
  • Fix math function test helper to support double backward test of linear functions (#3706)
  • Richer gradient check output (#3713)
  • Check deprecation warning in Travis (#3721)
  • Fix decorators to allow users to filter test cases by number of GPUs (#3723)
  • Use python 3.5 for doctest (#3727)
  • Fix normalization warning in F.average test (#3729)
  • Fix F.inv test does not test type error as expected (#3775)
  • Directional derivative (#3790)
  • Add double-backward test for F.inv and F.batch_inv (#3820)
  • Fix test condition in function tutorial (#3873)
  • Setup random of Python library in testing/random (#3655)
  • Fix coveragerc to measure branch coverage and only target chainer module (#3710)
  • Test stability fixes
    • F.upsampling_2d (#3826)
    • F.deconvolution2d (#3640)
    • F.ceil and F.floor (#3439)
    • F.roi_pooling_2d (#3381)
    • F.roll_axis (#3384)
    • F.depth2space, F.space2depth (#3893)
    • F.fmod (#3838)

Others

  • Warn about vecLib on Mac OS X (#3692)
  • Update stable version link in README (#3746)
  • Improve version embedding (#3739)
  • Rename plot -> plt (#3714, thanks @Hakuyume!)

Install

  • Remove requirements for unit testing (#3682)
chainer - v4.0.0b1

Published by kmaehashi almost 7 years ago

This is the release of v4.0.0b1. See here for the complete list of solved issues and merged PRs.

Spotlight features

  • A lot of new double-backpropable functions have been added.
  • Dilated convolution got much faster by supporting cuDNN v6’s dilated convolution function.
  • Autotuner for cuDNN convolution functions is now available. Just add the single line chainer.global_config.autotune = True to optimize your ConvNets.
  • A new example of linear-chain CRF has been merged. See https://github.com/chainer/chainer/tree/master/examples/pos

Changes without compatibility

  • Use a distinct RandomState in the prefetch thread of MultiProcessIterator (#3575, thanks @grafi-tt!)
  • Remove requirements for unit testing (#3604)
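The MultiProcessIterator change (#3575) gives the prefetch thread a dedicated RandomState instead of sharing the global one. The reproducibility benefit can be illustrated with NumPy alone:

```python
import numpy as np

# Two dedicated, identically seeded RandomState objects produce the same
# shuffle order, regardless of what else draws from the global np.random.
rs_a = np.random.RandomState(42)
rs_b = np.random.RandomState(42)
np.random.permutation(5)        # unrelated draw from the shared global state
order_a = rs_a.permutation(5)
order_b = rs_b.permutation(5)   # identical to order_a
```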

New Features

  • New functions: F.fix (#3645), F.cumsum (#3535, thanks @ronekko!), F.gumbel_softmax (#3359, thanks @ishihara1989!)
  • New dataset loader: Fashion MNIST (#3292, thanks @kashif!)
  • New iterator: MultithreadIterator (#3081)
  • Functions with double-backprop support: broadcast (#3420), hstack (#3489), scatter_add (#3493), ELU (#3504), CReLU (#3506), hard_sigmoid (#3507), abs (#3530), div (#3531), log, log2, log10 (#3537), floor (#3539), PReLU (#3540), dstack (#3546), vstack (#3547), tile (#3548), squared_difference (#3549), fliplr (#3565), flipud (#3566), depth2space, space2depth (#3568), pad_sequence (#3573), pad (#3577), square (#3578), sqrt (#3581), resize_images (#3608), cosh, sinh (#3611), ceil (#3612), logsumexp (#3613), log1p (#3614), rdiv (#3615), inv, batch_inv (#3616), minimum (#3617), sin, cos, tan, arcsin, arccos, arctan, arctan2 (#3618), minmax (#3643), fmod (#3645), clip (#3737), pow (#3755)
  • cuDNN v6 dilated convolution (#2858, thanks @anaruse!)
  • Use CuPy default memory pool (#3329)
  • Add ignore option to NpzDeserializer (#3716)
  • Add fan_option to HeNormal initializer (#3482, thanks @yuyu2172!)
  • Define dummy cuda.cupy when cupy is not available (#3558)
  • Autotuner for cuDNN Convolution functions (#3669)

Improvements

  • Use correct module path (#3364)
  • Richer gradient check output (#3425)
  • Relax int type restriction (#3466)
  • Support file-like object in npz serializer (#3513)
  • Make caffe model loading faster (#3523, thanks @grafi-tt!)
  • Always cast all inputs to given dtype in gradient check (#3561)
  • Use a distinct RandomState in the prefetch thread of MultiProcessIterator (#3575, thanks @grafi-tt!)
  • Raise user-friendly error when FunctionNode is used like Function object (#3598)
  • Improve version embedding (#3628)
  • Rename plot -> plt (#3698, thanks @Hakuyume!)
  • Avoid zero-division warning in F.r2_score (#3703)
  • Allow to_gpu and to_cpu to accept NumPy scalars (#3741)
  • Check too small eps in Adam and RMSprop optimizers (#3753)
  • Fix F.inv to raise exception when input has singular matrices (#3770)
  • Use correct module path (#3781)
  • Remove function call to up performance (#3786)
  • Remove unnecessary branch in minimum forward (#3818)
  • Use stderr for “Downloading…” message of dataset (#3886)

Bug fixes

  • Fix the problem with resuming training when switching the freezing layers (#3125, thanks @jinjiren!)
  • Stop checking docstring compliance with autosummary (#3285)
  • Fix LSTM bias initialization (#3333)
  • Fix xxx_pooling_nd causes CUDNN_STATUS_NOT_SUPPORTED for dims > 3 (#3553)
  • Prevent ZeroDivisionError in softmax_cross_entropy when input size is 0 (#3559, thanks @knorth55!)
  • Fix document directive typo (#3627)
  • Setup random of Python library in testing/random (#3630)
  • Fix split_axis documentation (#3699)
  • Remove direction (#3716)
  • Use cudnn.DropoutStates instead of calling API directly (#3717)
  • Ignore DeprecationWarning caused in Theano (#3785)
  • Remove function call to up performance (#3786)
  • Add dilation to dict key and rename internal function (#3831)
  • Fix sigmoid_cross_entropy doc about t (#3840)
  • Use context to close opened file (#3853)
  • Fix test condition in function tutorial (#3854)
  • Avoid zero division error in linear init call (#3871)
  • Fix test condition in function tutorial (#3873)

Documentation

  • Add tutorial on trainers (#2620) and extensions tutorial (#3138)
  • Document improvements: sigmoid_cross_entropy (#3562, thanks @naoto0804!), huber_loss (#3563, thanks @naoto0804!), contrastive triplet (#3564, thanks @naoto0804!), context managers (#3619), configuration flags (#3620), F.forget (#3776)
  • New registered documents: get_conv_outsize, get_deconv_outsize to doc (#3379), CUDAProfileHook and CupyMemoryProfileHook to the reference (#3487, thanks @ronekko!), get_svhn (#3670), criteria of initializers (#3327), Evaluator.evaluate (#3203)
  • Fix the wrong description of the t option of F.sigmoid_cross_entropy (#3840)
  • Add warnings to documents of experimental features (#3392)
  • Fix contribution guide for test framework change (#3625)
  • Fix typos (#3582, #3583, #3634, #3627, #3543, #3747, #3772), doctests (#3285, #3641, #3654, #3757, #3810), and other minor errors (#3659, #3699, #3771)

Installation

  • Remove requirements for unit testing (#3604)
  • Improve version embedding (#3628)

Examples

  • New example: POS-tagging (#1375)
  • Add --log-interval and --validation-interval options to seq2seq example (#3430)

Tests

  • Stabilize deconvolution2d test (#3107)
  • Move to PyTest instead of nose (#3324)
  • Stabilize roi_pooling_2d test by adjusting tolerance (#3381)
  • Richer gradient check output (#3425)
  • Always cast all inputs to given dtype in gradient check (#3561)
  • Replace the test framework with PyTest (#3590)
  • Configure AppVeyor to use PyTest (#3595)
  • Fix test_init_docstring (#3599)
  • Fix for pytest migration (#3600)
  • Fix math function testing helper to support new style functions. (#3603)
  • Remove requirements for unit testing (#3604)
  • Use Python 3.4.4 on Travis OSX Python 3.4 case (#3609)
  • Remove nose dependency in tests (#3623)
  • Fix decorators to allow users to filter test cases by number of GPUs (#3624)
  • Fix doctest (#3641)
  • Directional derivative (#3652)
  • Use python 3.5 for doctest (#3653)
  • Fix doctest (#3654)
  • Useful error message for parameterized tests (#3661)
  • Run OS X test only on master/stable branch to avoid delay (#3674)
  • Remove PyTest global option configuration (#3685)
  • Fix to skip GPU tests on AppVeyor (#3689)
  • Fix math function test helper to support double backward test of line… (#3697)
  • Fix normalization warning in F.average test (#3704)
  • Use pytest-warnings to set warnings configuration (#3715)
  • F.tile doctest does not test our function (#3757)
  • Fix F.inv test does not test type error as expected (#3769)
  • Ignore DeprecationWarning raised in Theano (#3785)
  • Add double-backward test for F.inv and F.batch_inv (#3793)
  • Fix test tolerance for F.fmod double backward test (#3837)
  • Fix test condition in function tutorial (#3854)
  • Fix test condition in function tutorial (#3873)
  • Adjust tolerances of depth_2_space, space_2_depth tests (#3887)

Others

  • Add a stale bot configuration (#3498)
  • Warn about vecLib on Mac OS X (#3664)
  • Fix coveragerc to measure branch coverage and only target chainer module (#3684)
  • Update stable version link in README (#3745)
chainer - v3.0.0

Published by mitmul about 7 years ago

This is a major release of Chainer v3.0.0. All the updates from the previous major version (v2.0.0) are found in the release notes below:

The biggest change is the introduction of new-style differentiable functions and resulting support for double backward (gradient of gradient) in many functions. The details are linked below:

As for backward compatibility, most v2.x users are not affected by the introduction of the new-style function FunctionNode, because the conventional Function is still supported in v3 (and in future versions). Even if you have custom functions written with Function, you can keep running the same code with Chainer v3.0.0. You need to rewrite such custom functions only when you want to use features added to new-style functions, e.g. double backprop.

The backward compatibility of the overall APIs is slightly broken, though most users are not affected. See the above release notes for the details of broken compatibility.

Examples of grad of grad in Chainer

Usage of the grad function

You can calculate the gradients of any variables in a computational graph w.r.t. any other variables in the graph using the chainer.grad function with the enable_double_backprop=True option.

# Both x and y are chainer.Variable objects
y = x * x * x / 3  # Construct a computational graph

gx, = chainer.grad([y], [x], enable_double_backprop=True)
ggx, = chainer.grad([gx], [x], enable_double_backprop=True)

Here, the above calculation of ggx is equal to:

gx.backward()
x.grad_var  # => This is equal to the above ggx

Of course, one more differentiation gives us 2:

gggx, = chainer.grad([ggx], [x], enable_double_backprop=True)

print(gggx)  #=> variable([ 2.])
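As an independent sanity check (not part of the Chainer API), the chain of derivatives above — for y = x * x * x / 3 we expect gx = x^2, ggx = 2x, and gggx = 2 — can be verified with central finite differences in plain Python:

```python
def central_diff(f, x, eps=1e-5):
    """Numerical derivative of a scalar function via central differences."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x ** 3 / 3.0   # y   = x^3 / 3
g = lambda x: x ** 2         # gx  = dy/dx
gg = lambda x: 2.0 * x       # ggx = d(gx)/dx

x = 1.5
assert abs(central_diff(f, x) - g(x)) < 1e-6    # gx   == x^2
assert abs(central_diff(g, x) - gg(x)) < 1e-6   # ggx  == 2x
assert abs(central_diff(gg, x) - 2.0) < 1e-6    # gggx == 2
```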

The loss function of WGAN-GP

WGAN-GP (which stands for Wasserstein GAN with Gradient Penalty [1]) is one example of a GAN that uses gradients of gradients when calculating the loss. It penalizes the gradient norm to enforce the Lipschitz constraint. The gradient norm is computed at a random interpolation x_hat between a generated point x_tilde and a real example x. The loss including the penalty term is then further differentiated w.r.t. the trainable parameters of the model, so it actually performs double backward for the discriminator. The code below shows how to implement it using the backward() method with the enable_double_backprop=True option:

# G (generator) and D (discriminator) should be implemented somewhere else

x_tilde = G(z)
x_hat = x + u * (x_tilde - x)

# 1st diff
D(x_hat).backward(enable_double_backprop=True)

# lam is the penalty coefficient (lambda is a reserved word in Python)
gradient_penalty = lam * (x_hat.grad_var - 1) ** 2
loss = D(x_tilde) - D(x) + gradient_penalty

model.cleargrads()  # to clear the 1st diff of params
loss.backward()     # 2nd diff

You can also implement it using grad(), which may be faster because it omits the computation of gradients w.r.t. parameters.

x_tilde = G(z)
x_hat = x + u * (x_tilde - x)

# 1st diff
gx_hat, = chainer.grad([D(x_hat)], [x_hat], enable_double_backprop=True)

# lam is the penalty coefficient (lambda is a reserved word in Python)
gradient_penalty = lam * (gx_hat - 1) ** 2
loss = D(x_tilde) - D(x) + gradient_penalty

model.cleargrads()  # to clear the 1st diff of params
loss.backward()     # 2nd diff
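For reference, the penalty term of [1] is λ(‖∇_x̂ D(x̂)‖₂ − 1)², averaged over the batch; here is a framework-independent NumPy sketch of just that term (the function name and array shapes are illustrative):

```python
import numpy as np

def gradient_penalty(grads, lam=10.0):
    """WGAN-GP penalty: lam * (||grad||_2 - 1)^2, averaged over the batch.

    grads -- gradients of D(x_hat) w.r.t. x_hat, shape (batch, dims)
    lam   -- penalty coefficient (the paper uses 10)
    """
    norms = np.sqrt(np.sum(grads ** 2, axis=1))
    return lam * np.mean((norms - 1.0) ** 2)

# Rows with unit norm incur zero penalty; zero gradients cost lam each.
unit = np.eye(4)                      # each row has L2 norm 1
assert gradient_penalty(unit) == 0.0
assert gradient_penalty(np.zeros((2, 3))) == 10.0
```

In the snippets above, x_hat.grad_var plays the role of these gradients.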

[1]: I. Gulrajani et al., “Improved Training of Wasserstein GANs,” https://arxiv.org/abs/1704.00028

Here are some simple comparisons of grad of grad in Chainer and other frameworks:
https://gist.github.com/delta2323/9bbca950ee32c523c7aec2e02ad7f85a

New features

  • Add F.flip function (#3532)
  • Functions with double-backprop support: F.swapaxis (#3480), F.permutate (#3481), F.transpose_sequence (#3525)

Bug fixes

  • Workaround for NumPy dot operation bug on non-contiguous arrays (#3478)
  • Fix KeyError when using evaluator without target 'main' (#3460)
  • Fix AttributeError for missing inv_std in F.fixed_batch_normalization backward (#3479, thanks @zaburo-ch!)

Improvements

  • Remove unused invoke_before_training argument from Trainer.extend (#3516)
  • Improve performance of MultiprocessIterator for non tuple/dict datasets (#3413, thanks @yuyu2172!)
  • Type check in chainer.grad (#3514)

Documentation

  • Document deprecation of stream option of to_gpu (#3519)
  • Add documentation for ParameterStatistics extension (#3323)
  • Fix typos (#3414, thanks @knorth55!; #3455, thanks @HusainZafar!)
  • Fix source links for functions defined with contextlib.contextmanager (#3567)
  • Improve or fix documentation: F.swapaxes, F.squeeze, F.transpose (#3415, thanks @naoto0804!), F.separate, F.select_item, and F.permutate (#3417, thanks @naoto0804!), Constant initializer (#3560), init_scope (#3520), F.reshape (#3515), ConvNet tutorial (#3509)
  • Add documentation of links for framework compatibility (#3476)
  • Fix documentation warnings (#3490)
  • Introduce a docstring checker and fix markup of “returns” sections (#3510)
  • Remove obsolete statement about copy between devices in to_gpu (#3517)
  • Fix type-check reference (#3521)
  • Improve style of deprecation notification (#3522)
  • Avoid horizontal scroll of tables (#3538)
  • Add/modify supported versions of dependencies in the installation guide (#3580)

Tests

  • Skip multiprocess interrupt tests (#3412)
  • Add tests for __delattr__ in Link and Chain (#3416, thanks @naoto0804!)
  • Improve numerical_grad accuracy (#3495)
  • Improve test mode of VAE example (#3431)
  • Delete redundant test settings for F.get_item (#3469, thanks @yuyu2172!)
  • Avoid unwanted output of assert_allclose failure (#3518)
  • Stabilization of stochastic numerical errors
    • upsampling_2d (#3382)
    • gradient_check (#3461)
    • dilated_convolution_2d (#3462)
    • basic_math (#3463)
chainer - v4.0.0a1

Published by unnonouno about 7 years ago

This is the release of v4.0.0a1. See here for the complete list of solved issues and merged PRs.

New features

  • NCCL2 support (#3097, thanks @anaruse!)
  • Tensor-Core support for convolutions (#3388, thanks @anaruse!)
  • New functions
    • F.scatter_add (#3442, thanks @yuyu2172!)
    • F.flip (#3378, thanks @ronekko!)
  • Functions with double-backprop support
    • F.transpose_sequence (#3418)
    • F.copy (#3419)
    • F.swapaxis (#3421)
    • F.permutate (#3424)
    • F.softplus (#3454)
    • F.where (#3491)
    • F.clipped_relu (#3503)
  • Fused LSTM double backprop (#3256)
  • LogReport now serializes the trigger if it has serialize method (#3396, thanks @Hakuyume!)

Bug fixes

  • Fix KeyError when using evaluator without target 'main' (#2815, thanks @Hiroshiba!, #3445)
  • Workaround for NumPy dot operation bug for non-contiguous arrays (#3453)
  • Fix AttributeError for missing inv_std in F.fixed_batch_normalization backward (#3468, thanks @zaburo-ch!)

Improvements

  • Remove unused invoke_before_training argument from Trainer.extend (#3036)
  • Improve performance of MultiprocessIterator for non tuple/dict datasets (#3390, thanks @yuyu2172!)
  • Type check in chainer.grad (#3433)
  • Add VariableNode.get_variable_or_none to improve backward performance (#3448)
  • F.batch_normalization (and L.BatchNormalization) now supports cuDNN when the inputs are float16 (#3386, thanks @anaruse!)

Documentation

  • Fix and improve documentation:
    • init_scope (#3121)
    • FunctionNode (#3441)
    • constants (#3527)
    • F.swapaxes, F.squeeze, F.transpose (#3307, thanks @naoto0804!)
    • type-check reference (#3348)
    • F.n_step_lstm (#3349)
    • F.separate, F.select_item, and F.permutate (#3407, thanks @naoto0804!)
    • TupleDataset (#3432)
    • F.sum (#3497, thanks @akitotakeki!)
    • code highlighting in F.reshape (#3255)
  • Update documentation style:
    • Update navigation menu (#3281)
    • Fix markups (#3283)
    • Avoid horizontal scroll of tables (#3346)
    • Improve style of deprecation notification (#3395)
  • Introduce a docstring checker and fix markup of “returns” sections (#3457)
  • Document deprecation of stream option of to_gpu (#3328)
  • Fix documentation warnings (#3464)
  • Remove obsolete statement about copy between devices in to_gpu (#3272)
  • Add seealso link in using_config documentation (#3335)
  • Fix typos (#3405, thanks @knorth55!, #3450, thanks @HusainZafar!)
  • Fix dead links to modules in tutorial (#3492)
  • Add documentation of links for framework compatibility (#3475)
  • Minor fixes in documentation (#3508)
  • Add/modify supported versions of dependencies in the installation guide (#3579)
  • Fix source links for functions defined with contextlib.contextmanager (#3245)

Examples

  • Change intervals in ImageNet example (#3362)

Tests

  • Skip multiprocess interrupt tests (#3393)
  • Write test for get_svhn.py (#3267, thanks @naoto0804!)
  • Improve test mode of VAE example (#3361)
  • Add tests for __delattr__ in Link and Chain (#3406, thanks @naoto0804!)
  • Avoid unwanted output of assert_allclose failure (#3277)
  • Delete redundant test settings for F.get_item (#3443, thanks @yuyu2172!)
  • Improve numerical_grad accuracy (#3472)
  • Stabilization of stochastic numerical errors
    • roll_axis (#3375)
    • roi_pooling_2d (#3376)
    • upsampling_2d (#3377)
    • basic_math (#3459)
    • upsampling_2d (#3410)
    • F.ceil and F.floor (#3427)
    • gradient_check (#3446)
    • dilated_convolution_2d (#3447)
  • Check deprecation warning in Travis (#3157)

Others

  • Define constant initializers as classes (#3294)
chainer - v2.1.0

Published by mitmul about 7 years ago

This minor release contains new features, bug fixes, and improvements to the documentation and installation procedure. See here for the complete list of solved issues and merged PRs.

New features

  • Use directional derivatives in numerical_grad (#3141)
  • axis argument in chainer.functions.average accepts tuples (#3264)
  • Add intensive_times to testing.condition.repeat to reduce test time. Also some tests are made deterministic (#3334)

Bug fixes

  • Pass raw arrays to the loss function in MultiprocessParallelUpdater (#2954)
  • Dynamically import matplotlib.pyplot in PlotReport (#3111)
  • Run NaN check of gradients in Variable.backward only when they are float (#3220)
  • Deny assigning links in ChainList.init_scope (#3230)
  • CaffeFunction to take BatchNorm scaling factor into account (#3295, thanks @hvy!)
  • F.softmax supports non-contiguous inputs (#3087)

Improvements

  • Use ndarray.copy instead of xp.copy, or pass order='C' explicitly, to keep the existing behavior (#3078)
  • Remove deprecated get_device from examples (#3140, thanks @naoto0804!)
  • Fix typo in the name of kernel for ROIPooling2D (#3186)
  • Add function name in the debug message of NaN check (#3197, #3207, thanks @knorth55!)
  • Speedup Upsampling2D on CPU (#3318)

Document

  • Typos:
    • Link.disable_update (#3063, thanks @Hakuyume!), docstring of gradient_check (#3160), VAE and CTC (#3167, thanks @zchenry!), Chainer's configuration (#3169, thanks @kristofbc!), comment in Optimizer (#3262)
  • Error fixes and improvements:
    • contribution guide (#3079), Updater (#3086, thanks @fiarabbit!), add warnings about preprocessing for dataset with both grayscale and RGB images to the docstring of ImageDataset (#3095, thanks @jinjiren!), add the explanation of the value range of ratio in F.dropout (#3112), docstrings for warnings (#3115), docstrings for doctest (#3162), example code in docstrings (#3165), Variable.__getitem__ (#3195), F.dropout (#3196, thanks @fiarabbit!), chainer.Link (#3226, thanks @chantera!), functions.linear (#3228, thanks @bonprosoft!), warning messages for cuDNN (#3231), clipped_relu (#3232), leaky_relu (#3238), FunctionHook and tutorial of Function (#3250), fix truncation of a summary line (#3284), "Introduction to Chainer" (#3286), BPTT example in RNN tutorial (#3291, thanks @fiarabbit!), GRU documentation where stateless/stateful were reversed (#3345), transpose (#3304, thanks @naoto0804!), where (#3309, thanks @naoto0804!), initializers.NaN (#3342)
  • Hide source link for alien objects (#3113)
  • Remove "Edit on GitHub" link (#3193)
  • Treat sphinx warnings as errors (#3246)
  • Add minor version policy and feature backport policy (#3366)

Test

  • Check DeprecationWarning in the tests of Variable (#3164)
  • Check the type of the argument fed to to_gpu (#3313)
  • Insert assert_warns to ignore warnings (#3317)
  • Stabilize tests for functions in exponential (#3358)

Others

  • Import example modules and util in chainer.training.__init__ (#3055)
chainer - v3.0.0rc1

Published by bkvogel about 7 years ago

This is the release candidate (RC) of v3.0.0. See here for the complete list of solved issues and merged PRs.

CuPy has also been updated to v2.0.0 RC. Please see the release notes for CuPy.

Changes that break compatibility

  • use_cudnn argument is removed from spatial_transformer_grid and spatial_transformer_sampler (#2955). You can use chainer.using_config('use_cudnn', 'auto') to enable cuDNN in these functions.
  • Almost no users will be affected by the following changes.
    • The code for supporting protobuf 2 is removed (#3090). Note that the support of protobuf 2 has been already removed in Chainer v2.
    • Variable.__hash__ is removed (#2961). Note that Variable does not support __eq__, so it was already not hashable.
    • cache_download now raises OSError instead of RuntimeError on a file system error (#2839, thanks @Hakuyume!)

New features

  • New-style function with double backprop support
    • Array: transpose (#3144); reshape, expand_dims, broadcast_to, sum (#3188); concat, split_axis (#3189); flatten (#3190); cast (#3145); rollaxis (#3306); select_item (#3308); __getitem__ (#3243)
    • Connection: linear (#3099); convolution_2d, deconvolution_2d (#3163); embed_id (#3183); lstm (#3206)
    • Activation: sigmoid (#3119), relu (#3175), leaky_relu (#3177), softmax (#3213), log_softmax (#3217)
    • Pooling: max_pooling_2d, average_pooling_2d, upsampling_2d, unpooling_2d, spatial_pyramid_pooling_2d (#3257)
    • Math: unary - (#3142), binary - (#3143), tanh (#3200), exp (#3254)
    • Loss: mean_squared_error (#3194), softmax_cross_entropy (#3296)
    • Noise: dropout (#3356, thanks @bonprosoft!)
    • Normalization: layer_normalization (#3219), batch_normalization and fixed_batch_normalization (#3275)
  • New Functions and Links
    • MGU (#1101)
    • BatchRenormalization, batch_renormalization, and fixed_batch_renormalization (#2302)
    • batch_matmul, which existed in v2, is reimplemented for backward compatibility (#3016)
    • arctan2 (#3130)
    • prod (#3031, thanks @ronekko!)
  • New core features
    • chainer.as_variable() is added (#3218). It can be used to coerce a value into a Variable.
    • Variable.array property is added (#3223). It is equivalent to Variable.data, but .array is safer: if you mix up a Variable with an ndarray, .array immediately raises an error, while .data does not (ndarray has its own .data attribute).
    • chainer.FunctionHook, which is an alias to chainer.function_hook.FunctionHook, is added (#3152, #3153)
    • grad function (#3015). This function takes input and output variables and computes the gradients of the outputs w.r.t. the inputs.
    • check_double_backward utility (#3096, #3268). It can be used to numerically check if the double backprop is consistent with the first-order gradient.
  • Other features
    • The axis argument of average now supports tuple values (#3118)
    • The performance of numerical_grad is improved (#2966). It now numerically checks a randomly chosen directional derivative instead of the full gradient. This change reduces the number of forward computations required for the numerical gradient to a constant, independent of the input dimensionality.
    • Make double backprop support optional in Variable.backward() (#3298). To enable double backprop, you have to pass enable_double_backprop=True explicitly. Note that when you do not need double backprop, it is better to leave this option off; backward() then skips constructing the computational graph of the backpropagation, saving performance overhead (especially memory consumption).
    • CuPy memory profiler with a CuPy memory hook (#2979)
    • Add rgb_format option to get_mnist (#3263)
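The directional-derivative trick behind the faster numerical_grad (#2966) is framework-independent; this sketch (names are illustrative, not Chainer's API) checks an analytic gradient against a single random directional derivative, using only two function evaluations regardless of dimensionality:

```python
import numpy as np

def check_directional_derivative(f, grad_f, x, eps=1e-5, atol=1e-6, seed=0):
    """Compare grad_f against a numerical directional derivative of f.

    Only two evaluations of f are needed, however large x is, instead of
    the 2 * x.size evaluations of a full element-wise gradient check.
    """
    r = np.random.default_rng(seed).standard_normal(x.shape)  # random direction
    numeric = (f(x + eps * r) - f(x - eps * r)) / (2 * eps)   # ~ grad_f(x) . r
    analytic = float(np.dot(grad_f(x).ravel(), r.ravel()))
    assert abs(numeric - analytic) < atol, (numeric, analytic)

f = lambda x: float(np.sum(x ** 2))   # f(x) = ||x||^2
grad_f = lambda x: 2.0 * x            # its exact gradient
check_directional_derivative(f, grad_f, np.arange(5, dtype=np.float64))
```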

Bug fixes

  • Dynamically import matplotlib.pyplot in PlotReport (#2740)
  • Fix _make_npz for ResNetLayers (#3062, thanks @Hakuyume!)
  • Support non-contiguous array in cuDNN path of softmax (#3072) and log_softmax (#3310, thanks @knorth55!)
  • Deny assigning links in ChainList.init_scope() (#3129)
  • Avoid running certain hooks on uninitialized params (#3170)
  • Call VariableNode.data from Parameter.initialize (#3204, thanks @bonprosoft!)
  • Fix nan check (#3208)
  • Fix DictDataset to work in Python 3 (#3237, thanks @bonprosoft!)
  • Always return variable in dropout (#3239, thanks @naoto0804!)
  • CaffeFunction to take BatchNorm scaling factor into account (#3261, thanks @hvy!)
  • Fix params option of check_double_backward (#3268, see the previous section)
  • Check the input type of to_gpu (#3269)
  • Fix fix_random (#3330)
  • Use test mode in predict methods of ResNet, VGG, GoogLeNet (#3201)

Improvements

  • Improve MultiprocessIterator performance, functionality and stability, using Pool (#3076, thanks @grafi-tt!)
  • Check the range of the dropout ratio (#3100)
  • Add function name in the debug message of NaN check (#3161)
  • Fix typo in the name of a kernel used in roi_pooling_2d (#3185, thanks @knorth55!)
  • Make F.cast skip FunctionNode application if no cast is needed (#3191)
  • Fix Variable.backward for manually edited requires_grad (#3192)
  • Avoid using deprecated stream option in to_gpu (#3278)
  • Always raise warning when stream option is specified in to_gpu (#3282)
  • Speedup upsampling_2d on CPU (#3316)
  • Remove unnecessary use of enumerate (#3326)
  • Optimize backward of log2 and log10 (#3352)
  • Fix warning message for cuDNN (#3227)
  • Reduce copy in check_backward (#3312)
  • Small improvement for transpose backward (#3154)
  • Modules related to IntervalTrigger are slightly reorganized (#2990, thanks @Hakuyume!).

Examples

  • New example of machine translation with seq2seq (#2070)
  • Avoid importing matplotlib just to set its backend to Agg in example code (#3043)
  • Remove deprecated get_device from examples (#3122, thanks @naoto0804!)

Documentation

  • Improve "Introduction to Chainer" (#1879)
  • Add "How to write a training loop in Chainer" tutorial (#2736)
  • Add minor version policy and feature backport policy (#3297)
  • Update coding guidelines on shortcut aliases (#3198)
  • Add warnings about preprocessing for dataset with both grayscale and RGB images (#3093, thanks @jinjiren!)
  • Hide source link for alien objects (#3110)
  • Add missing items to the reference manual: FunctionNode FunctionAdapter (#3117), initializers.NaN (#3293)
  • Remove “Edit on GitHub” link (#3080)
  • Treat Sphinx warnings as errors (#3069)
  • Fix example code: RNN tutorial (#3149, thanks @fiarabbit!), fixed doctest failures (#3114, #3247)
  • Fix typos: README (#3156, thanks @lc0!), gradient_check (#3158), Configuration documentation (#3166, thanks @kristofbc!), Variable.grad (#3265), Hyperparameter (#3248), BatchNormalization (#3137)
  • Fix docs: clipped_relu (#3178), leaky_relu (#3179), Variable.__getitem__ (#3180), linear (#3224, thanks @bonprosoft!), link document (#3240), GRU stateless/stateful (#3340), Updater (#3084, thanks @fiarabbit!), backslash escaping (#3174), summary markup with periods (#3235), fix for warnings (#3068)
  • Improve docs: dropout (#3184, thanks @fiarabbit!) (#3116, thanks @naoto0804!), where (#3301, thanks @naoto0804!), transpose (#3302, thanks @naoto0804!), GRU (#3089), doctest code in training loop tutorial (#3249), hinge (#3108), softmax_cross_entropy (#3105), LSTM (#3104), Linear (#3103), binary_accuracy (#3102), embed_id (#3091)

Test

  • Check DeprecationWarning in Variable (#2932)
  • Dump more info on assert_allclose failure (#2936)
  • Check deprecated method in tests of Link (#3155)
  • Avoid using deprecated stream option in to_gpu (#3278)
  • Insert assert_warns to ignore warnings (#3280)
  • Stabilize numerical tests: relu (#3299), tanh (#3305), exponentials (#3354), unpooling_2d (#3341), local_response_normalization (#3355)
  • Ignore warnings for to_gpu (#3322)
  • Improve activation function tests (#3332)
  • Replace get_device (#3363)

Others

  • Configure flake8 to ignore the .git directory (#3077)
  • Improve example index (#3135)