chainer

A flexible framework of neural networks for deep learning

MIT License

Downloads: 86.6K · Stars: 5.9K · Committers: 298
chainer - v6.1.0

Published by beam2d over 5 years ago

This is the release note of v6.1.0. See here for the complete list of solved issues and merged PRs.

Enhancements

  • Avoid unnecessary updates in F.batch_renormalization, and related fixes (#7197)
  • Fix typo in Variable.backward (#7208)
  • MultiprocessParallelUpdater to support new devices (#7246)
  • Add type hints to Variable (#7445)
  • Improve get_device error message when ChainerX is not available (#7461)
  • Check positive dilation in F.convolution_2d (#7499)
  • Check positive dilation in F.deconvolution_2d (#7500)

Bug Fixes

  • Fix uncopyable MultiNodeBatchNormalization (#7254)
  • Fix initialization of L.Linear when called with n_batch_axes (#7300)
  • Improve type check in _values_to_dicts so it also works with Unicode strings on Python 2 (#7323)
  • Fix a bug in Bernoulli.log_prob (#7334, thanks @seiyab!)
  • Fix a bug that root is ignored in scatter_dataset and bcast (#7360)
  • Fix condition to invoke cuDNN dropout (#7374, thanks @crcrpar!)
  • Fix mypy errors (#7465)
  • Make WeightDecay aware of loss scale (#7510)
  • Fix AdamW update rule regression on CPU (#7516)
  • Fix type check of F.where (#7532)
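The WeightDecay fix (#7510) above concerns an interaction with loss scaling: when gradients carry a scale factor to protect FP16 backprop, an additive hook such as weight decay must apply the same factor, or the effective decay rate is divided by the scale after unscaling. A minimal NumPy sketch of the idea (illustrative names, not Chainer's hook API):

```python
import numpy as np

def apply_weight_decay(grad, param, rate, loss_scale=1.0):
    # If `grad` carries a factor of loss_scale from scaled backprop,
    # the decay term must carry the same factor, or the effective
    # decay rate shrinks by loss_scale after unscaling.
    return grad + rate * loss_scale * param

param = np.array([1.0, -2.0], dtype=np.float32)
scale = 128.0
scaled_grad = scale * np.array([0.5, 0.5], dtype=np.float32)

g = apply_weight_decay(scaled_grad, param, rate=0.01, loss_scale=scale)
update = g / scale  # unscale before the optimizer applies the step
```

After unscaling, `update` equals the plain gradient plus `rate * param`, exactly as if no loss scaling had been used.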

Code Fixes

  • Fix code style for long expressions (#7542)

Documentation

  • Fix to clarify the description about initializer argument (#7070)
  • Remove extra spaces in docstrings (#7130)
  • Fix link to ChainerMN docs in performance guide (#7131)
  • Document passive attributes in FunctionTestCase (#7134)
  • Fix dead sphinx links (#7159)
  • Document backend.get_device_from_array (#7168)
  • Document F.copy view behavior (#7174)
  • Add optimizers.MSVAG to documentation (#7193)
  • Add missing doc entry for CommunicatorBase.allgather (#7195)
  • Remove chainerx.md (#7218)
  • Fix grammatical errors in documentation (#7219)
  • Fix typos in chainer.utils.type_check (#7274, thanks @ktns!)
  • Improve device documentation (#7288)
  • Fix capitalization of F.relu in doc (#7299)
  • Fix invalid escape sequences in ChainerX routine docstrings (#7336)
  • Fix F.normalize documentation (#7337, thanks @crcrpar!)
  • Fix format of static_graph.rst (#7399)
  • Avoid setting test_iter.epoch manually in the tutorial of training loop (#7410)
  • Avoid installing ChainerX when building docs of other projects on ReadTheDocs (#7426, thanks @knorth55!)
  • Fix robots.txt to allow indexing root (#7458)
  • Add reference and warning to F.swish document (#7467, thanks @fiarabbit!)
  • Change Deformable Convolution 2D docs to match arguments (#7468, thanks @higumachan!)
  • Remove test coverage from ChainerX contribution guide (#7469)
  • Remove "Comparison with other frameworks" from docs (#7477)
  • Improve F.normalize documentation (#7482, thanks @crcrpar!)

Installation

  • Fix ChainerX compilation with MSVC (#7173, thanks @durswd!)
  • Fix typing requirements (#7566)

Examples

  • Support device specifiers in examples:
    • Support device specifier in image captioning example (#7229)
    • Support device specifiers in MNIST data parallel example (#7233)
    • Support device specifiers in pix2pix example (#7235)
    • Support device specifiers in static graph example (#7236)
    • Support device specifiers in PTB example (#7263)
    • Support device specifiers in ImageNet data parallel example (#7303)
    • Support ChainerX in PTB gentxt example (#7340)
  • Fix sentiment example test (#7238)
  • Warn NaN in FP16 mode in examples:
    • Warn NaN in FP16 mode in wavenet example (#7376)
    • Warn NaN in FP16 mode in static_graph_optimizations/mnist example (#7377)
    • Warn NaN in FP16 mode in word2vec example (#7378)
    • Warn NaN in FP16 mode in sentiment example (#7380)
    • Warn NaN in FP16 mode in static_graph_optimizations/cifar example (#7381)
    • Warn NaN in FP16 mode in reinforcement learning examples (#7382)
    • Warn NaN in FP16 mode in dcgan example (#7383)
    • Warn NaN in FP16 mode in memnn example (#7386)
    • Warn NaN in FP16 mode in pos example (#7387)
    • Warn NaN in FP16 mode in pix2pix example (#7388)
    • Warn NaN in FP16 mode in vae example (#7412)
  • Implement reset method in the PTB example (#7535)
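The "Warn NaN in FP16 mode" changes all add the same kind of guard: periodically scan parameters or gradients for NaN, which signals FP16 overflow/underflow, and warn instead of diverging silently. A rough NumPy sketch of such a check (a hypothetical helper, not the examples' actual code):

```python
import warnings
import numpy as np

def warn_if_nan(arrays, context=""):
    # Scan named arrays (e.g. model params after an update) for NaN.
    for name, arr in arrays.items():
        if np.isnan(arr).any():
            warnings.warn("NaN detected in %s %s" % (name, context))
            return True
    return False

params = {"W": np.array([1.0, np.nan], dtype=np.float16),
          "b": np.zeros(2, dtype=np.float16)}
found = warn_if_nan(params, context="after update")
```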

Tests

  • Use CUDA_VISIBLE_DEVICES in ChainerX tests (#7294)
  • Move test_cuda.py to backends_tests (#7295)
  • Improve mergify configuration (#7301)
  • Add configuration of new CI system (#7403)
  • Change 0 to 0.0 for python2 (#7508)
  • Add a test to reproduce the bcast deadlock problem (#7554)

Others

  • Add .mergify.yml (#7151)
  • Remove "Research projects using Chainer" from README (#7459)
chainer - v7.0.0a1

Published by kmaehashi over 5 years ago

This is the release note of v7.0.0a1. See here for the complete list of solved issues and merged PRs.

Highlights

  • Many examples including ImageNet, DCGAN and VAE start supporting ChainerX arrays

New Features

  • Support orthogonal embedding initialization (#6031)
  • Add an option in links.loss.CRF1d to automatically sort the input sequence (#6351)
  • Add AdaBound (and AMSBound) (#6388, thanks @hitsgub!)
  • Add squared_difference to chainerx (#6501, thanks @aksub99!)
  • Implement array vs array functionality to chainerx.minimum (#6541, thanks @aksub99!)
  • Add FP16 support to send/recv (#6552)
  • Implement array to array functionality to chainerx.maximum (#6570, thanks @aksub99!)
  • Add Mean Var Python Bindings to ChainerX (#6640, thanks @kshitij12345!)
  • Add chainerx.ceil (#6705, thanks @kshitij12345!)
  • Add chainerx.floor (#6707, thanks @kshitij12345!)
  • Add chainerx.absolute (#6715, thanks @dido1998!)
  • Add chainerx.argmin and chainerx.ndarray.argmin (#6740, thanks @Harshan01!)
  • Add chainerx.amin and chainerx.min (#6752, thanks @Harshan01!)
  • Add chainerx.sinh, chainerx.cosh, chainerx.arcsinh and chainerx.arccosh (#6776, thanks @kshitij12345!)
  • Add chainerx.fabs and chainerx.sign (#6777, thanks @kshitij12345!)
  • Add chainerx.logical_and and chainerx.logical_or (#6779, thanks @kshitij12345!)
  • Add chainerx.all and chainerx.any (#6781, thanks @kshitij12345!)
  • Add chainerx::Softmax and chainerx.softmax (#6814, thanks @tohmae!)
  • Add zero fill mode in allreduce of chainermn (#6817)
  • Make BatchNorm states public (#6847)
  • Introduce Native/CUDA macros for registering standard elementwise ops (#6870, thanks @kshitij12345!)
  • Make adam variants more accessible (#6874, thanks @crcrpar!)
  • Add chainerx::Swapaxes and chainerx.swapaxes (#6897, thanks @kshitij12345!)
  • Add chainerx.logical_xor (#7014, thanks @ishanrai05!)
  • Add chainerx.log10 (#7015, thanks @ishanrai05!)
  • Add chainerx.isfinite (#7016, thanks @kshitij12345!)
  • Add bitwise ops to ChainerX (#7017, thanks @kshitij12345!)
  • Add chainerx.arctan2 (#7028, thanks @kshitij12345!)
  • Add chainerx.expand_dims (#7029, thanks @kshitij12345!)
  • Add chainerx.flip, chainerx.fliplr and chainerx.flipud (#7065, thanks @kshitij12345!)
  • Add chainerx.where (#7067, thanks @kshitij12345!)
  • Add F.arctanh (#7095)

Enhancements

  • Improve error message of gradient_check.check_double_backward (#6427)
  • Improve link_hooks.SpectralNormalization (#6655, thanks @crcrpar!)
  • ChainerX Op registration: normalization (#6719)
  • ChainerX Op registration: arithmetic (#6723)
  • Implement Relu in ChainerX (#6731, thanks @dido1998!)
  • Make device functions public (#6744)
  • ChainerX Op registration: creation (#6745)
  • ChainerX Op registration: linalg (#6746)
  • Allow snapshot_object to have condition and writer options (#6762)
  • Support ChainerX fallback when GetItem fails because indices contain a chainerx.ndarray (#6769)
  • Fix Evaluator for chainer.dataset.converter (#6768)
  • Rename patients argument to patience in EarlyStoppingTrigger (#6784)
  • Remove Backend ctor and use CreateBackend (#6785)
  • ChainerX Op registration: pooling (#6800)
  • Define __str__ for Device classes (#6816, thanks @nishnik!)
  • Simplify numeric.h (#6832)
  • ChainerX Op registration: connection (#6833)
  • ChainerX Op registration: array members (#6834)
  • ChainerX Op registration: math (#6842)
  • Mixed dtypes: chainerx::Minimum (#6858)
  • Update distributions.independent (#6860, thanks @ganow!)
  • Add chainerx.ndarray.all and chainerx.ndarray.any (#6926)
  • Fix HuberLoss.forward to avoid loss of significance (#6940)
  • Support Tensor Core in chainerx::Dot (#6960)
  • Fix F.get_item backward for ChainerX (#6991)
  • Support NumPy scalars in ChainerX arithmetics (#7004)
  • Implement NumPy-like pairwise reduction for stability (#7043, thanks @grafi-tt!)
  • Support mixed dtypes in Stack (#7058)
  • ChainerX Scalar / Array divisions (#7075)
  • Fix Reshape copy condition (#7080)
  • Fix trigger constructors to raise errors instead of assertion failures (#7101)
  • Support Tensor Core in chainerx::Conv (#7112)
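The pairwise-reduction enhancement (#7043) follows NumPy's approach to summation: reducing by recursive halving keeps accumulated rounding error roughly O(log n) rather than the O(n) of sequential accumulation, which matters for long float16/float32 reductions. A small self-contained illustration (not Chainer's implementation):

```python
import numpy as np

def pairwise_sum(x):
    # Sum by recursive halving: rounding error grows roughly O(log n)
    # instead of the O(n) of a left-to-right loop.
    if x.size <= 8:
        s = x.dtype.type(0)
        for v in x:
            s = x.dtype.type(s + v)
        return s
    mid = x.size // 2
    return x.dtype.type(pairwise_sum(x[:mid]) + pairwise_sum(x[mid:]))

x = np.full(2 ** 16, 0.1, dtype=np.float32)

naive = np.float32(0)
for v in x:
    naive = np.float32(naive + v)  # sequential float32 accumulation

# pairwise_sum(x) stays much closer to the float64 reference sum
# than the sequential loop does.
```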

Performance Improvements

  • Optimized ChainerX-to-CuPy ndarray conversion (#6204)
  • Use cuDNN in ReLU (#6993)
  • Fast integer scale unpooling (#7114, thanks @tkerola!)
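Fast integer scale unpooling (#7114) exploits the fact that unpooling with ksize == stride == k and no padding is plain nearest-neighbour upsampling by k, so every input pixel can simply be repeated into a k × k block instead of running a general col2im. In NumPy terms (a sketch of the equivalence, not the actual kernel):

```python
import numpy as np

def unpool2d_int(x, k):
    # x: (N, C, H, W) -> (N, C, H*k, W*k); each input pixel
    # becomes a k x k block in the output.
    return np.repeat(np.repeat(x, k, axis=2), k, axis=3)

x = np.arange(4, dtype=np.float32).reshape(1, 1, 2, 2)
y = unpool2d_int(x, 2)  # shape (1, 1, 4, 4)
```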

Bug Fixes

  • Avoid throwing in destructors (#6725)
  • Fix TypeError during BN deserialization on Win64 (#6765, thanks @hyabe!)
  • Fix chainerx.astype casting from float16 to bool in CUDA (#6780, thanks @kshitij12345!)
  • Fix ArgMax of CUDA when all values are negative (#6783)
  • Fix unchain gradient pull (#6804, thanks @Rishav1!)
  • Remove chainerx.square fallback since it is implemented in C++ (#6823)
  • Fix stack overflow caused when to_gpu/to_cpu/to_intel64 were overridden (#6824)
  • Fix filename arg of PlotReport (#6866)
  • Make InvalidType picklable (#6884, thanks @zaltoprofen!)
  • Rename the macro name for AMinOp (#6922)
  • Fix terminal column width retrieval in backprop traceback in Python 2 (#6949)
  • Avoid using ImportError during import cupy (#6954)
  • Fix cuDNN descriptor double destroy (#6972)
  • Fix ConcatWithAsyncTransfer (#6992)
  • Set allow_pickle=True (#7036)
  • Fix subview of zero-sized arrays (#7037)
  • Fix At output offset (#7046)
  • Fix handling of ndarray offsets (#7047)
  • Fix construction of std::shared_ptr with custom deleter in chainer_interop.cc (#7107)
  • Fix build with clang (#7119)

Code Fixes

  • Check headers with clang-tidy (#6441)
  • Refactor CUDA batch norm tensor descriptor (#6724)
  • Fix comments and add TODO to indexing routines (#6789)
  • Add cuda_internal::DeviceInternals to wrap handle etc. (#6820)
  • Clean up DeviceInternals (#6827)
  • Rename CHAINERX_REGISTER_OP_{NATIVE,CUDA} to CHAINERX_{NATIVE,CUDA}_REGISTER_OP (#6865)
  • Add comments on del (#6933)
  • Unify variable names in gradient_check (#6935)
  • Align macro parameter name (#6941)
  • Introduce chainerx/kernels/ and rename existing device "op"s to "kernel"s (#6944)
  • Remove obsolete "Op" files (#6959)
  • Prefix macro with CHAINERX as per convention (#7022)
  • Use macro in exp_log.{cc/cu} (#7068)
  • Pass arguments by value in native::Float16 and cuda::Float16 (#7069)
  • Avoid importing object (#7110)

Documentation

  • Fix to clarify the description about initializer argument (#6317)
  • Add docs for two loss functions (#6349, thanks @hsezhiyan!)
  • Improve docs of square, maximum and squared_difference (#6451, thanks @aksub99!)
  • Append to v6 upgrade guide about Python 3.4 support drop (#6493)
  • Add reference and warning to F.swish document (#6509, thanks @fiarabbit!)
  • Document fix in default initializer (#6519)
  • Convert utilities docs to one page (#6595, thanks @trancenoid!)
  • Add chainer.get_device to doc (#6735)
  • Use search index (#6881)
  • Add chainerx.sigmoid docs (#6889, thanks @crcrpar!)
  • Fix typo in F.convolution_2d (#6890, thanks @crcrpar!)
  • Document chainer.testing.LinkTestCase (#6895, thanks @crcrpar!)
  • Update README.txt for a link to the tutorial (#6896)
  • Fix broken link in chainerx.md (#6899, thanks @tkat0!)
  • Document passive attributes in FunctionTestCase (#6931)
  • Fix documentation of renamed arguments (#6932)
  • Fix typo in pickle_dataset.py (#6942)
  • Update ChainerX contribution guide (#6951)
  • Support Sphinx 2.0 and use absolute path to support the latest RTD (#7027)
  • Fix link to ChainerMN docs in performance guide (#7044)
  • Update supported MPI list (#7086)
  • Document CHAINERX_ENABLE_BLAS environment variable (#7098, thanks @durswd!)
  • Move backend docs to a separate page (#7099)
  • Document backend and device objects (#7102)
  • Remove extra spaces in docstrings (#7125)
  • Fix AdamW docstring (#7137, thanks @crcrpar!)
  • Fix spelling of AMSGrad (#7138, thanks @crcrpar!)

Installation

  • CMake for Windows (clang-cl) (#7039, thanks @durswd!)
  • Exclude protobuf 3.8.0rc1 from dependencies (#7083)

Examples

  • Improve chainer examples (#6399, thanks @crcrpar!)
  • Fix reinforcement_learning example to work with default dtype (#6624)
  • Support default dtype in vae example (#6717)
  • Support ChainerX in reinforcement learning example (#6733)
  • Support ChainerX in wavenet example (#6736)
  • Trivial fixes to Wavenet example (#6737)
  • Support ChainerX in VAE example (#6739)
  • Support ChainerX in text classification example (#6769)
  • Support ChainerX in DCGAN example (#6773)
  • Support ChainerX in word2vec example (#6774)
  • Show download progress bar in image-captioning example (#6775)
  • Support ChainerX in memnn example (#6854)
  • Use filename in PlotReport example (#6880, thanks @crcrpar!)
  • Support ChainerX in CIFAR example (#6936)
  • Support ChainerX in POS-tagging example (#7081)
  • Support ChainerX in Sentiment example (#7087)
  • Add progress bar to sentiment analysis example (#7103)
  • Support ChainerX in Model Zoo example (#7129)

Tests

  • Simplify F.mean_absolute_error test (#6253, thanks @aksub99!)
  • Simplify F.bilinear test (#6488, thanks @ishanrai05!)
  • Simplify F.deconvolution_2d test (#6498, thanks @ishanrai05!)
  • Display pytest summary (#6625, thanks @kshitij12345!)
  • Travis test against v6 branch (#6749)
  • Fix Travis with macOS (#6754)
  • Dodge nondifferentiable inputs in chainerx.max test (#6761)
  • Make too slow initializers' tests faster (#6792)
  • Fix test failures in math test (#6798)
  • Simplify F.flip test (#6801, thanks @ishanrai05!)
  • Simplify F.where test (#6802, thanks @ishanrai05!)
  • Simplify F.repeat test (#6803, thanks @ishanrai05!)
  • Fix F.elu test numeric error (#6841)
  • Relax tolerance for float16 in unary_math_function_unittest (#6845)
  • Relax tolerances and avoid non-differentiable points for FP16 in triplet loss tests (#6855)
  • Simplify F.unpooling_nd test (#6861, thanks @ishanrai05!)
  • Simplify F.local_response_normalization test (#6867, thanks @ishanrai05!)
  • Simplify F.reshape test (#6868, thanks @ishanrai05!)
  • Simplify F.layer_normalization test (#6871, thanks @ishanrai05!)
  • Fix test failure in test_spatial_transformer_sampler.py (#6883)
  • Simplify F.prelu test (#6887, thanks @ishanrai05!)
  • Simplify F.flatten test (#6888, thanks @ishanrai05!)
  • Simplify F.dstack test (#6891, thanks @ishanrai05!)
  • Simplify F.sign test (#6898, thanks @hikjik!)
  • Simplify F.ceil test (#6900, thanks @hikjik!)
  • Simplify F.floor test (#6901, thanks @hikjik!)
  • Fix F.rrelu test instability (#6915)
  • Fix F.max_pooling_nd test instability (#6917)
  • Fix flaky Huber loss test (#6924)
  • Simplify F.fmod test (#6937, thanks @hikjik!)
  • Simplify F.fix test (#6938, thanks @hikjik!)
  • Fix test parameters in ChainerX math tests (#6946)
  • Increase the default columns in Travis CI (#6948)
  • Fold Travis test outputs (#6961)
  • Simplify 'F.min', 'F.max' test (#6962, thanks @hikjik!)
  • Simplify 'F.exp', 'F.log' test (#6963, thanks @hikjik!)
  • Simplify F.expm1 test (#6965, thanks @hikjik!)
  • Fix flaky ChainerX max_pool test (#6975)
  • Simplify F.bias test (#6976, thanks @hikjik!)
  • Simplify F.cumsum test (#6977, thanks @hikjik!)
  • Refactor Variable.addgrad test (#6979)
  • Simplify F.cosh, F.sinh test (#6980, thanks @hikjik!)
  • Simplify F.log1p test (#6981, thanks @hikjik!)
  • Simplify F.linear_interpolate test (#6984, thanks @hikjik!)
  • Simplify F.fft, F.ifft test (#6985, thanks @hikjik!)
  • Simplify F.matmul test (#6987, thanks @ishanrai05!)
  • Fix flaky TestLogSumExp (#6988)
  • Fix flaky TestMin (#6989)
  • Simplify F.get_item test (#6990)
  • Simplify F.inv, F.batch_inv test (#6994, thanks @hikjik!)
  • Simplify F.batch_l2_norm_squared test (#6996, thanks @hikjik!)
  • Simplify F.accuracy test (#7006, thanks @hikjik!)
  • Simplify F.binary_accuracy test (#7007, thanks @hikjik!)
  • Simplify F.r2_score test (#7008, thanks @hikjik!)
  • Simplify F.permutate test (#7010, thanks @hikjik!)
  • Simplify F.scatter_add test (#7012, thanks @hikjik!)
  • Simplify F.separate test (#7013, thanks @hikjik!)
  • Simplify F.logsumexp test (#7018, thanks @hikjik!)
  • Skip tests that fail with NumPy 1.16.3 (#7021)
  • Add broadcast test in test_math.py (#7023)
  • Fix flaky chainerx.abs test (#7024)
  • Remove ChainerX acceptance tests (#7026)
  • Fix flaky chainerx.tan test (#7033)
  • Display pytest summary (cont.) (#7089)

Others

  • Make it easier to copy the instruction in the issue template (#6665)
  • Make git ignore chainerx/libchainerx.dylib (#6666)
  • Add .mergify.yml (#7074)
  • Improve mergify configuration (#7111)
chainer - v6.0.0

Published by beam2d over 5 years ago

This is the release note of v6.0.0. See here for the complete list of solved issues and merged PRs.

This release note only covers the difference from v6.0.0rc1; for all highlights and changes, please refer to the release notes of the pre-releases.

See the Upgrade Guide if you are upgrading from previous versions.

Highlights

  • AdaBound and AMSBound are now supported by Adam
  • The performance of unpooling with integer scaling is improved
  • Many examples including ImageNet, DCGAN and VAE support ChainerX
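On the AdaBound/AMSBound highlight: these Adam variants (from Luo et al.'s AdaBound paper) clip the per-element Adam step size into a band that tightens around a final SGD-like learning rate as training proceeds. The sketch below is a simplified version with assumed default hyperparameters, not Chainer's optimizers.AdaBound:

```python
import numpy as np

def adabound_step(p, g, m, v, t, alpha=1e-3, final_lr=0.1,
                  beta1=0.9, beta2=0.999, gamma=1e-3, eps=1e-8):
    # Standard Adam moment estimates with bias correction.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Clip the per-element step size into a band that tightens
    # around final_lr as t grows (the AdaBound bounds).
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))
    step = np.clip(alpha / (np.sqrt(v_hat) + eps), lower, upper)
    return p - step * m_hat, m, v

p, m, v = np.array([1.0]), np.zeros(1), np.zeros(1)
for t in range(1, 201):
    g = 2 * p                      # gradient of f(p) = p**2
    p, m, v = adabound_step(p, g, m, v, t)
# p has moved toward the minimum at 0
```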

New Features

  • Implement array vs array functionality to chainerx.minimum (#6813, thanks @aksub99!)
  • Add logical_and and logical_or to ChainerX (#6821, thanks @kshitij12345!)
  • Add squared_difference to ChainerX (#6822, thanks @aksub99!)
  • Add AdaBound (and AMSBound) (#6846, thanks @hitsgub!)
  • Add condition and writer option to snapshot_object (#6943)
  • Add chainerx.ceil (#6852, thanks @kshitij12345!)

Enhancements

  • Make ChainerX device functions public (#6760)
  • Fix Evaluator for chainer.dataset.converter (#6790)
  • Remove ChainerX Backend ctor and use CreateBackend (#6809)
  • Improve link_hooks.SpectralNormalization (#6877, thanks @crcrpar!)
  • Update distributions.independent (#6945, thanks @ganow!)
  • Define __str__ for Device classes (#7092, thanks @nishnik!)
  • Fix trigger constructors to raise errors instead of assertion failures (#7105)

Performance Improvements

  • Fast integer scale unpooling (#7127)

Bug Fixes

  • Avoid throwing in destructors (#6755)
  • Fix ArgMax of CUDA when all values are negative (#6796)
  • Fix chainerx.astype casting from float16 to bool in CUDA (#6797, thanks @kshitij12345!)
  • Fix TypeError during BN deserialization on win64 (#6812, thanks @hyabe!)
  • Remove chainerx.square fallback since it is implemented in C++ (#6828)
  • Fix stack overflow caused when to_gpu/to_cpu/to_intel64 were overridden (#6849)
  • Fix unchain gradient pull (#6918, thanks @Rishav1!)
  • Fix filename arg of PlotReport (#6928)
  • Make InvalidType picklable (#6934, thanks @zaltoprofen!)
  • Fix terminal column width retrieval in backprop traceback in Python 2 (#6958)
  • Avoid using ImportError during import cupy (#7011)
  • Fix ConcatWithAsyncTransfer (#7019)
  • Set allow_pickle=True (#7048)
  • Fix subview of zero-sized arrays (#7051)
  • Fix At output offset (#7054)
  • Fix handling of ndarray offsets (#7056)
  • Fix construction of std::shared_ptr with custom deleter in chainer_interop.cc (#7109)
  • Add zero fill mode in allreduce of chainermn (#7142)

Code Fixes

  • Fix comments and add TODO to indexing routines (#6793)
  • Refactor CUDA batch norm tensor descriptor (#6805)
  • Add cuda_internal::DeviceInternals to wrap handle etc. (#6826)
  • Clean up DeviceInternals (#6830)
  • Avoid importing object (#7121)
  • ChainerX op registration: normalization (#6851)

Documentation

  • Append to v6 upgrade guide about Python 3.4 support drop (#6751)
  • Fix broken link in chainerx.md (#6916, thanks @tkat0!)
  • Use search index (#6930)
  • Fix typo in pickle_dataset.py (#6964)
  • Update ChainerX contribution guide (#6971)
  • Document chainer.testing.LinkTestCase (#7001, thanks @crcrpar!)
  • Update supported MPI list (#7113)
  • Document CHAINERX_ENABLE_BLAS environment variable (#7120)
  • Fix documentation of renamed arguments (#7123)
  • Backport #6595, #7099 and #7102 (#7152)

Installation

  • Exclude protobuf 3.8.0rc1 from dependencies (#7088)

Examples

  • Improve Chainer examples (#6753, thanks @crcrpar!)
  • Support ChainerX in reinforcement learning example (#6787)
  • Support ChainerX in VAE example (#6791)
  • Support ChainerX in word2vec example (#6795)
  • Support ChainerX in DCGAN example (#6799)
  • Support ChainerX in wavenet example (#6806)
  • Support ChainerX in CIFAR example (#6957)
  • Support ChainerX in text classification example (#6997)
  • Use filename in PlotReport example (#7009, thanks @crcrpar!)
  • Fix reinforcement_learning example to work with default dtype (#7049)

Tests

  • Travis test against v6 branch (#6750)
  • Fix Travis with macOS (#6758)
  • Dodge nondifferentiable inputs in chainerx.max test (#6766)
  • Fix F.elu test numeric error (#6844)
  • Fix test failures in math test (#6850)
  • Relax tolerance for float16 in unary_math_function_unittest (#6919)
  • Fix F.rrelu test instability (#6920)
  • Fix F.max_pooling_nd test instability (#6927)
  • Relax tolerances and avoid non-differentiable points for FP16 in triplet loss tests (#6929)
  • Fold Travis test outputs (#6967)
  • Increase the default columns in Travis CI (#6973)
  • Fix flaky TestLogSumExp (#6999)
  • Fix flaky ChainerX max_pool test (#7002)
  • Fix test failure in test_spatial_transformer_sampler.py (#7020)
  • Quickfix: skip tests that fail with NumPy 1.16.3 (#7025)
  • Fix flaky Huber loss test (#7052)
  • Fix flaky chainerx.tan test (#7053)
  • Display pytest summary (#7090)
  • Display pytest summary (cont.) (#7091)
  • Make too slow initializers' tests faster (#7122)

Others

  • Make git ignore chainerx/libchainerx.dylib (#6885)
chainer - v6.0.0rc1

Published by kmaehashi over 5 years ago

This is the release note of v6.0.0rc1. See here for the complete list of solved issues and merged PRs.

Announcements

  • After this release, the master branch is switched to the development of v7 series. v6.0.0 will continue developing at the v6 branch.
  • (#6629) You can now access the product backlog (the task list that the ChainerX core team intends to work on) as a spreadsheet here. Note that the sheet is actively edited by the ChainerX core dev team. The items are NOT promises; we may drop any feature from the list at any time, but you can use it to see in which direction development is heading in the near future.

Highlights

  • Mixed precision training support is improved.
    • In particular, a mixed precision mode (a.k.a. the mixed16 dtype) is added. You can set the environment variable CHAINER_DTYPE=mixed16 to make Chainer choose appropriate dtypes for mixed precision training (float16 in most places, but float32 where it is better for precision or performance).
    • Loss scaling, which avoids underflow during backprop with float16, now supports a dynamic mode. In this mode, the scaling factor is adjusted during training so that backprop does not overflow. You can use it with (optimizer).loss_scaling(). See the documentation for details.
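The dynamic mode chooses the scaling factor automatically: gradients are unscaled after backprop, the update is skipped and the scale reduced when an overflow (Inf/NaN) appears, and the scale is grown again after a long run of clean steps. A minimal NumPy sketch of that control loop (illustrative; the class name and defaults are assumptions, see Chainer's loss_scaling documentation for the real API):

```python
import numpy as np

class DynamicLossScaler:
    """Grow the scale on clean steps, shrink it on overflow."""
    def __init__(self, scale=2.0 ** 15, factor=2.0, interval=1000):
        self.scale, self.factor, self.interval = scale, factor, interval
        self._clean_steps = 0

    def update(self, grads):
        # Return unscaled grads, or None if the step must be skipped.
        if any(not np.all(np.isfinite(g)) for g in grads):
            self.scale /= self.factor      # overflow: back off
            self._clean_steps = 0
            return None
        self._clean_steps += 1
        if self._clean_steps % self.interval == 0:
            self.scale *= self.factor      # long clean run: grow
        return [g / self.scale for g in grads]

scaler = DynamicLossScaler(scale=4.0)
ok = scaler.update([np.array([8.0, 4.0])])   # finite: unscaled grads
bad = scaler.update([np.array([np.inf])])    # overflow: None, scale halved
```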

Changes without compatibility

  • Deprecate old NCCL versions and related communicators (#6506)
    • Support of NCCL<2.3 is deprecated. We encourage users to use NCCL 2.3 or later.

New Features

  • Human readable representation of link and chain (#4853, thanks @wkentaro!)
  • Add variable.item() (#5797, thanks @crcrpar!)
  • Refactor Link.to_device family (#5986)
  • Add decorrelated batch normalization (#6150, thanks @crcrpar!)
  • Add option unit to CupyMemoryProfileHook.print_report() (#6256, thanks @hitsgub!)
  • Add distributions.Independent (#6324, thanks @ganow!)
  • Dynamic loss scaling (#6337, thanks @anaruse!)
  • Add ChainerX FloorDivide (#6350)
  • Customizable forward output check in testing.FunctionTestCase (#6444)
  • Adding fp16 support to the ChainerMN communicators (#6448)
  • mixed16 mode and its support in L.BatchNormalization (#6456)
  • Add shape and dtype check before allreduce (#6461)
  • Add F.relu6 as an alias to F.clipped_relu (#6463, thanks @aksub99!)
  • Implementation of sigmoid for ChainerX (#6472, thanks @dido1998!)
  • Add minimum to chainerx (#6477, thanks @aksub99!)
  • Add square to chainerx (#6486, thanks @aksub99!)
  • Add chainerx.testing.integral_dtypes (#6526)
  • Support for chainer.mixed16 data type in PureNcclCommunicator (#6548)
  • Add LinkTestCase to simplify link tests (#6559)
  • Add Sin and Cos to chainerx (#6601, thanks @kshitij12345!)
  • Support for fp16 and mixed16 in MultiNodeBatchNormalization of ChainerMN (#6619)
  • Add tan, arcsin, arccos, arctan to ChainerX (#6703, thanks @IvanYashchuk!)
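Among these, F.relu6 (#6463) is simply clipped_relu with the cap fixed at 6, i.e. min(max(0, x), 6), an activation common in mobile architectures. In NumPy terms:

```python
import numpy as np

def relu6(x):
    # clipped_relu with z = 6: clamp activations to [0, 6].
    return np.minimum(np.maximum(x, 0), 6)

y = relu6(np.array([-3.0, 0.5, 7.0]))  # -> [0.0, 0.5, 6.0]
```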

Enhancements

  • Improve F.resize_images speed (#5753, thanks @grafi-tt!)
  • Improve F.group_normalization via cuDNN call (#5924, thanks @grafi-tt!)
  • Fix backward of F.average_pooling_nd with pad_value of None (#6332, thanks @crcrpar!)
  • Support for fp16 in naive comm (#6333)
  • Change backward of F.log_ndtr to avoid NaN (#6340)
  • Stop retaining y.grad on y.backward(retain_grad=False) (#6348)
  • Set requires_grad explicitly in gradient_check and function test (#6364)
  • Fix error messages in get_fans (#6365)
  • ChainerX dtype promotion: mathematical functions (#6379)
  • Mixed dtype: concatenate (#6381)
  • ResultType to take kind into account (#6419)
  • Improve FunctionTestCase error message (#6426)
  • Mixed dtype: arithmetics (#6432)
  • Change intermediate dtype of Adam for float16 parameters to float32 (#6442)
  • Mixed dtype: dot (#6443)
  • Avoid using pytest attributes during import (#6453)
  • Dot product for higher dimensions in ChainerX (#6476, thanks @dido1998!)
  • Remove dtype from chainerx.Scalar (#6481)
  • Mixed dtype: BatchNorm and FixedBatchNorm (#6484)
  • Support chainerx::Take indices other dtype than int64 (#6485)
  • Keep backward compatibility on cupy.cudnn.batch_normalization_forward_training (#6497)
  • Deprecate old NCCL versions and related communicators (#6506)
  • Mixed dtype chainerx::conv and chainerx::conv_transpose (#6510)
  • Support non-float cast in F.cast (#6518)
  • Remove restriction of x.dtype == b.dtype in F.convolution_nd and F.deconvolution_nd (#6524)
  • Avoid exposing chainerx.Scalar to Python (#6535)
  • Fix parameterize_pytest to allow parameterizing with tuples (#6554)
  • Change device spec (#6563)
  • Mixed dtype support in chainerx.linear (#6569)
  • Check lengths of args of chainer.grad (#6580)
  • Mixed dtype: comparison (#6590)
  • Fix linspace (#6605, thanks @kshitij12345!)
  • Add PerformanceWarning (#6617)
  • Implemented ChainerX version of Clipped ReLU forward (#6627, thanks @Harshan01!)
  • Allow comma separated keys in testing.product (#6635)
  • BatchNormalization to only allocate dummy mean and var in cuDNN path (#6656)
  • Generate shorter class names for parameterized tests (#6660)
  • ChainerX dynamic op registry (#6675)
  • Remove unnecessary broadcasts from F.layer_normalization (#6680, thanks @hitsgub!)
  • Remove unnecessary broadcasts from F.l2_normalization (#6681, thanks @hitsgub!)
  • Support cupy-cuda101 package (#6700)
  • Properly handle FP16 in D.Normal (#6709)
  • Mixed-dtype: minimum and maximum (#6713)
  • Op registration: indexing (#6718)
  • Op registration: logic (#6727)
  • Op registration: trigonometric (#6729)
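Changing Adam's intermediate dtype to float32 for float16 parameters (#6442) is a standard mixed-precision trick: keep FP16 weights, but hold the moment estimates and do the update arithmetic in FP32 so small second-moment values do not underflow, casting back only at the end. A rough NumPy sketch (not Chainer's UpdateRule code):

```python
import numpy as np

def adam_step_fp16(p16, g16, m32, v32, t, lr=1e-3,
                   beta1=0.9, beta2=0.999, eps=1e-8):
    # float16 params, float32 optimizer state and arithmetic.
    g = g16.astype(np.float32)
    m32 = beta1 * m32 + (1 - beta1) * g
    v32 = beta2 * v32 + (1 - beta2) * g * g
    m_hat = m32 / (1 - beta1 ** t)
    v_hat = v32 / (1 - beta2 ** t)
    p32 = p16.astype(np.float32) - lr * m_hat / (np.sqrt(v_hat) + eps)
    return p32.astype(np.float16), m32, v32   # cast back at the end

p = np.array([0.5], dtype=np.float16)
m = np.zeros(1, dtype=np.float32)
v = np.zeros(1, dtype=np.float32)
p, m, v = adam_step_fp16(p, np.array([0.01], dtype=np.float16), m, v, t=1)
```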

Bug Fixes

  • Forbid calling empty Sequential (#6304)
  • Fix fp16 issue in batch normalization (#6323, thanks @anaruse!)
  • Fix F.softmax_cross_entropy float16 under/overflow (#6366)
  • Fix lazy init of BatchNormalization link (#6369)
  • Fix str.join TypeError in FunctionTestCase helper (#6370)
  • Fix chainer.links.NStepRNN and its variants (#6415, thanks @crcrpar!)
  • Fix an off-by-one in slicing of chainerx::Array (#6540)
  • Fix more corner cases in chainerx::Slice (#6557)
  • Fix dimension check of chainerx::Linear (#6593, thanks @crcrpar!)
  • Fix ChainerX optimizer fallback for non-default devices (#6699)
  • Fix DeviceResident.to_gpu fallback argument (#6712)
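The F.softmax_cross_entropy float16 fix (#6366) targets the classic stability problem: a naive softmax overflows exp() for large logits and underflows for small ones, and float16 makes both far more likely. The standard remedy is the max-shift / log-sum-exp form, shown here as a generic NumPy sketch (not Chainer's kernel):

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    # -log softmax(logits)[label], computed via the log-sum-exp shift.
    z = logits - logits.max()          # shift: exp() can no longer overflow
    log_softmax = z - np.log(np.exp(z).sum())
    return -log_softmax[label]

loss = softmax_cross_entropy(np.array([1000.0, 0.0, -1000.0]), label=0)
# finite, even though exp(1000) would overflow any float dtype
```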

Code Fixes

  • Fix F632 (use == / != to compare str) (#6346)
  • Avoid # NOQA in docstrings (cont.) (#6356)
  • Fix comment style of op_utils.py (#6421)
  • Refactor chainerx::Linear (#6425)
  • Fix ResultTypeResolver multiple definitions (#6439)
  • Assert that input to array props formatter is a list or tuple (#6440)
  • Fix style of .clang-tidy (#6445)
  • Remove unnecessary AsContiguous in CudaConv::ConvGradWeight (#6520)
  • Remove commented out code from _BNMode (#6582)
  • Change the deprecated collections (#6645)
  • Remove obsolete assertions (#6648)
  • Allow ArrayBody::GetArrayNode to return null (#6658)
  • Make BackwardBuilder::Target less stateful (#6659)
  • Clean up test code (#6688)

Documentation

  • Write guides to implement new-style functions (#4986)
  • Fix typo (#6384, thanks @aksub99!)
  • Fix Sphinx markups in RNNs docs (#6412, thanks @crcrpar!)
  • Fix documentation of TimerHook (#6433, thanks @hitsgub!)
  • Refactor documentation of F.prelu (#6455, thanks @fiarabbit!)
  • Fix typo in docstring for classification_summary (#6515, thanks @yewang!)
  • Write TODOs to address Dot backward cast (#6537)
  • Override forward in LinkHook documentation (#6546, thanks @crcrpar!)
  • Remove duplicated entry in reference (#6571)
  • Fix F.rrelu documentation (#6581, thanks @fiarabbit!)
  • Add gradient_check.check_double_backward in reference (#6584)
  • Fix :meth: link (#6603, thanks @23pointsNorth!)
  • Update broken link in chainerx.md (#6610, thanks @kshitij12345!)
  • Improve docs and exception message in F.erfcx, F.erfcinv and F.erfinv (#6618)
  • Include a link to ChainerX product backlog (#6630)
  • Fix missing module declaration (#6662)
  • Fix chainer.backend.get_array_module documentation (#6663)
  • Fix typo: 'Notatition' -> 'Notation' (#6673, thanks @nai62!)
  • Fix test failures in FunctionNode implementation guide (#6734)

Installation

  • Environment variable to set ChainerX Python binding build type (#6647)
  • Check CMAKE_BUILD_TYPE (#6664)

Examples

  • Use args.out in train_cifar_custom_loop.py (#6378, thanks @crcrpar!)
  • Fix to use right device for DALI iterator in imagenet example (#6397)
  • Properly pass device ID to DALI pipelines in imagenet example (#6429)
  • Use __future__.division in imagenet example with Python2 (#6462)
  • Fix broken imagenet example (#6489)
  • Fix wavenet example to support the default dtype (#6536)
  • Use float division instead of __future__.division for Python2 (#6562)
  • Fix DCGAN example to work with default dtype (#6585)
  • Use F.matmul instead of F.batch_matmul in memnn example (#6611)
  • Remove unnecessary unchain_backward() in pix2pix example (#6634, thanks @hayato-maki!)
  • Fix file mode of mushrooms.csv (#6693)
  • Replace deprecated URLopener in download.py (#6694)

Tests

  • Test all codes in guides/functions.rst (#6194)
  • Test various spatial_scale for roi_average_pooling_2d (#6238, thanks @knorth55!)
  • Test simplifications
    • Simplify F.swish test (#6306, thanks @ishanrai05!)
    • Simplify F.log_softmax test (#6320, thanks @ishanrai05!)
    • Simplify F.softmax_cross_entropy test (#6363)
    • Simplify F.softmax test (#6371, thanks @aksub99!)
    • Simplify F.fliplr test (#6389, thanks @ishanrai05!)
    • Simplify F.flipud test (#6390, thanks @ishanrai05!)
    • Simplify F.moveaxis test (#6392, thanks @ishanrai05!)
    • Simplify F.pad test (#6393, thanks @ishanrai05!)
    • Simplify F.squared_difference test (#6395, thanks @aksub99!)
    • Simplify F.minimum test (#6396, thanks @aksub99!)
    • Simplify F.maximum test (#6400, thanks @aksub99!)
    • Simplify tests of F.convolution_2d and F.convolution_nd (#6406, thanks @crcrpar!)
    • Simplify F.rollaxis test (#6408, thanks @ishanrai05!)
    • Simplify F.vstack test (#6410, thanks @ishanrai05!)
    • Simplify F.transpose test (#6458, thanks @ishanrai05!)
    • Simplify F.tile test (#6459, thanks @ishanrai05!)
    • Simplify F.swapaxes test (#6460, thanks @ishanrai05!)
    • Simplify F.resize_image test. (#6464, thanks @ishanrai05!)
    • Simplify F.expand_dims test (#6473, thanks @ishanrai05!)
    • Simplify F.prod test (#6479, thanks @aksub99!)
    • Simplify F.squeeze test (#6487, thanks @ishanrai05!)
  • Fix examples/.gitignore (#6391, thanks @crcrpar!)
  • Suppress warning in caffe test (#6402)
  • Add ChainerX test to FunctionTestCases (#6416)
  • Remove SPHINXOPTS env from Makefile (#6417)
  • Rewrite ChainerX connection tests (#6424)
  • Fix regex in test_print_report (#6430)
  • Fix duplicated test (#6434)
  • Add strides check in NumpyOpTest (#6437)
  • Rewrite ChainerX indexing tests (#6438)
  • Add float16 and float64 to F.group_normalization test (#6468, thanks @crcrpar!)
  • Rewrite ChainerX linalg tests (#6469)
  • Fix F.pad test for Python2 (#6478)
  • Fix input of F.vstack to a list of ndarrays (#6494, thanks @crcrpar!)
  • Change pytest version requirement (#6502)
  • Force camel case class name for OpTest (#6507)
  • Test result dtype permutation (#6511)
  • Fix test class name (#6532)
  • Rewrite ChainerX batch_norm test (#6542)
  • Rewrite ChainerX sorting tests (#6550)
  • Rewrite ChainerX logic tests (#6551)
  • Rewrite ChainerX activation tests (#6553)
  • Rewrite ChainerX manipulation tests (#6556)
  • Rewrite ChainerX fixed_batch_norm test (#6558)
  • Rewrite ChainerX pooling tests (#6560)
  • Rewrite ChainerX arithmetics tests (#6566)
  • Rewrite ChainerX math tests (#6568)
  • Fix tolerance in chainerx.divide test (#6573)
  • Improve arithmetics tests (#6577)
  • Adjust tolerances of F.einsum tests (#6588)
  • Check grads of inputs to test backward of collective communication (#6589)
  • Avoid mutating FunctionTestBase class attributes (#6599)
  • Avoid mutating LinkTestCase and LinkInitializersTestCase class attributes (#6600)
  • Make op_test decorator remove the previous class (#6602)
  • Use compute_60 instead of compute_50 to run test on P100 (#6633)
  • Destroy NCCL communicator after every use (#6636)
  • Run ChainerX python tests in debug build (#6649)
  • Suppress numpy warnings in math tests (#6651)
  • Fix testing condition of BatchNormalizationMultiGpuTest (#6652)
  • Remove C++ routines tests (#6667)
  • Minimize the Travis CI matrix (#6677)
  • Fix conflicts between #6432 and #6486 (#6679)
  • Stop clang-tidy test in Travis CI (#6682)
  • Fix tolerance in TestConvTranspose (#6691)
  • Rewrite the rest of math tests (#6695)
  • Fix test failure in cuDNN v7.5 (#6710)
  • Fix F.convolution_nd test for flake8 (#6711)
  • Relax tolerances in convolution_nd function test (#6728)
chainer - v5.4.0

Published by niboshi over 5 years ago

This is the release note of v5.4.0. This is the final release of the v5.x series. See here for the complete list of solved issues and merged PRs.

Enhancements

  • Fix error messages in get_fans (#6413)
  • Change backward of F.log_ndtr to avoid NaN (#6431)
  • Avoid using pytest attributes during import (#6470)
  • Support cupy-cuda101 package (#6701)

Bug Fixes

  • Fix text_classification example failing on Python 3 (#5651, thanks @koreyou!)
  • Fix lazy init of BatchNormalization link (#6480)
  • Fix chainer.links.NStepRNN and its variants (#6517, thanks @crcrpar!)
  • Fix NCCL version check error in ChainerMN (#6504)

Code Fixes

  • Avoid # NOQA in docstrings (#6549)
  • Change the deprecated collections (#6676)
  • Fix F632 (use ==/!= to compare str) (#6714)

Documentation

  • Remove duplicated entry in reference (#6578)
  • Fix F.rrelu documentation (#6586, thanks @fiarabbit!)
  • Add gradient_check.check_double_backward in reference (#6587)
  • Override forward in LinkHook documentation (#6594, thanks @crcrpar!)
  • Fix :meth: link (#6614, thanks @23pointsNorth!)
  • Improve docs and exception message in F.erfcx, F.erfcinv and F.erfinv (#6632)
  • Fix missing module declaration (#6671)
  • Fix chainer.backend.get_array_module documentation (#6685)
  • Fix typo: 'Notatition' -> 'Notation' (#6686, thanks @nai62!)
  • Fixes typo in docstring for classification_summary (#6697, thanks @yewang!)
  • Write guides to implement new-style functions (#6730)

Examples

  • Fix dali_util in imagenet example for fp16 (#6377, thanks @anaruse!)
  • Use args.out in train_cifar_custom_loop.py (#6411, thanks @crcrpar!)
  • Remove FP16 specific models from imagenet example (#6564)
  • Fix iterator syntax in MNIST custom loop example (#6565)
  • Use float division instead of __future__.division for Python2 (#6567)
  • Fix DCGAN example to work with default dtype (#6591)
  • Use F.matmul instead of F.batch_matmul in memnn example (#6631)

Tests

  • Do not ignore FutureWarning other than experimental features (#6052)
  • Suppress warning in caffe test (#6409)
  • Test all codes in guides/functions (#6428)
  • Remove SPHINXOPTS env from Makefile (#6491)
  • Fix Python 3.4 NumPy Accelerate polyfit error (#6495)
  • Change pytest version requirement (#6513)
  • Adjust tolerances of F.einsum tests (#6672)
  • Fix test failure in cuDNN v7.5 (#6716)
chainer - v6.0.0b3

Published by beam2d over 5 years ago

This is the release note of v6.0.0b3. See here for the complete list of solved issues and merged PRs.

Highlights

  • Spectral Normalization is supported as a link hook
  • Kuzushiji-MNIST dataset is now available at chainer.datasets

Changes without compatibility

  • Raise NotImplementedError if Extension.__call__ is not overridden (#6095)
  • Fix get_retained_{in/out}puts to return None for None inputs/outputs (#6121)
  • Rename chainerx -> chx in public API (#6312)

New Features

  • Unchain all variables after running extensions (#5539, thanks @hitsgub!)
  • Add spectral normalization link hook (#5742, thanks @crcrpar!)
  • Add non-deterministic warning (#5977)
  • Add finished property to once_trigger (#6023, thanks @hitsgub!)
  • Call Iterator.finalize from __del__ and __exit__ (#6098)
  • Add dilate argument to L.Deconvolution2D (#6175, thanks @crcrpar!)
  • Add create_mnbn_model (#6245)
  • Add option align_units to TimerHook.print_report() (#6254, thanks @hitsgub!)
  • Add Kuzushiji-MNIST dataset (#6295, thanks @wintercarver!)
  • Add synchronized iterator (#6345)
  • Converter decorator for ChainerX device support (#5832)
  • Add ChainerX CUDA float16 (#5845)
  • chainerx.ndarray.item (#6050)
  • chainerx.grad Python binding (#6063)
  • Unwrap ChainerX connected array from Variable (#6284)
  • chainerx::ResultType (#6347)
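Among the items above, #6023 adds a finished property to the once-style trigger (introduced as #5565 in an earlier beta). The contract can be sketched as a toy class (this is an illustrative sketch, not Chainer's actual chainer.training.triggers implementation):

```python
class OnceTrigger:
    """Fires on the first call only; `finished` reports whether it fired.

    Toy sketch of the once-trigger contract, not Chainer's actual class.
    """
    def __init__(self):
        self._done = False

    @property
    def finished(self):
        return self._done

    def __call__(self, trainer=None):
        fire = not self._done
        self._done = True
        return fire

t = OnceTrigger()
fired = [t(None) for _ in range(3)]   # [True, False, False]
```

Once finished is True, a trainer can skip invoking the associated extension entirely instead of calling the trigger again.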

Enhancements

  • Unify arguments of file names (#5357, thanks @crcrpar!)
  • Support spatial_scale >= 1.0 in roi_average_align_2d.py (#5634, thanks @knorth55!)
  • Support spatial_scale >= 1.0 in F.roi_max_align_2d (#5635, thanks @knorth55!)
  • Fix pseudo_connect with None input (#5652)
  • Enforce Link.__init__ in subclasses (#5927)
  • Add sequence and numpy array indices support to ndarray.take (#6081)
  • Reduce memory usage in MultiprocessParallelUpdater (#6100)
  • Fix get_retained_{in/out}puts to return None for None inputs/outputs (#6121)
  • Check input size consistency in RNN and LSTM when using cuDNN (#6169)
  • Add support for importing and exporting Caffe Sigmoid layer (#6234, thanks @notogawa!)
  • Add group option value of Convolution2D to Caffe exporter (#6241, thanks @ohnabe!)
  • Improve errors for disabled Variable operators (#6255)
  • DimsFormatter to print a list of dimensions (#6064)
  • Support FunctionNode None inputs in ChainerX (#6122)
  • Fix ChainerX fallback for replaced optimizer state (#6218)
  • Use FMA in NativeDevice::Dot (#6227)
  • Use float accumulation in ChainerX float16 Dot (#6246)
  • Make Chainer backprop modes affect ChainerX counterparts (#6278)
  • Support ChainerX TrueDivide for integer types (#6281)
  • Rename chainerx -> chx in public API (#6312)
  • Improve accuracy of ChainerX native float16 Sum (#6313)

Performance Improvements

  • Optimize Variable.xp to avoid creation of Device instance (#6016)
  • Add Variable._init_unchecked() static method for faster instantiation (#6033)
  • Avoid contextmanager in backprop (#6264)
  • Improve F.relu performance with CuPy (#6268)
  • Improve get_variable performance (#6269)
  • Pass debug flag to backprop_step (#6286)
  • Improve hook handling in backward (#6289)
  • Improve performance of using_config (#6290)
  • Reduce chainer.is_debug() overhead (#6291)
  • Improve performance of using_device for NumPy and Intel64 devices (#6292)
  • Support NumPy integers in chainerx.ndarray.__getitem__ (#5989)

Bug Fixes

  • Make signs generated by initializers.Orthogonal unbiased (#5615)
  • Use ideep in optimizers properly (#5985)
  • Fix warning message for backward on a scalar array (#6026)
  • Validate {Max,Average}Pool kernel_size and stride (#6066)
  • Validate Conv, ConvTranspose stride (#6067)
  • Fix cupy import failure detection (#6085)
  • Fix memory leak during backprop in Python 2 (#6105)
  • Fix FunctionNode.get_retained_outputs to return () if no output is retained (#6118)
  • Do not compare xp with numpy for cupy code path (#6126)
  • CuPy cannot be enabled when cuDNN is unavailable (#6138)
  • Fix double-backprop of F.rrelu (#6139)
  • Check Array constructor for nullptr (#6156)
  • Do not compare xp with numpy for cupy code path (cont.) (#6159)
  • Fix type of internally held grad after Parameter.to_device (#6170)
  • Fix Optimizer to convert state arrays back to ChainerX (#6171)
  • Fix error message of parameterized test (#6287)
  • Add Device.__ne__ for Python 2 (#6335)
  • Fix pickling of ChainerX link (#5988)
  • Fix thread safety of CUDA memory pool FreeUnusedBlocks (#5992)
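Two of the fixes above (#6066, #6067) add validation that pooling and convolution kernel sizes and strides are positive (the v6.1.0 notes add the same check for dilation). The formula these checks protect is the standard output-size rule; below is a sketch modeled on chainer.utils.get_conv_outsize, with a helper name of our own:

```python
def conv_out_size(size, k, s, p, d=1, cover_all=False):
    """Output length along one axis of a convolution/pooling.

    size: input length, k: kernel size, s: stride, p: padding,
    d: dilation.  A non-positive stride or dilation makes the
    formula meaningless, hence the validation added in the fixes.
    """
    if k <= 0 or s <= 0 or d <= 0:
        raise ValueError('kernel size, stride and dilation must be positive')
    dk = d * (k - 1) + 1                      # effective (dilated) kernel size
    if cover_all:                             # cover_all rounds up, not down
        return (size + p * 2 - dk + s - 1) // s + 1
    return (size + p * 2 - dk) // s + 1

out1 = conv_out_size(28, 5, 1, 0)   # 5x5 kernel on 28 px -> 24
out2 = conv_out_size(32, 3, 2, 1)   # stride-2 conv halves the size -> 16
```

Without the positivity check, a zero stride would raise an obscure ZeroDivisionError (or silently loop) deep inside the implementation instead of a clear error at the call site.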

Code Fixes

  • Fix import order (#6128)
  • Simplify _check_grad_type (#6213)
  • Cosmetic fix to test_gradient_check (#6271)
  • Fix inappropriate usage of is_arrays_compatible (#6274)
  • Use utils.size_of_shape in F.convolution_nd and F.deconvolution_nd (#6329)
  • Use single quotes (#6352)
  • Simplify _array_to_gpu with stream argument (#6358)
  • Add NOLINT to reinterpret_cast (#6051)
  • Wrap platform specific operations and reduce macro usage (#6054)
  • Use py::isinstance to check types (#6083)
  • Use _has_chainerx_array in Variable (#6214)
  • Write comment about CHAINERX_VISIBILITY_HIDDEN (#6231)
  • Fix clang-tidy errors (#6267)

Documentation

  • Make docs of functions refer ndarray (#6042)
  • Fix typo in classifier.py (#6090, thanks @hiden-cubist!)
  • Document NumPy 1.16 support (#6111)
  • Remove anchor to non-existing section (#6130)
  • Reorganize documentation for easier access to tutorials and examples (#6142)
  • Fix old and broken PTB url (#6177)
  • Add imports of initializers and math, required in "Define your own function" examples (#6179, thanks @Qwinpin!)
  • Update URL of PTB dataset (#6182)
  • Add upgrade guide for use of Link.forward method (#6183)
  • Avoid # NOQA in docstrings (#6184)
  • Add FunctionTestCase to documentation (#6189)
  • Add references for n-dimensional arrays (#6219)
  • Imagenet README.md typo (#6223)
  • Update docs for Python 3.4 end-of-life (#6300)
  • Remove duplicate periods in Installation section of README.md (#6339, thanks @crcrpar!)
  • Avoid # NOQA in docstrings (#6355)
  • Fix ChainerMN Step-by-Step Troubleshooting (#6328)
  • Document chainermn.links.create_mnbn_model (#6360)
  • Document ChainerX op test tool (#6354)

Installation

  • Remove bad brew option from Travis CI (#6202)
  • Upgrade clang-tidy to 6.0 (#6062)
  • Use CMAKE_CURRENT_BINARY_DIR in CMakeLists.txt (#6114)
  • Set CMake policy in a proper way (#6166)
  • Make chainerx compiled on Windows (#6176, thanks @durswd!)

Examples

  • Fix seq2seq example (#6091)
  • Fix iterator syntax in MNIST custom loop example (#6099)
  • Fix seq2seq example encoding problem on Python3 (#6205)
  • Minor fix on README of seq2seq example (#6206)
  • Remove FP16 specific models from imagenet example (#6215)
  • Remove PrintReport entries in seq2seq example (#6308)
  • Fix dali_util in imagenet example for fp16 (#6342, thanks @anaruse!)
  • ChainerX seq2seq example (#5830)
  • Fix ChainerX train_mnist.py example for NumPy 1.16 (#5999, thanks @Guriido!)
  • Fix to check chainerx device in ImageNet example (#6280)

Tests

  • Simplify F.batch_renormalization test (#5817)
  • Simplify F.mean_squared_error test (#5822)
  • Simplify F.concat test (#5823)
  • Add Windows matrix in Travis CI (#5888)
  • Limit the length of parameterized test class name (#6060)
  • Simplify F.crelu and F.elu test (#6070)
  • Fix Travis CI ignoring non-last command errors in each step (#6082)
  • Fix chainermn tests (#6048)
  • Remove Travis macOS Py34 job (#6107)
  • Remove unused test step (#6123)
  • Move Jenkins mypy check to misc matrix (#6124)
  • Fix filtering FutureWarning (#6135)
  • Fix tolerance and numeric grad precision in F.triplet test (#6136)
  • Remove Travis Ubuntu Py34 job (#6149)
  • Remove commented-out Py34 matrix from AppVeyor (#6160)
  • Fix unit test collection timeout (#6164)
  • Add x_dtype and W_dtype to the if statement of FunctionTestCase._skip_if_chainerx_float16 (#6167, thanks @crcrpar!)
  • Stop mypy in CIs (#6172)
  • Simplify F.tanh test (#6173, thanks @crcrpar!)
  • Simplify F.sigmoid test (#6174, thanks @crcrpar!)
  • Simplify F.hard_sigmoid test (#6192, thanks @crcrpar!)
  • Rewrite the tests of F.average_pooling_2d (#6211, thanks @crcrpar!)
  • Rewrite linear function test (#6236, thanks @crcrpar!)
  • Simplify F.selu test (#6243, thanks @aksub99!)
  • Simplify F.softplus test (#6298, thanks @ishanrai05!)
  • Simplify F.leaky_relu test (#6301, thanks @aksub99!)
  • Simplify F.maxout test (#6302, thanks @aksub99!)
  • Simplify F.sum test (#6307, thanks @aksub99!)
  • Improve accuracy of test of F.rrelu (#6318)
  • Simplify F.diagonal test (#6322, thanks @ishanrai05!)
  • Write test types in Travis CI job names (#6361)
  • Check CUDA device after each test case of chainerx_tests (#6049)
  • Skip ChainerX float16 tests when FunctionTestCase is used (#6069)
  • Remove legacy CHAINERX_CUDA_MULTITHREAD_TEST_SEGV_WORKAROUND from Jenkins script (#6108)
  • Run ChainerX python tests in Travis CI (#6109)
  • Enable ChainerX C++ test in Travis CI (#6110)
  • ChainerX test tool for ops (#6248)
  • Use Chainer-style parameterization in ChainerX op test (#6334)
chainer - v5.3.0

Published by hvy over 5 years ago

This is the release note of v5.3.0. See here for the complete list of solved issues and merged PRs.

Enhancements

  • Reduce memory usage in MultiprocessParallelUpdater (#6113)
  • Check input size consistency in RNN and LSTM when using cuDNN (#6186)
  • Add group option value of Convolution2D to Caffe exporter (#6293, thanks @ohnabe!)
  • Add support for importing and exporting Caffe Sigmoid layer (#6294, thanks @notogawa!)

Performance Improvements

  • Improve F.relu performance with CuPy (#6270)
  • Reduce chainer.is_debug() overhead (#6297)

Bug Fixes

  • Bugfix of MultiNodeOptimizer with loss scaling (#5783)
  • Fix BN+F.forget (#6076)
  • Fix cupy import failure detection (#6112)
  • Fix memory leak during backprop in Python 2 (#6125)
  • Use ideep in optimizers properly (#6143)
  • Fix dump_graph not to leak memory (#6147, thanks @hitsgub!)
  • Fix warning message for backward on a scalar array (#6319)

Documentation

  • Fix wrong MNIST MLP anchor (#6055)
  • Fix document in NStepLSTM/NStepRNN (#6074)
  • Fix typo in classifier.py (#6102, thanks @hiden-cubist!)
  • Document NumPy 1.16 support (#6141)
  • Reorganize documentation for easier access to tutorials and examples (#6152)
  • Fix old and broken PTB url (#6180)
  • Add upgrade guide for use of forward method (#6193)
  • Add imports of initializers and math, required in "Define your own function" examples (#6220, thanks @Qwinpin!)
  • Add references for n-dimensional arrays (#6221)
  • Imagenet README.md typo (#6224)
  • Update URL of PTB dataset (#6239)
  • Make docs of functions refer ndarray (#6288)

Examples

  • Refactor train_mnist_dual_parallel.py (#5716)
  • Fix seq2seq example (#6093)
  • Minor fix on README of seq2seq example (#6208)
  • Fix seq2seq example encoding problem on Python3 (#6209)
  • Remove PrintReport entries in seq2seq example (#6321)

Tests

  • Fix tolerance and numeric grad precision in F.triplet test (#6144)
  • Fix chainermn tests (#6086)
chainer - v6.0.0b2

Published by niboshi over 5 years ago

This is the release note of v6.0.0b2. See here for the complete list of solved issues and merged PRs.

New Features

  • Asynchronous snapshot writers (#4472, thanks @tyohei!)
  • Add D.Cauchy (#5337)
  • Add D.Geometric (#5343)
  • Add cached_property decorator (#5416)
  • Make build_computational_graph accept single output (#5445)
  • Add trigger to be fired only once (#5565, thanks @hitsgub!)
  • Use default dtype in L.NegativeSampling (#5664)
  • Add optional property finished to trigger object (#5681, thanks @hitsgub!)
  • Support all float dtypes in F.spatial_transformer_sampler (#5751)
  • Add a naive TimerHook link hook. (#5842, thanks @crcrpar!)
  • Add F.as_strided (#5902, thanks @fiarabbit!)
  • Add 'mean' value as an option for VAE loss reduce (#5966, thanks @23pointsNorth!)
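The cached_property decorator added in #5416 follows a well-known descriptor pattern: compute the value once, then store it in the instance __dict__ under the same name so later lookups bypass the descriptor entirely. A minimal sketch of that pattern (not Chainer's exact implementation):

```python
class cached_property:
    """Compute a property once, then cache it on the instance."""
    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __get__(self, obj, cls=None):
        if obj is None:
            return self
        # Storing under the same name shadows this (non-data) descriptor,
        # so subsequent accesses hit the instance dict directly.
        value = obj.__dict__[self.func.__name__] = self.func(obj)
        return value

class Circle:
    def __init__(self, r):
        self.r = r
        self.calls = 0

    @cached_property
    def area(self):
        self.calls += 1
        return 3.14159 * self.r ** 2

c = Circle(2.0)
c.area, c.area          # the body runs only once
```

The trick relies on non-data descriptors having lower priority than instance attributes, so the caching costs nothing after the first access.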

Enhancements

  • Support inputs with ndim != 2 for F.huber_loss (#5534)
  • Show forward stacktrace in backward (#5603)
  • Add type check for r arg of F.rrelu (#5619)
  • Support unretained Variables in _check_grad_type (#5640)
  • FunctionNode automatic fallback of array attributes in forward (#5745)
  • Switch device during gradient_check (#5777)
  • Raise CuPy not available error early in cuda.GpuDevice initialization (#5780)
  • Add hasattr check to user-specified flush call to file-like objects. (#5794, thanks @grafi-tt!)
  • Support custom initializer in links.CRF1d (#5807, thanks @himkt!)
  • Remove F.clip type restriction (#5813)
  • Batched pack/unpack params before/after allreduce (#5829, thanks @anaruse!)
  • Remove unnecessary cast in F.huber_loss (#5835)
  • Reimplement F.LocalResponseNormalization as FunctionNode (#5851)
  • Stop managing memory in max pooling specific manner (#5861)
  • Do not retain input on iDeep F.relu (#5871, thanks @grafi-tt!)
  • Set grad of F.clip 1 at x_min and x_max (#5876, thanks @grafi-tt!)
  • Warn if reset method is not implemented in an iterator (#5882)
  • Cache attributes of distributions (#5892)
  • Use FunctionNode on ROIPooling2D (#5957)
  • Use more precise timer in function_hooks/timer.py (#5971, thanks @crcrpar!)
  • Improve F.elu memory consumption by retaining output (#5972, thanks @grafi-tt!)
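One item above (#5876) sets the gradient of F.clip to 1 not only strictly inside [x_min, x_max] but also at the boundaries. In NumPy terms the chosen subgradient can be sketched as follows (an illustrative sketch of the boundary convention, not Chainer's code):

```python
import numpy as np

def clip_grad(x, x_min, x_max):
    # 1 inside the interval AND at x_min/x_max, 0 strictly outside.
    # At a boundary any value in [0, 1] is a valid subgradient;
    # the fix picks 1 so gradients do not vanish at the clip edges.
    return ((x_min <= x) & (x <= x_max)).astype(x.dtype)

g = clip_grad(np.array([-2.0, -1.0, 0.0, 1.0, 2.0]), -1.0, 1.0)
```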

Bug Fixes

  • Fix dump_graph not to leak memory (#5538, thanks @hitsgub!)
  • Fix F.batch_normalization + F.forget combination (#5557)
  • Bugfix of MultiNodeOptimizer with loss scaling (#5659)
  • Fix usage of downsample_fb in resnet (#5737, thanks @milhidaka!)
  • Fix device argument passed to MultiprocessParallelUpdater being modified (#5739, thanks @Guriido!)
  • Fix bug when CuPy not installed and cuda.fuse decorator used without parentheses (#5809, thanks @grafi-tt!)
  • Fix F.cast gradient for casts between the same dtypes (#5811)
  • Accept splitting at the tail of dataset in split_dataset (#5895)
  • Fix broken F.leaky_relu grad when slope = 0 (#5898, thanks @grafi-tt!)
  • Add copyparams method to Sequential (#5914)
  • Override _to_device for consistency (#5948)
  • Allow import chainer.testing without pytest (#5973)
  • Raise an appropriate error on cuDNN RNN backward in testing mode (#5981)
  • Fix stochastic failure in WalkerAlias (#6057)

Documentation

  • Remove deprecation notices for v1 and v2 in documentation (#5081)
  • Add description for initializer dtype (#5246)
  • Add Code of Conduct (#5629)
  • Improve installation guide of ChainerMN (#5656)
  • Add explanations for LeNet5 (#5686)
  • Make docs of activation functions refer ndarray (#5718)
  • Add robots.txt to hide older versions from search results (#5768)
  • Fix typo in v2 Upgrade Guide (#5771)
  • Fix a couple of broken links from markdown files (#5789)
  • Model Parallel Documentation (#5791, thanks @levelfour!)
  • Fix wording in documentation (#5795)
  • Write "Wx + b" in the document of Linear. (#5852)
  • Make docs of array functions refer ndarray (#5863)
  • Some small fixes to grammar and spelling (#5869)
  • Make docs of connection functions refer ndarray (#5875)
  • Fix static_graph module path in documentation (#5883)
  • Correct the stable version in master branch (#5891, thanks @jinjiren!)
  • Change .data to .array in Guides and Examples docs (#5907, thanks @jinjiren!)
  • Fix typo (#5915, thanks @MannyKayy!)
  • Transform dataset documentation fix (#5938, thanks @23pointsNorth!)
  • Fix typo (#5942)
  • Update the note in DCGAN example to be compatible with the code. (#5951, thanks @jinjiren!)
  • Fix doc of F.softmax_cross_entropy on output shape with reduce=no (#5965)
  • Make some docs of functions refer ndarray (#5975)
  • Fix document in NStepLSTM/NStepRNN (#5979)
  • Make docs of math functions refer ndarray (#6032)
  • Fix wrong MNIST MLP anchor (#6046)

Installation

  • Check integrity of CuPy wheel for CUDA 10 (#5955)

Examples

  • Add inference code to MNIST example (#4741)
  • Use iter.reset() in PTB example (#5834)
  • Some small improvements to the Mushrooms example (#5982)

Tests

  • FunctionTestCase for function tests (#3499)
  • Test statistics of initializers (#5511)
  • Add test mode to text classification example (#5666)
  • Fix test of F.connectionist_temporal_classification (#5727)
  • Refactor tests of F.split_axis and F.concat (#5733)
  • Return exitcode of make html to Travis (#5769)
  • Fix testing.BackendConfig context for repeated use (#5779)
  • Encode parameters in parameterized class name (#5782)
  • Add test for conveter device argument in Evaluator (#5806)
  • Fix error message of testing.assert_allclose (#5814)
  • Refactor CI scripts (#5858)
  • Refactor Travis script (#5859)
  • Remove some CI requirements (#5865)
  • Allow multiple application of testing.parameterize (#5893)
  • Allow mixing testing.inject_backend_tests and testing.parameterize (#5904)
  • Adjust testing tolerance of numerical gradient (#5923)
  • Adjust testing tolerance of F.connectionist_temporal_classification (#5928)
  • Do not ignore FutureWarning other than experimental features (#5949)
  • Move mypy to static checks (#5987)
  • Skip test on Theano<=1.0.3 and NumPy>=1.16.0 (#6001)
  • Fix travis script to continue on failure in each step (#6002)
  • Fix inject_backend_tests multi_gpu test mark (#6028)
  • Allow doctest to run in single-GPU environment (#6029)
  • Test if the default CUDA device keeps being 0 after each test (#6044)

ChainerX

  • Add ChainerX native float16 (#5761)
  • CuPy/ChainerX memory pool sharing (#5821)
  • Automatic ChainerX fallback of array attributes in Function (#5828)
  • ChainerX backward w.r.t. inputs (C++ chainerx.grad ) (#5747)
  • Improve gradient mismatch error (#5748)
  • Forbid fallback get/setitem for arrays with backprop required (#5754)
  • Implement BFC algorithm in ChainerX CUDA memory pool (#5760)
  • Resolve _as_noncontiguous_array workaround for ChainerX (#5781)
  • L.NegativeSampling ChainerX support (#5816)
  • Stop using Unified Memory by default (#5912)
  • Avoid cudaMemcpyAsync for pinned memory for faster host-to-device transfer (#5940)
  • Remove chainerx.asscalar (#6007)
  • Fix scalar handling of indices_and_sections in chainerx.split (#5788)
  • Fix ChainerX Python docstring allocation issue (#5815)
  • Fix chainerx.maximum to restore CUDA device (#6043)
  • Build ChainerX on ReadTheDocs (#5766)
  • Add chainerx.ndarray to the ndarray doc (#5864)
  • Document CuPy memory pool sharing (#6017)
  • Do not overwrite user-specified CMAKE_CXX_FLAGS (#5770)
  • Patch files for macOS (#5776, thanks @ktnyt!)
  • Update pybind dependency to v2.2.4 (#5798)
  • Update gsl-lite to v0.32.0 (#5849)
  • Enable ChainerX in docker image (#5879)
  • Update third-party.cmake to follow the recent way (#5911)
  • Made ChainerX setup and compile on Windows (#5932, thanks @durswd!)
  • Fix visibility for pybind exception registration for macOS (#5936)
  • Fix manifest typos (#6065)
  • ChainerX MNIST C++ example (#5746)
  • Remove some TODOs of the chainerx resnet example (#5775)
  • Fix jenkins script to allow explicit repo root (#5774)
  • Fix to test against new chainerx.GradientError (#5787)
  • Add Travis matrix for macOS ChainerX tests (#5846)
  • Remove .circleci (#5860)
  • Add C++ linter checks in Travis CI (#5867)
  • Fix FixedCapacityDummyAllocator in CUDA memory pool test (#5993)
  • Fix CUDA specific Python binding (#6037)
  • Add chainerx-generated reference docs to .gitignore (#5805, thanks @knorth55!)
  • Disable clang-tidy modernize-use-auto (#5839)
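One fix above (#5788) concerns the scalar form of indices_and_sections in chainerx.split, an argument that mirrors numpy.split: an integer means "this many equal sections", while a sequence means "split at these positions". The NumPy behaviour it follows:

```python
import numpy as np

a = np.arange(9.0)
sections = np.split(a, 3)        # int: 3 equal parts of length 3 each
positions = np.split(a, [2, 5])  # sequence: a[:2], a[2:5], a[5:]
lengths = [len(p) for p in positions]   # [2, 3, 4]
```

The two forms are easy to confuse because a 0-d array or Python scalar must be routed to the first branch, which is exactly the kind of handling the fix addresses.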

Code Fixes

  • Simplify batch normalization with cuDNN (#5568)
  • Add type hints for Link, LinkHook, Initializer and ChainerX (#5675)
  • Refactor gradient setter in gradient_check (#5699)
  • Use new RNN implementation (#5726)
  • Backprop from multiple variables (#5741)
  • Fixes for clang (#5744)
  • Improve coding style (#5763)
  • Fix style of setup.py (#5764)
  • Code enhancements: avoid array copies (#5800)
  • Random code enhancements (#5801)
  • Add comment to MultiprocessIterator.__copy__ (#5833)
  • Move workaround utils._getitem/_setitem to chainerx (#5840)
  • Fix clang-tidy error (#5870)
  • Fix typo on internal attribute (#5894)
  • Fix clang-tidy warnings on clang-tidy 6 (#5901)
  • Fix for clang-tidy 7 (#5933)
  • Fix code formatting (#5941)
  • Remove @overload annotations outside the stub files (#5960)
  • Avoid deprecated numpy.asscalar (#5994)
  • Post macro comment for consistency (#6014)
  • Remove chainerx.asscalar from mypy stub file (#6024)

Others

  • Fix .gitignore to avoid ignoring some necessary files (#5836)
  • Allow skipping linkcode in docs with environment variable (#5868)
chainer - v5.2.0

Published by mitmul over 5 years ago

This is the release note of v5.2.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Support default dtype in L.BinaryHierarchicalSoftmax (#5714)
  • Support all float dtypes in F.embed_id (#5926)
  • Support all float dtypes in F.spatial_transformer_sampler (#6003)
  • Support all float dtypes in F.connectionist_temporal_classification (#6011)
  • Support all float dtypes in F.det and F.inv (#6012)
  • Use default dtype in L.NegativeSampling (#6013)
  • Introduce utils.mixed_precision decorator (#6022)
  • Add a naive TimerHook link hook (#6038, thanks @crcrpar!)

Enhancements

  • Change Link.add_hook to return self (#5750, thanks @crcrpar!)
  • Add hasattr check to user-specified flush call to file-like objects (#5803, thanks @grafi-tt!)
  • Support unretained Variables in _check_grad_type (#5826)
  • Use new RNN implementation (#5827)
  • Simplify batch normalization with cuDNN (#5853)
  • Reimplement F.LocalResponseNormalization as FunctionNode (#5900)
  • Support custom initializer in links.CRF1d (#5905, thanks @himkt!)
  • Use FunctionNode on ROIPooling2D (#5967)
  • Fix error message of testing.assert_allclose (#5984)
  • Use more precise timer in function_hooks/timer.py (#6021, thanks @crcrpar!)

Bug Fixes

  • Fix BatchNormalization with lazy initialization fail on GPU (#5713, thanks @koreyou!)
  • Fix device argument passed to MultiprocessParallelUpdater being modified (#5790, thanks @Guriido!)
  • Fix F.cast gradient for casts between the same dtypes (#5818)
  • Fix bug when CuPy not installed and cuda.fuse decorator used without parentheses (#5825, thanks @grafi-tt!)
  • Fix usage of downsample_fb in resnet (#5850, thanks @milhidaka!)
  • Accept splitting at the tail of dataset in split_dataset (#5899)
  • Fix broken F.leaky_relu grad when slope = 0 (#5922, thanks @grafi-tt!)
  • Raise an appropriate error on cuDNN RNN backward in testing mode (#5983)
  • Add copyparams method to Sequential (#5990)
  • Allow import chainer.testing without pytest (#5998)
  • Fix .gitignore to avoid ignoring some necessary files (#5838)

Documentation

  • Fix image URL in README (#5755, thanks @levelfour!)
  • Fix typo in v2 Upgrade Guide (#5772)
  • Fix a couple of broken links from markdown files (#5792)
  • Fix wording in documentation (#5820)
  • Make docs of activation functions refer ndarray (#5831)
  • Model Parallel Documentation (#5843, thanks @levelfour!)
  • Add explanations for lenet5 (#5855)
  • Add description for initializer dtype (#5872)
  • Add Code of Conduct (#5873)
  • Make docs of array functions refer ndarray (#5881)
  • [v5] Document optional arguments as None (#5886)
  • Make docs of connection functions refer ndarray (#5889)
  • Fix static_graph module path in documentation (#5906)
  • Change .data to .array in Guides and Examples docs (#5913, thanks @jinjiren!)
  • Fix typo (#5917, thanks @MannyKayy!)
  • Write "Wx + b" in the document of Linear. (#5919)
  • Improve installation guide of ChainerMN (#5937)
  • Transform dataset documentation fix (#5947, thanks @23pointsNorth!)
  • Update the note in DCGAN example to be compatible with the code. (#5962, thanks @jinjiren!)
  • Fix doc of F.softmax_cross_entropy on output shape with reduce=no (#5969)
  • Make some docs of functions refer ndarray (#5976)
  • Make docs of math functions refer ndarray (#6034)

Installation

  • Check integrity of CuPy wheel for CUDA 10 (#5956)

Examples

  • Use iter.reset() in PTB example (#5857)

Tests

  • Add test mode to text classification example (#5784)
  • Adjust testing tolerance of numerical gradient (#5946)
  • Test statistics of initializers (#5961)
  • Fix pytest plugin version (#5968)
  • Adjust testing tolerance of F.connectionist_temporal_classification (#6035)
  • Test if the default CUDA device keeps being 0 after each test (#6047)
chainer - v5.1.0

Published by niboshi almost 6 years ago

This is the release note of v5.1.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Added support for float dtypes in some functions
    • F.negative_sampling (#5593)
    • F.scatter_add and F.get_item (#5594)
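F.scatter_add (#5594) accumulates duplicate indices in the style of NumPy's unbuffered np.add.at, unlike fancy-index assignment where the last write wins. A sketch of the NumPy behaviour it builds on:

```python
import numpy as np

a = np.zeros(5)
np.add.at(a, [1, 1, 3], [1.0, 2.0, 5.0])   # index 1 receives 1.0 + 2.0

b = np.zeros(5)
b[[1, 1, 3]] += np.array([1.0, 2.0, 5.0])  # buffered: index 1 keeps only 2.0
```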

Enhancements

  • Avoid unnecessary copy in ndarray.astype (#5623)
  • Add compute_stream argument in ConcatWithAsyncTransfer to allow more overlap between computation and transfer in CUDA (#5684, thanks @anaruse!)
  • Add gradient consistency checks in numerical_grad (#5705)
  • Code enhancements
    • Avoid cuDNN handle around DropoutStates (#5644)
    • Import testing/backend.py definitions in testing/__init__.py (#5639)
    • Simplify pooling and softmax with cuDNN (#5637, #5672)
    • More consistent use of Variable.array in codes under links (#5689, thanks @crcrpar!)
    • Use automatic broadcasting instead of F.repeat (#5708)
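The last enhancement above (#5708) replaces F.repeat with automatic broadcasting. In NumPy terms the two forms below compute the same result, but the broadcast version never materializes the repeated array:

```python
import numpy as np

x = np.arange(6.0).reshape(2, 3)
scale = np.array([10.0, 20.0])

y_repeat = x * np.repeat(scale[:, None], 3, axis=1)  # explicit copy of scale
y_bcast = x * scale[:, None]                         # broadcast, no copy
```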

Bug Fixes

  • Fix D.Uniform.log_prob to avoid returning -inf at boundary (#5550)
  • Fix reporter.Summary float value deserialization (#5584)
  • Fix F.negative_sampling output dtype in CPU mode (#5625)

Documentation

  • Add ChainerMN paper to references (#5583)
  • Fix docstring of F.forget (#5588, thanks @fiarabbit!)
  • Fix typo in updaters (#5598, thanks @okayu9!)
  • Update documentation for iDeep constraints (#5601)
  • Fix the method name in the extension guide (#5605, thanks @lehy!)
  • Update F.roi_average_align_2d doc to refer wrapper function (#5617, thanks @knorth55!)
  • Update installation guide of numpy with openblas on macOS (#5630)
  • Fix a typo in Chain example code (#5655)
  • Fix a typo in chainer.distributions documentation (#5661)
  • Fix typo in L.ResNetLayers (#5667, thanks @takaaki82!)
  • Minor typo correction (in docs/variables). (#5671, thanks @grigorisg9gr!)
  • Add links to ChainerCV documentation (#5677)
  • Fix typo in docstrings (#5679)
  • Add documentation of ndarray (#5704)
  • Fix docs for backprop_step (#5710)
  • Make docs in chainer.distributions refer to ndarray (#5719)

Examples

  • Use SerialIterator in train_mnist_custom_loop.py (#5544)

Test

  • Fix test warnings in NumPy 1.15 (#5599)
  • Fix test of F.rrelu (#5673)
  • Ignore h5py warning in Python 3.7 (#5694)
  • Fix regex of protobuf modules warned by Python 3.7 (#5711)

Others

  • Update style check tools to the versions compatible with pycodestyle 2.4 (#5715)
chainer - v6.0.0b1

Published by hvy almost 6 years ago

This is the release note of v6.0.0b1. See here for the complete list of solved issues and merged PRs.

Highlights

ChainerX

ChainerX is an ndarray implementation with Define-by-Run automatic differentiation capability. It roughly corresponds to "NumPy/CuPy + Chainer Variable", with the following additional features:

  • Speed: The whole ndarray and autograd implementation is written in C++, with a thin Python binding. This reduces the overhead present in the pure Python implementation of Chainer.
  • Extensibility: The backend is pluggable, so adding support for new devices is much easier.

The best speed is achieved by using the ChainerX APIs directly, but a compatibility layer through the conventional Variable interface is also provided for easier adoption of ChainerX in existing projects.
See the ChainerX Tutorial for more details and concrete examples.
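To make the "NumPy/CuPy + Chainer Variable" correspondence concrete, here is a minimal plain-Python sketch of Define-by-Run autograd — the model that ChainerX implements in C++. The `Var` class and its methods are illustrative only, not ChainerX APIs:

```python
# Minimal Define-by-Run autograd sketch (illustrative; not ChainerX code).
# The graph is recorded while the computation runs, then walked backwards.

class Var:
    def __init__(self, value, parents=(), grad_fns=()):
        self.value = value        # the ndarray-like payload
        self.parents = parents    # input nodes that produced this node
        self.grad_fns = grad_fns  # one backward rule per parent
        self.grad = 0.0

    def __mul__(self, other):
        return Var(self.value * other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g * other.value,
                             lambda g: g * self.value))

    def __add__(self, other):
        return Var(self.value + other.value,
                   parents=(self, other),
                   grad_fns=(lambda g: g, lambda g: g))

    def backward(self, g=1.0):
        # Accumulate the incoming gradient and propagate it to the parents.
        self.grad += g
        for parent, fn in zip(self.parents, self.grad_fns):
            parent.backward(fn(g))

x = Var(3.0)
y = x * x + x      # y = x^2 + x; the graph is built as the expression runs
y.backward()
print(x.grad)      # dy/dx = 2x + 1 = 7.0
```

ChainerX performs the same bookkeeping in C++ over real ndarrays, which is what removes the per-operation Python overhead.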

New Features

  • Implement double backward of SLSTM function (#4824, thanks @tohmae!)
  • Add F.roi_max_align_2d (#5198, thanks @knorth55!)
  • Add F.roi_average_pooling_2d (#5285, thanks @knorth55!)
  • Add F.roi_max_pooling_2d (#5304, thanks @knorth55!)
  • Support all float dtypes in F.negative_sampling (#5336)
  • Add D.Chisquare (#5338)
  • Add D.Gumbel (#5352)
  • Add D.Poisson (#5364)
  • Add D.OneHotCategorical (#5372)
  • Serialize BestValueTrigger (#5402, thanks @ktns!)
  • Add return_samples argument to F.negative_sampling and L.NegativeSampling (#5597)
  • Support all float dtypes in F.embed_id (#5624)
  • Support default dtype in L.BlackOut (#5638)
  • Support default dtype in L.BinaryHierarchicalSoftmax (#5648)
  • Support all float dtypes in F.connectionist_temporal_classification (#5680)
  • ChainerX (#5725)

Enhancements

  • Add type compatibility check in npz deserializer (#5483)
  • Use cupy.linalg.det in F.det (#5525)
  • Avoid unnecessary copy in ndarray.astype (#5547)
  • Avoid cuDNN handle around DropoutStates (#5563)
  • Simplify softmax with cuDNN (#5566)
  • Simplify pooling with cuDNN (#5567)
  • Add KL divergence test for D.OneHotCategorical (#5587)
  • Add compute_stream argument in ConcatWithAsyncTransfer to allow more overlap between computation and transfer in CUDA (#5606, thanks @anaruse!)
  • Use chainer.utils.size_of_shape in ChainerMN (#5610)
  • Import testing/backend.py definitions in testing/__init__.py (#5633)
  • Avoid using dtype char codes (#5646)
  • More consistent use of Variable.array in code under links (#5657, thanks @crcrpar!)
  • Use automatic broadcasting instead of F.repeat (#5662)
  • Refactor the state machine of iterators that iterate over indices (#5669, thanks @grafi-tt!)
  • Refactor train_mnist_dual_parallel.py (#5678)
  • Change Link.add_hook to return self (#5736, thanks @crcrpar!)

Bug Fixes

  • Fix reporter.Summary float value deserialization (#5482)
  • Fix text_classification example fails on Python 3 (#5591, thanks @koreyou!)
  • Improve iDeep version checking (#5600)
  • Fix D.OneHotCategorical (#5604)
  • Fix Python 3.7 test failures in F.roi_average_pooling_2d (#5611)
  • Fix F.negative_sampling output dtype in CPU mode (#5613)
  • Fix args check in F.roi_average_align_2d and F.roi_average_pooling_2d (#5627, thanks @knorth55!)
  • Fix L.BatchNormalization with lazy initialization fail on GPU (#5683, thanks @koreyou!)

Documentation

  • Simplify array type information fields in function documentation (#4887)
  • Update installation guide of numpy with openblas on macOS (#5021)
  • Add links to ChainerCV documentation (#5434)
  • Add ChainerMN paper to references (#5570)
  • Fix docstring of F.forget (#5586, thanks @fiarabbit!)
  • Fix typo in updaters (#5589, thanks @okayu9!)
  • Fix extensions guide error regarding method to implement (#5602, thanks @lehy!)
  • Update F.roi_average_align_2d doc to refer to the wrapper function (#5609, thanks @knorth55!)
  • Fix a typo in Chain example code (#5653)
  • Fix typo in F.max_pooling_nd docstring (#5654)
  • Fix a typo in chainer.distributions documentation (#5658)
  • Add documentation of ndarray (#5660)
  • Fix typo in L.ResNetLayers (#5665, thanks @takaaki82!)
  • Minor typo correction (in docs/variables). (#5670, thanks @grigorisg9gr!)
  • Fix typo in docstrings (#5676)
  • Fix docs for backprop_step (#5692)
  • Make docs in chainer.distributions refer to ndarray (#5717)
  • Fix image URL in README (#5720, thanks @levelfour!)
  • Add warning in ChainerX documentation (#5752)

Installation

  • Require setuptools and add docs for it (#5532)

Examples

  • Add WaveNet example (#4922, thanks @dhgrs!)
  • Rewrite the example of VAE using Chainer distributions (#5356, thanks @ganow!)

Tests

  • Fix test warnings in NumPy 1.15 (#5596)
  • Fix test of F.rrelu (#5618)
  • Fix regex of protobuf modules warned by Python 3.7 (#5642)
  • Ignore h5py warning in Python 3.7 (#5691)
  • Add gradient consistency checks in numerical_grad (#5698)

Other

  • Update style check tools to the versions compatible with pycodestyle 2.4 (#5643)
chainer - v6.0.0a1

Published by kmaehashi almost 6 years ago

This is the release note of v6.0.0a1. See here for the complete list of solved issues and merged PRs.

New Features

  • Add error handler interface to trainer extensions (#4630)
  • Add discriminative margin based clustering loss (#5313, thanks @dBeker!)
  • Support all float dtypes in F.det and F.inv (#5323)
  • Support all float dtypes in F.scatter_add and F.get_item (#5335)
  • Add probability distribution functions
    • D.Gamma (#5310)
    • D.Exponential (#5341)
    • D.Pareto (#5371)

Enhancements

  • Add maxtasksperchild parameter for MultiprocessIterator (#4972, thanks @jnishi!)
  • In-place update in F.batch_renormalization (#5014)
  • Introduce utils._fp16_mixed_precision_helper decorator (#5306)
  • Remove unnecessary version checking in ChainerMN (#5312)
  • Dynamically import matplotlib (#5320)
  • Use automatic broadcast and force_array (#5409)
  • Refactor gradient_check.check_backward (#5411)
  • Rename Adam.lr to Adam.alpha_t (#5420)
  • Grouped convolutions using matmul (#5459)
  • Validate shape of weight in F.convolution_2d (#5460)
  • Avoid Iterable in CaffeFunction (#5477)
  • Support negative axis for F.softmax (#5497)
  • Use arr.item() instead of numpy.asscalar(arr) to support NumPy 1.16 (#5510)
  • ChainerMN: Forward-port recent enhancements and bug-fixes (ChainerMN v1.3.1 release note) (#5535)
  • Make type_check.argname private (#5552)
  • Un-deprecate Link.add_param and Link.add_link (#5553)
  • ChainerMN: add an error message when mpi4py is missing (#5559)
  • Fix code for Python 3.7 (#5577)
  • Improve iDeep 2.0 support
    • Update packaging for iDeep 2.0 (#5029)
    • Update Adam for iDeep 2.0 (#5033, thanks @mingxiaoh!)
  • Code enhancements
    • Fix some E241 style errors (#5431)
    • Fix style of imports (#5433)
    • Simplify scalar handling in basic_math (#5428, #5439)
    • Dedup assertion in MpiCommunicatorBase.allreduce (#5473)
    • Remove debug print (#5430)
    • Implement no-double-backprop version of F.softmax_cross_entropy using FunctionNode (#5478, #5508)
    • Consistently use Variable.array instead of .data (#5417, #5495, thanks @crcrpar!)

Bug Fixes

  • For proper resuming, don't raise KeyError at UpdateRule deserialization (#5353, thanks @grafi-tt!)
  • Support 0-size shape in D.Beta (#5382)
  • Fix re-creation of retained output variable nodes in backward (#5424)
  • CaffeFunction ignores pad_w (#5463, thanks @koreyou!)
  • Fix train_imagenet_data_parallel.py example cannot be run (#5469, thanks @Lynkzhang!)
  • Fix backward of HuberLoss for ndim >= 3 (#5493)
  • Fix F.softmax and F.log_softmax with axis=-1 on gpu (#5496)
  • Fix D.Uniform.log_prob to avoid returning -inf at boundary (#5548)

Documentation

  • Merge ChainerMN docs from master branch (#5300)
  • Update ChainerMN documents (#5302)
  • Replace Variable.data with Variable.array in examples and functions (#5386, thanks @crcrpar!)
  • Improve code sample appearance in docs (#5388)
  • Fix typos in doc of chainer.report (#5410)
  • Fix a ReST escape (#5415)
  • Add document for D.Beta (#5419)
  • Fix docstring of discriminative loss (#5423)
  • Fix docstrings to follow OpenStack Style Guidelines (#5427)
  • Fix docstring of chainer.Sequential (#5438)
  • Add Google Colaboratory installation steps and link to community examples (#5446)
  • Use "documentation" instead of "document" in our documentation (#5450)
  • fix typo in static graph optimization (#5453, thanks @crcrpar!)
  • Add support for NumPy 1.15 in docs (#5500)
  • Fix dead fragments to CuPy docs (#5504)
  • Fix a typo in Extension.on_error (#5523)
  • Improve FunctionNode upgrade guide (#5527)
  • Chainer v5 requires CuPy v5 (#5531)
  • Add upgrade guide for get_device_from_array (#5558)
  • Add Python 3.7 support to installation docs (#5573)

Installation

  • Fix typo in setup.py (#5397, thanks @toshihikoyanase!)
  • Check optional dependencies at runtime (#5425)
  • Update base docker image (#5521)

Examples

  • Use SerialIterator in train_mnist_custom_loop.py (#5519)

Tests

  • Fix occasional test failure of l2normalize with float16 (#5380)
  • Add missing test in Variable test (#5385)
  • Travis test against v5 branch (#5394)
  • Ignore warnings from scipy<1.0 caused by a deprecated feature of numpy>=1.15 (#5471)
  • Relax tolerance of check_double_backward test (#5486)
  • Ignore protobuf warnings in Python 3.7 (#5514)
  • Fix slow tests with maxtasksperchild=1 or 10 (#5516)
  • Fix test for Python 3.7 (#5572)
chainer - v5.0.0

Published by beam2d almost 6 years ago

This is the release note of v5.0.0. See here for the complete list of solved issues and merged PRs.

This is the fifth major release of Chainer. This release note only covers the difference from v5.0.0rc1; for all highlights and changes, please refer to the blog post and release notes of the pre-releases:

See the Upgrade Guide if you are upgrading from previous versions.

Highlights

  • Chainer now supports Python 3.7.
  • iDeep 2.0 is now supported. Existing iDeep 1.x users must update iDeep using pip install -U ideep4py.
  • Link parameter and child link initialization via __init__(...), add_param, and add_link has been un-deprecated. These APIs are useful when one builds a link as a container of parameters and links, so we decided to keep them alongside init_scope.
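The container use case mentioned above can be sketched in plain Python. The Link and Chain classes below are toy stand-ins that show the registration pattern, not Chainer's actual classes:

```python
# Toy sketch of the "link as a container" pattern (illustrative only;
# these are NOT Chainer's Link/Chain classes).

class Link:
    def __init__(self):
        self.params = {}

    def add_param(self, name, value):
        # Register a parameter under a runtime-chosen name.
        self.params[name] = value

class Chain(Link):
    def __init__(self):
        super().__init__()
        self.links = {}

    def add_link(self, name, link):
        # Register a child link under a runtime-chosen name.
        self.links[name] = link

# Building per-layer links from data is a case where calling add_link in a
# loop is more natural than static attribute assignment inside init_scope.
container = Chain()
for i, width in enumerate([4, 8, 16]):
    layer = Link()
    layer.add_param('W', [[0.0] * width])
    container.add_link(f'layer{i}', layer)

print(sorted(container.links))   # ['layer0', 'layer1', 'layer2']
```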

New Features

  • Add discriminative margin based clustering loss (#5505, thanks @dBeker!)

Enhancements

  • Update Adam for iDeep 2.0 (#5407, thanks @mingxiaoh!)
  • Fix some E241 style errors (#5437)
  • Fix style of imports (#5458)
  • Validate shape of weight in F.convolution_2d (#5466)
  • Dedup assertion in MpiCommunicatorBase.allreduce (#5475)
  • Replace variable.data with variable.array in variable.py (#5488, thanks @crcrpar!)
  • Update packaging for iDeep 2.0 (#5513)
  • Consistently use Variable.array instead of .data (#5517)
  • Use arr.item() instead of np.asscalar(arr) to support NumPy 1.16 (#5529)
  • Support negative axis for F.softmax (#5543)
  • In-place update in F.batch_renormalization (#5546)
  • Grouped convolutions using matmul (#5549)
  • ChainerMN: Forward-port recent enhancements and bug-fixes (ChainerMN v1.3.1 release note) (#5554)
  • Make type_check.argname private (#5556)
  • ChainerMN: add an error message when mpi4py is missing (#5562)
  • Un-deprecate Link.add_param and Link.add_link (#5569)
  • Fix code for Python 3.7 (#5578)
  • Remove unnecessary version checking in ChainerMN (#5400)

Bug Fixes

  • Fix beta distribution (#5455)
  • [bugfix] CaffeFunction ignores pad_w (#5468, thanks @koreyou!)
  • Fix FunctionNode.retained_outputs (#5476)
  • Fix train_imagenet_data_parallel.py example cannot be run (#5499, thanks @Lynkzhang!)
  • Fix F.softmax and F.log_softmax with axis=-1 on gpu (#5502)
  • For proper resuming, don't raise KeyError at UpdateRule deserialization (#5506, thanks @grafi-tt!)
  • Fix backward of HuberLoss (#5520)

Documentation

  • Merge ChainerMN docs from master branch (#5399)
  • Add document for D.Beta (#5426)
  • Fix typos in doc of chainer.report (#5447)
  • Fix a ReST escape (#5449)
  • Fix typo in static graph optimization (#5456, thanks @crcrpar!)
  • Fix docstring of chainer.Sequential (#5461)
  • Add Google Colaboratory installation steps and link to community examples (#5464)
  • Fix docstrings to follow OpenStack Style Guidelines (#5465)
  • Fix dead fragments to CuPy docs (#5515)
  • Improve code sample appearance in docs (#5522)
  • Use "documentation" instead of "document" in our documentation (#5533)
  • Fix docstring of discriminative loss (#5537)
  • Chainer v5 requires CuPy v5 (#5540)
  • Improve FunctionNode upgrade guide (#5541)
  • Add support for NumPy 1.15 in docs (#5545)
  • Add upgrade guide for get_device_from_array (#5560)
  • Update ChainerMN documents (#5564)
  • Add Python 3.7 support to installation docs (#5574)

Installation

  • Fix typo in setup.py (#5398)
  • Update base docker image (#5571)

Tests

  • Travis test against v5 branch (#5395)
  • Add missing test in Variable test (#5406)
  • Fix occasional test failure of l2normalize with float16 (#5448)
  • Relax tolerance of check_double_backward test (#5490)
  • Ignore warnings from scipy<1.0 caused by a deprecated feature of numpy>=1.15 (#5491)
  • Ignore protobuf warnings in Python 3.7 (#5518)
  • Fix test for Python 3.7 (#5576)
chainer - v5.0.0rc1

Published by hvy about 6 years ago

This is the release note of v5.0.0rc1. See here for the complete list of solved issues and merged PRs.

Highlights

Static subgraph optimization

The static subgraph optimization feature has been introduced. It removes the CPU (Python) overhead of graph construction and graph traversal in backward.

By applying the @static_graph decorator to functions or methods (typically the forward method of a chain), you can let Chainer cache the computational graph collected at the first call and reuse it in subsequent calls. To use this feature safely, your define-by-run code must perform the same computations in every iteration.

Advanced graph optimizations/transformations are not implemented yet, so currently this feature only reduces the CPU overhead. We will consider adding more sophisticated graph-level optimizations to improve GPU utilization and further reduce the CPU overhead.

This feature is experimental. We may change the interface in future releases.
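The trace-and-replay idea behind this feature can be sketched in plain Python. Only the name static_graph comes from the release note; the mechanics below are a simplified illustration, not Chainer's implementation:

```python
# Simplified sketch of first-call tracing with cached replay
# (illustrative only; not Chainer's @static_graph implementation).

def static_graph(forward):
    schedule = []               # ops recorded on the first call

    def wrapper(x):
        if not schedule:
            # First call: run the define-by-run code, recording each op.
            return forward(x, schedule.append)
        # Subsequent calls: replay the cached schedule, skipping tracing.
        for op in schedule:
            x = op(x)
        return x
    return wrapper

def model(x, record):
    # A toy "forward" that registers each primitive op while running.
    for op in (lambda v: v * 2, lambda v: v + 1):
        record(op)
        x = op(x)
    return x

cached = static_graph(model)
print(cached(3))   # 7  (traced on the first call)
print(cached(5))   # 11 (replayed from the cached schedule)
```

The replay blindly repeats the first iteration's ops, which is why the decorated code must perform the same computations every iteration: data-dependent control flow would be frozen to whatever branch the first call took.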

ChainerMN integration

ChainerMN has been integrated into Chainer. The ChainerMN module (chainermn) is now available just by installing Chainer (note that MPI still needs to be installed separately). If you already have ChainerMN installed, please uninstall it (pip uninstall chainermn) before updating to this version of Chainer.

iDeep 2.0

iDeep 2.0 is now supported. It provides acceleration on Intel architectures for more functions than iDeep 1.x. Be aware that iDeep 1.x is incompatible with this version of Chainer; please update to iDeep 2.x if you already have iDeep 1.x installed.

NVIDIA DALI support

NVIDIA DALI is now supported.
DALI is a library for constructing data preprocessing pipelines.
The new DaliIterator converts a DALI pipeline into an iterator that can be used from any updater.
Currently, users need to write a custom converter function to use it with Trainer.
See the imagenet example and its dali_util.py for how to use it.

This feature is experimental. We may change the interface in future releases.

New Features

  • Add LinkHook (#4730)
  • Implement static subgraph optimizations (#4811)
  • Improve performance of sparse_matmul (#4831, thanks @anaruse!)
  • Allow weighted scalar reporting by providing a tuple to the reporter (#4844, thanks @hknerdgn!)
  • Add 1d/3d aliases for convolution/pooling functions and links (#4851)
  • Add params and xp to Distribution (#4925)
  • Support default dtype in F.spatial_transformer_grid (#5114)
  • Add Dirichlet distribution (#5115)
  • Include platform information to chainer.print_runtime_info() (#5163, thanks @himkt!)
  • Support all float dtypes in F.sigmoid_cross_entropy (#5211)
  • Add L.VGG19Layers (#5213, thanks @crcrpar!)
  • Integrate ChainerMN into Chainer repository (#5226)
  • Support all float dtypes in F.normalize (#5256)
  • Add contains_nan (#5270)
  • Support all float dtypes in F.roi_pooling_2d (#5281)
  • Support all float dtypes in F.gaussian (#5284)
  • Refine F.roi_average_align_2d interface (#5305, thanks @knorth55!)
  • Automatic broadcast: minimum, maximum, where (#5330)
  • Support NVIDIA DALI (#5387, thanks @anaruse!)

Enhancements

  • More checks in function_node (#3983)
  • Support None for the in_channels argument in ConvolutionND/DeconvolutionND (#4587)
  • Update for iDeep4py 2.0 (#4933, thanks @mingxiaoh!)
  • Refactor NaN check in chainer.grad in debug mode (#5228)
  • Improve type check messages for some functions (#5251)
  • Use log_softmax in D.Categorical (#5255)
  • Avoid mutating in_params of coo_matmul (#5258)
  • Use with cuda_device (#5269)
  • Avoid using pkg_resources to retrieve Chainer version (#5298)
  • Move get_array_module to backends from cuda (#5327)
  • Disallow summing over ellipsis in F.einsum (#5328)
  • Fix input variable of FunctionNode of where (#5340)
  • Move copyto to chainer.backend (#5344)
  • Fix input variable of FunctionNode of permutate (#5349)
  • Add binary_check option to D.Bernoulli (#5363)
  • Support negative axis for F.log_softmax (#5381)

Bug Fixes

  • Fix performance regression in #4772 (#5267)
  • Fix import order in _runtime_info.py (#5271)
  • Make Link.to_*pu return self (#5322)
  • Fix PrintHook fails if grad is None (#5333)
  • Fix a shape of W in L.ConvolutionND (#5370)
  • Make deserializers overwrite intel64.mdarray (#5373)
  • Fix serialize in link to support iDeep2 (#5374)

Documentation

  • Add DCGAN tutorial (#4544)
  • Add a comment that explains why the Classifier chain clears Variable attributes (#5069, thanks @grafi-tt!)
  • Fix FunctionHook documentation (#5188)
  • Add explanation about switching the three modes (#5283, thanks @fiarabbit!)
  • Amend chainer.Chain document (#5294, thanks @fiarabbit!)
  • Fix location of chainermn docs in sidebar (#5296)
  • Update Chainer at a Glance (#5316)
  • Fix invalid escape sequence warnings in Python 3.6 (#5317)
  • Add quotes to stylecheck install (#5318)
  • Add long_description for PyPI (#5345)
  • Fix typo (#5346)
  • Fix dead link (#5358)
  • Fix sphinx version to 1.7.9 (#5359)
  • Mention chainer.backend in upgrade guide (#5384)

Installation

  • Delete a commented out line (#5354)

Examples

  • Fix seq2seq example to support --resume option and improve docs (#4977)
  • Fix snapshot trigger in CIFAR example (#5325, thanks @akitotakeki!)

Tests

  • Avoid using deprecated is_linear argument in test (#5307, thanks @knorth55!)
  • Stabilize backward tests in TestTriangularInv (#5329)
  • Avoid printing in test_kldivergence (#5366)
  • Fix tolerance of matmul tests (#5369)
  • Minor fixes to TestKLDivergence (#5379)

Others

  • Update issue-template to encourage users to use chainer.print_runtime_info (#5272, thanks @himkt!)
chainer - v4.5.0

Published by mitmul about 6 years ago

This is the release note of v4.5.0. See here for the complete list of solved issues and merged PRs.

New Features

  • Include platform information to chainer.print_runtime_info() (#5268, thanks @himkt!)

Enhancements

  • Support None for the in_channels argument in ConvolutionND/DeconvolutionND (#5279)

Bug Fixes

  • Fix snapshot trigger in CIFAR example (#5331, thanks @akitotakeki!)
  • Fix PrintHook fails if grad is None (#5361)
  • Make Link.to_*pu return self (#5362)
  • Make deserializers overwrite intel64.mdarray (#5377)

Documentation

  • Amend chainer.Chain document (#5299)
  • Fix invalid escape sequence warnings in Python 3.6 (#5334)
  • Fix FunctionHook documentation (#5339)
  • Fix typo (#5348)

Examples

  • Fix seq2seq example to support --resume option and improve docs (#5275)
  • Fix snapshot trigger in CIFAR example (#5331, thanks @akitotakeki!)

Others

  • Fix Dockerfile in v4 branch to use CuPy minor versions (#5291)
chainer - v5.0.0b4

Published by niboshi about 6 years ago

This is the release note of v5.0.0b4. See here for the complete list of solved issues and merged PRs.

Highlights

Changes without compatibility

  • Change the initial avg_var of L.BatchNormalization to 1 (#4742)
  • Fix backward computation in F.forget (#5179). In this fix, the double backprop capability of F.forget is removed, since it did not work correctly in some cases.

New Features

  • Add new functions:
    • Add F.rrelu, Randomized Leaky ReLU (RReLU) activation function (#3059, thanks @raven38!)
    • Add F.erfcx, scaled complementary error function (#5195)
    • Add F.erfcinv, inverse complementary error function (#5202)
    • Add F.ndtr, normal cumulative distribution function (#5237)
    • Add F.log_ndtr (#5239)
    • Add F.ndtri, the inverse of ndtr (#5247)
    • Add F.roi_average_align_2d (#5070, thanks @wkentaro!, #5259)
    • Add F.cumprod (#5074)
  • Add new distributions:
    • D.MultivariateNormal (#4899)
    • D.Beta (#5088)
    • D.Categorical (#5028)
    • D.Uniform (#5123)
    • D.LogNormal (#5124)
  • Support default dtype in some links:
    • L.GoogLeNet (#5099)
    • L.ResNetLayers (#5101)
    • L.VGG16Layers (#5107)
  • Support all float dtypes in some functions:
    • F.absolute_error (#5145)
    • F.contrastive (#5152)
    • F.cross_covariance (#5158)
    • F.decov (#5174)
    • F.hinge (#5175)
    • F.huber_loss (#5176)
    • F.squared_error (#5212)
    • F.triplet (#5214)
    • F.batch_l2_norm_squared (#5235)
    • F.mean_squared_error (#5052)
  • Implement dataset using pickle (#4581)
  • Add WarmupShift and MultistepShift extensions (#4935, thanks @mingxiaoh!)
  • Improve initializer support in L.Maxout (#5068)
  • Support n_batch_axes in L.Linear (#5103)
  • Add raw kernel function (#5106)
  • Support axis argument for F.log_softmax (#5215)

Enhancements

  • Improve type check messages for some functions (#5189, #5200, #5224, #5248)
  • Detect stalled datasets in MultiprocessIterator (#4607)
  • New backward_accumulate (#4772)
  • Avoid numpy.ascontiguousarray when iDeep is used (#5063)
  • Use automatic broadcasting in distributions (#5086)
  • Use F.cumprod in backward of F.prod (#5094)
  • Return params and children as ordered by name in Link and Chain (#5119)
  • Use cuda.get_array_module in fused function (#5120)
  • Fix L.Convolution2D error message (#5138, thanks @fiarabbit!)
  • Remove cuda_fusion.py (#5144)
  • Implement eps_inside_sqrt option to RMSprop (#5150)
  • Normalize chainer.config.dtype in chainer.get_dtype() (#5167)
  • Use collections.abc to avoid DeprecationWarning in Python 3.7 (#5172)
  • Minor fixes to D.MultivariateNormal (#5173)
  • Avoid collections.Iterable (#5180)
  • Make imports in alphabetical order (#5181)
  • Avoid keyword arguments in FunctionHook callbacks (#5191)
  • Retain outputs in F.erfinv (#5199)
  • Use xp.einsum in F.bilinear (#5207)
  • Minor fixes to D.Beta (#5219)
  • Minor fixes to D.Uniform (#5225)
  • Check eps < CUDNN_BN_MIN_EPSILON in FixedBatchNormalization (#5232, thanks @cycentum!)
  • Use ndtr and log_ndtr in normal distribution (#5240)
  • Use erfcinv for Normal.icdf (#5242)
  • Use ndtri in normal distribution (#5254)
  • Use normcdfinv in F.ndtri (#5260)

Bug Fixes

  • Move backends.cuda.copyto to backends.copyto and make it work with iDeep (#5095)
  • Fix the condition for the switching of cuDNN in F.deconvolution_nd (#5129, thanks @fiarabbit!)
  • Fix test failure of TestResNetLayers (#5133)
  • Fix backward compatibility of Link.__call__ MRO (#5141)
  • Flush the output stream after PrintReport reports (#5146)
  • Support old numpy in F.split_axis (#5157)
  • Avoid cancellation in D.Normal (#5185)
  • Support 0-dim input in F.logsumexp (#5190, thanks @cadenacchi!)
  • Fix CPU code of indexing in F.softmax_cross_entropy (#5238)
  • Comment out extreme test of D.Categorical (#5261)

Documentation

  • Add iDeep to backend docs (#5121)
  • Improve iterator description in Chainer at a glance documentation (#5132, thanks @fiarabbit!)
  • Avoid use of ideep in doctest (#5148)
  • Fix grammar in PR template (#5178)
  • Fix docstring of F.erfinv (#5201)
  • Fix Sphinx issues in the reference of probability distributions (#5203)
  • Change iDeep in tips.rst to Chainer Backend for Intel Architecture (#5208, thanks @mingxiaoh!)
  • Fix toc level in iterator documentation (#5257)

Examples

  • Rename VAE hyperparameter C to beta in the example (#5135, thanks @Evanc123!)
  • Override Link.forward in MNIST model parallel example (#5159)

Tests

  • Simplify and stabilize softmax_cross_entropy test (#3409)
  • Ignore float warnings if testing extreme value (#5122)
  • Trivial fix for parameterized test case of F.contrastive (#5147)
  • Fix a class name in test_erfinv (#5165)
  • Fix doctests of open_pickle_dataset (#5182)
  • Fix test failure on Windows (#5186)
  • Add sphinx to doctest requirements (#5187)
  • Fix occasional test failure of l2normalize (#5210)
  • Fix occasional test failure of contrastive (#5218)
  • Ignore Theano warnings in Python 3.7 (#5223)
  • Ignore DeprecationWarnings at importing Theano (#5230)
  • Adjust tolerance of TestMatMul (#5236)
  • Add .pytest_cache/ to .gitignore (#5193)
chainer - v4.4.0

Published by kmaehashi about 6 years ago

This is the release note of v4.4.0. See here for the complete list of solved issues and merged PRs.

Enhancements

  • Fix L.Convolution2D error message (#5140, thanks @fiarabbit!)
  • Use collections.abc to avoid DeprecationWarning in Python 3.7 (#5177)
  • Avoid collections.Iterable (#5220)

Bug Fixes

  • Flush the output stream after PrintReport reports (#5149)
  • Fix backward compatibility of Link.__call__ MRO (#5151)
  • Support old numpy in F.split_axis (#5164)
  • Support 0-dim input in F.logsumexp (#5196, thanks @cadenacchi!)
  • Fix CPU code of indexing in F.softmax_cross_entropy (#5241)

Documentation

  • Add docs for Extension.name (#5110)
  • Add iDeep to backend docs (#5162)
  • Avoid use of ideep in doctest (#5168)
  • Fix cross-link and format of Chainer at a glance documentation (#5170)
  • Fix grammar in PR template (#5204)
  • Improve Iterator description in Chainer at a glance documentation (#5250, thanks @fiarabbit!)

Tests

  • Simplify and stabilize softmax_cross_entropy test (#5216)
  • Fix occasional test failure of l2normalize (#5222)
  • Ignore Theano warnings in Python 3.7 (#5227)
  • Ignore DeprecationWarnings at importing Theano (#5243)
  • Trivial fix for parameterized test case of F.contrastive (#5252)
  • Fix occasional test failure of contrastive (#5253)
  • Add .pytest_cache/ to .gitignore (#5194)
chainer - v4.3.1

Published by kmaehashi about 6 years ago

This is the release note of v4.3.1. See here for the complete list of solved issues and merged PRs.

This is a hot-fix release for v4.3.0 to address the backward incompatibility issue reported in #5078 (thanks @grafi-tt and @tkanmae for reporting this!). Users implementing the __call__ method of their own Link using a mix-in (multiple inheritance) may have been affected by this issue.

Bug Fixes

  • Fix backward compatibility of Link.__call__ MRO (#5154)
chainer - v4.3.0

Published by beam2d over 6 years ago

This is the release note of v4.3.0. See here for the complete list of solved issues and merged PRs.

Enhancements

  • Run gradient clipping on GPU, if possible (#5000, thanks @shinh!)
  • Fix for NumPy 1.15.0rc1 (#5008)
  • Avoid hasattr in L.BatchNormalization (#5066)
  • Fix dataset path to use os.path.join (#5102)
  • Avoid zero division in F.normalize (#5108)

Bug Fixes

  • Fix exception not raised when unsupported format is specified when dumping computational graphs (#5005)
  • Fix GetItem.backward for 0-dim boolean index (#5045)
  • Fix kernels not memorized (#5065)

Documentation

  • Improve CNN example docs (#4962)
  • Add ZippedImageDataset and MultiZippedImageDataset to documentation (#4963, thanks @d0i!)
  • Fix Adam alpha argument explanation (#4970)
  • Fix cross-reference links in StandardUpdater (#4993)
  • Update docs in F.upsampling_2d according to new F.max_pooling_2d (#4995)
  • Fix docs of L.NStepBiRNNTanh, L.NStepLSTMBase, L.NStepLSTM and L.NStepBiLSTM (#4996, thanks @mori97!)
  • Fix docstrings in computational_graph (#4998)
  • Add support for NumPy 1.14 in docs (#4999)
  • Fix verb error in chainer.functions.fft docstring (#5004, thanks @butsugiri!)
  • Fix typo in n_step_gru docs (#5007)
  • Fix typo in README of seq2seq (#5022, thanks @MannyKayy!)
  • Add notes about relationship between F.dilated_convolution_2d and F.convolution_2d (#5023)
  • Clarify how arguments are handled in L.Linear docs (#5024)
  • Fix typo in docs template (#5040)
  • Add rules regarding use of pytest module (#5041)
  • Fix attribute name collisions in docstring (#5047)
  • Fix dead link to numpy.dtype.kind in Tips (#5054)
  • Clarify distinction between chainer.dataset and chainer.datasets (#5072)
  • Improve Variable guide (#5073)
  • Add “Chainer at a Glance” documentation (#5080)
  • Update caffe.rst docs (#5092)

Examples

  • Fix invalid keyword arguments to L.Linear in ImageNet example (#4994)

Tests

  • Fix test_default_backward (#5003)
  • Remove unused parameter from TestBatchRenormalization (#5020)
chainer - v5.0.0b3

Published by niboshi over 6 years ago

This is the release note of v5.0.0b3. See here for the complete list of solved issues and merged PRs.

Highlights

  • New functions have been added: F.einsum, F.lgamma, F.digamma, F.polygamma
  • More built-in links now support the chainer.config.dtype configuration introduced in v5.0.0b2.

Changes without compatibility

Please refer to the Upgrade Guide for details.

  • Link.copyparams has been changed to copy persistent values in addition to parameters (#4997). You can use the newly introduced copy_persistent=False option to emulate the previous behavior.
  • FunctionNode classes exposed under the chainer.functions namespace have been removed (#4421). Please use the wrapper functions under chainer.functions instead of using the classes directly.
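The copyparams change can be sketched in plain Python. Only the copy_persistent argument name comes from the note above; the class below is a toy stand-in, not Chainer's Link:

```python
# Toy sketch of the new Link.copyparams semantics (illustrative only).

class Link:
    def __init__(self, params=None, persistent=None):
        self.params = dict(params or {})
        self.persistent = dict(persistent or {})

    def copyparams(self, src, copy_persistent=True):
        self.params.update(src.params)             # parameters: always copied
        if copy_persistent:                        # new default behavior
            self.persistent.update(src.persistent)

src = Link(params={'W': 1.0}, persistent={'avg_mean': 0.5})

dst = Link(params={'W': 0.0}, persistent={'avg_mean': 0.0})
dst.copyparams(src)                                # now copies 'avg_mean' too

old = Link(params={'W': 0.0}, persistent={'avg_mean': 0.0})
old.copyparams(src, copy_persistent=False)         # emulates the old behavior

print(dst.persistent['avg_mean'], old.persistent['avg_mean'])  # 0.5 0.0
```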

New Features

  • Add F.einsum (#4644)
  • Add logarithmic gamma and related functions: F.lgamma, F.digamma, and F.polygamma (#4720)
  • Improve performance of batch normalization (#4798, thanks @anaruse!)
  • Add StepShift extension (#4894, thanks @jinjiren!)
  • Add Laplace distribution (#4932)
  • Add LabeledZippedImageDataset (#4961, thanks @d0i!)
  • Add Bernoulli distribution (#5025)
  • Support default dtype: L.BatchNormalization and L.BatchRenormalization (#5034), L.Maxout (#5058), L.InceptionBN (#5062), L.StatefulMGU (#5084)
  • Support all float dtypes in F.mean_absolute_error (#5053)

Enhancements

  • Fix cuda.elementwise to improve performance (#3787)
  • Support cuDNN in F.dropout (#3369, thanks @bonprosoft!)
  • Hide FunctionNode classes from chainer.functions namespace (#4421)
  • Infer input size in batchnorm using aggregate axes (#4673, thanks @tkanmae!)
  • Avoid zero division in F.normalize (#4769)
  • Fix for NumPy 1.15.0rc1 (#4832)
  • Add a function to identify Fashion-MNIST labels (#4860)
  • Rename all __call__ methods in Links to forward (#4912)
  • Refactor distribution (#4923)
  • Cleanup F.batch_normalization (#4964)
  • Run gradient clipping on GPU, if possible (#4982, thanks @shinh!)
  • Add log_scale option to the Normal distribution (#4987)
  • Copy persistent values in Link.copyparams (#4997)
  • Remove obsolete code from batch (re)normalization (#5013)
  • Avoid hasattr in L.BatchNormalization (#5017)
  • Let F.depthwise_convolution_2d use F.convolution_2d internally (#5046)
  • Initialize gradient of uninitialized parameter with default dtype when initializer is callable (#5064)
  • Support 0-dim params in distributions (#5077)
  • Fix F.einsum to support NumPy 1.15rc1 (#5079)
  • Fix dataset path to use os.path.join (#5100)

Bug Fixes

  • Fix GetItem.backward for 0-dim boolean index (#4958)
  • Fix exception not raised when unsupported format is specified when dumping computational graph (#4971)
  • Fix iDeep call in MultiAdd (#5056)
  • Fix kernels not memorized (#5061)

Documentation

  • Add Chainer at a Glance documentation (#3127)
  • Add upgrade guide for auto_new_epoch (#4956)
  • Fix cross-reference links in StandardUpdater (#4968)
  • Add docs for Extension.name (#4980)
  • Fix docs of chainer.config.dtype (#4981)
  • Clarify how arguments are handled in L.Linear docs (#4983)
  • Fix docstrings in computational_graph (#4984)
  • Add support for NumPy 1.14 in docs (#4990)
  • Fix docs of L.NStepBiRNNTanh, L.NStepLSTMBase, L.NStepLSTM and L.NStepBiLSTM (#4991, thanks @mori97!)
  • Update docs in F.upsampling_2d according to new F.max_pooling_2d (#4992)
  • Fix verb error in chainer.functions.fft docstring (#5002, thanks @butsugiri!)
  • Fix typo in n_step_gru docs (#5006)
  • Add notes about relationship between F.dilated_convolution_2d and F.convolution_2d (#5010)
  • Fix broken notations in F.linear docs (#5011)
  • Add rules regarding use of pytest module (#5012)
  • Fix typo in README of seq2seq (#5018, thanks @MannyKayy!)
  • Improve Variable guide (#5030)
  • Fix typo in docs template (#5035)
  • Fix attribute name collisions in docstring (#5037)
  • Fix cross-link and format of Chainer at a glance documentation (#5044)
  • Fix dead link to numpy.dtype.kind in Tips (#5051)
  • Clarify distinction between chainer.dataset and chainer.datasets (#5057)
  • Fix broken docs in PolynomialShift (#5089)
  • Update caffe.rst docs (#5090)
  • Add upgrade guide for Link.copyparams changes (#5093)
  • Fix typo in the docstring of ChainList (#5098)

Examples

  • Fix invalid keyword arguments to L.Linear in ImageNet example (#4975)

Tests

  • Update style check tools (#4864)
  • Eliminate no_grads and squares in double backward tests (#4978)
  • Fix test_default_backward (#5001)
  • Remove unused parameter from TestBatchRenormalization (#5016)
  • Remove test_get_dummy_device_for_empty_array (#5071)