chainer

A flexible framework of neural networks for deep learning

MIT License

chainer - v3.0.0b1

Published by mitmul about 7 years ago

This is the v3 beta release. See here for the complete list of solved issues and merged PRs.

CuPy has also been updated to v2.0.0b1. Please see the release notes for CuPy. In particular, the updates to the memory allocator may be relevant to many existing users of Chainer.

Changes without compatibility

The new-style differentiable function (#2970)

This change provides the core API for writing functions that support:

  • Differentiable backprop (a.k.a. gradient of gradients, higher order differentiation)
  • Economical backprop (i.e., backward can skip computation of unnecessary input gradients)

You can write your own function node by implementing a subclass of FunctionNode. The following is a simple example of writing an elementwise multiplication function (which is already provided by this beta version):

class ElementwiseMul(chainer.FunctionNode):
    def check_type_forward(self, in_types): ...

    def forward(self, inputs):
        lhs, rhs = inputs
        self.retain_inputs((0, 1))  # New-style function does not retain inputs by default!!!
        return lhs * rhs,

    def backward(self, indexes, grad_outputs):
        grad_out, = grad_outputs
        lhs, rhs = self.get_retained_inputs()
        return rhs * grad_out, lhs * grad_out

There are three main differences from the conventional definition using Function.

  1. The indexes argument (full name: target_input_indexes) is added. It indicates the set of inputs for which gradients are required. There are two ways to return gradients from backward: gradients for all inputs, or gradients only for the inputs selected by indexes. In the latter case, you can skip computing the gradients for inputs not listed in indexes.
  2. The backward method implements its computation on top of Variable instead of ndarray so that the resulting gradients can be further backpropagated. grad_outputs is a tuple of Variables, and the new get_retained_inputs() and get_retained_outputs() methods return tuples of Variables corresponding to the retained inputs/outputs. Note that inputs are not retained by default (which also differs from Function).
  3. The forward computation is invoked by the apply() method instead of the __call__() operator.
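
As a minimal usage sketch (reusing the ElementwiseMul class above), a new-style function node is invoked through apply(), which takes a tuple of inputs and returns a tuple of output Variables:

import numpy as np
import chainer

x = chainer.Variable(np.array([1.0, 2.0], dtype=np.float32))
y = chainer.Variable(np.array([3.0, 4.0], dtype=np.float32))
z, = ElementwiseMul().apply((x, y))  # apply() returns a tuple of output Variables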

There is also a variant of the backward() method named backward_accumulate(), which fuses the computation of input gradients with their accumulation into existing gradients. It enables us to improve performance in some cases.
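
The following is a rough sketch of the idea behind backward_accumulate (an illustration, not the exact implementation):

# Inside a FunctionNode subclass:
def backward_accumulate(self, target_input_indexes, grad_outputs, grad_inputs):
    gxs = self.backward(target_input_indexes, grad_outputs)
    # fuse gradient computation with accumulation into existing gradients
    return tuple(gx if g_prev is None else gx + g_prev
                 for gx, g_prev in zip(gxs, grad_inputs))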

This change also includes the following updates.

  • A new class FunctionAdapter provides an implementation of FunctionNode interface on top of Function interface. It can be used to convert Function into new-style function nodes. Note that it does not mean the converted function supports differentiable backprop; it is required to rewrite the implementation with FunctionNode directly to support it.
  • Function.__call__ is updated so that users do not need to update their implementation of custom Function definitions; it automatically creates a FunctionAdapter object, lets the adapter wrap the Function object itself, and inserts the adapter object (which implements FunctionNode) into the computational graph.
  • Currently, only elementwise addition and multiplication (+ and *) and F.identity (which exists just for testing purposes) support differentiable (and economical) backprop. We are planning to widen the set of functions with differentiable backprop support in the upcoming releases.
  • Note that this change breaks the object structure of the computational graph; now FunctionNode objects act as function nodes in the computational graph, and Function is just an object referenced by a FunctionAdapter object (which implements FunctionNode).

New features

  • When using Trainer, any exception raised during training is now shown immediately, before entering the finalization procedures. This helps users find the cause of the error without waiting for the finalization, which sometimes hangs (especially when using multiprocessing) (#2216)
  • Support a mask pattern shared among examples within each batch in F.simplified_dropconnect (#2534, thanks @fukatani!)
  • Enable strict option in load_npz to skip non-existing entries (#2599 #2601)
  • L.Classifier is extended so that users can feed multiple input features. The argument that should be treated as the ground truth labels is specified by the label_key option, and keyword arguments are also supported; see the sketch after this list. (#2834)
  • Add the print_report() and summary() methods to TimerHook (#2927)
  • Support all float types in F.upsampling_2d (#2978)
  • Add chainer.testing.fix_random decorator to make tests deterministic (#2985)
  • Add F.selu activation function (#2989)
  • Add iter_per_epoch option to get_trainer_with_mock_updater (#2913, thanks @Hakuyume!)
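
A minimal sketch of the extended L.Classifier (MyModel, x1, x2, and t are hypothetical):

import chainer.links as L

model = L.Classifier(MyModel(), label_key='t')  # 't' is treated as the ground truth labels
loss = model(x1, x2, t=t)                       # the remaining arguments feed the predictor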

Improvements

  • Automatically import submodules in chainer.training (#3032)
  • Remove redundant type checking in backward implementations and improve the performance of type equality checking (#2891)
  • Remove code for old cuDNN (v2 and v3) support (#2920)
  • Use np.einsum in forward_cpu of negative sampling (#2931)
  • Direct initialization of Variable on device (#2983)
  • Select the best-resolution timer function (#2991)
  • Improve numerical grad performance on GPU (#3018)
  • Make TimerHook reentrant (#3019)
  • Fix the way arrays are copied, in line with cupy/cupy#159 (#3047 #3054)

Bug fixes

  • Fix incorrect firing of interval_trigger on resuming the training procedure (#2244 #2484, thanks @Hakuyume!)
  • Support correct serialization of ManualScheduleTrigger (#2988, thanks @Hakuyume!)
  • Use np.zeros as the initialization of arrays to return in the CPU-mode F.roi_pooling_2d (#2872, thanks @yuyu2172!)
  • Add mock as an installation dependency (#2973 #2992)

Examples

  • Deep reinforcement learning examples (#1991)
    • It includes example code for DQN, DoubleDQN, and DDPG using OpenAI Gym environments.

Document

  • Update “Comparison with Other Frameworks” (#2717, thanks @jekbradbury!)
  • Add documentation for
    • Updater (#3012, thanks @fiarabbit!)
    • MultiprocessParallelUpdater (#3038)
  • Improve the documentation for
    • F.softmax (#2362)
    • F.tile (#2825)
    • F.upsampling_2d (#2977)
    • StandardUpdater and ParallelUpdater (#2993)
    • Link.disable_update (#3061, thanks @Hakuyume!)
    • Link.__call__ (#3007)
  • Add links to the Slack archive to README (#2998)
  • Fix GitHub link (#2984 #2999)
  • Fix typo in the Upgrade Guide for volatile mode changes (#3005, thanks @evdcush!)
  • Fix wording in the contribution guide (#3067)
  • Fix the file permission of conf.py (#3064)

Test

  • Reduce the test time (e.g. by removing redundant test cases or parameterization) (#2948)
  • Add tests for deep-copying Link (#2974)
  • Use get_trainer_with_mock_updater in tests of ManualScheduleTrigger (#2987, thanks @Hakuyume!)
  • Remove debug print (#2997)
  • Remove cython coding style check to speedup tests (#3008)

Others

  • Update pfnet/chainer -> chainer/chainer (#3049, thanks @Hakuyume!)
  • Add generated documents and tags to .gitignore (#3050)
chainer - v2.0.2

Published by delta2323 about 7 years ago

This revision release contains bug fixes and improvements to the documentation and installation procedure. See here for the complete list of solved issues and merged PRs.

Document

  • Fix typos in L.NStepGRU, L.NStepBiGRU, L.NStepLSTM, L.NStepBiLSTM, L.NStepRNNTanh, L.NStepRNNReLU, L.NStepBiRNNTanh, and L.NStepBiRNNReLU (#2964, thanks @tomohideshibata!)
  • Add F.squared_error (#2963), chainer.Parameter (#2965) and chainer.VariableNode (#2965) to the document.
  • Fix the documents of Variable.__init__ (#2965), F.Upsampling2D (#2986), chainer.training.StandardUpdater (#3009), chainer.training.ParallelUpdater (#3009), chainer.Link.__call__ (#3025, thanks @Hakuyume!), chainer.training.MultiprocessParallelUpdater (#3039)
  • Add links to the archive of slack channels to README (#3011)
  • Fix a typo in the upgrade guide to v2 (#3024, thanks @evdcush!)
  • Improve the document of F.tile (#3034, thanks @keisuke-umezawa!), chainer.training.Updater (#3058, thanks @fiarabbit!).
  • Fix the missing link to GitHub (#3045)
  • Replace the old repository URL (pfnet/chainer) with new one (chainer/chainer) (#3051, thanks @Hakuyume!)

Install

  • Require mock package in installation (#2986)

Bug

  • Initialize some data with a zero array instead of an empty one in the CPU mode of F.roi_pooling_2d (#3003, thanks @yuyu2172!)

Example

  • Add examples featuring deep reinforcement learning (#2996)

Enhancement

  • Avoid unneeded copy to GPU in initialization of chainer.Parameter (#3014)
  • Improve the performance of gradient_check.numerical_grad on GPU (#3021)
  • Use einsum in forward_cpu of F.negative_sampling (#3026)

Other

  • Several test improvements (#3000, #3002, #3023)
  • Add generated references and tags to .gitignore (#3052)
  • Fix the permission of a configuration file (#3065)
chainer - v2.0.1

Published by delta2323 over 7 years ago

This revision release contains bug fixes and improvements to the documentation and installation procedure. See here for the complete list of solved issues and merged PRs.

Enhancements

  • Stop using INFINITY in MaxPoolingND (#2917)
  • Stop using get_device, which is deprecated (#2924)
  • Use init_scope instead of deprecated methods to register links and parameters (#2947)
  • Use cleargrads instead of zerograds (#2956)

Bug fixes

  • Fix F.pad_sequence error on 64bit Windows GPU (#2867, thanks @ronekko)
  • Fix trainer mock to call update_core() (#2878)
  • Fix resuming issue of *Shift extensions (#2879, thanks @Hakuyume)
  • Make vision models copyable (#2885)
  • Restore changes unexpectedly overwritten in get_trainer_with_mock_updater (#2887, thanks @Hakuyume)
  • Change the type of several hidden variables in Link and Chain (#2901)
  • Fix Variable repr and str failure when data is None (#2902)
  • Use sorted list of link parameters in gather and scatter functions of MultiprocessParallelUpdater (#2914).
  • Fix a bug dependent on glibc version (#2959, thanks @ken-nakanishi)
  • Fix TrainerTest where elapsed time had been zero with imprecise clock (#2878)

Documentation

  • Fix a typo (#2844, thanks @levelfour!)
  • Add F.pad_sequence to the reference (#2884, thanks @YamaneSasuke!)
  • Other document improvements (#2848, #2850, #2868, #2883, #2888, #2889, #2899, #2915, #2916, #2951)

Examples

  • Add the n_layers argument of ResNetLayers in the ResNet example (#2882)
  • Use Evaluator instead of TestModeEvaluator in the data-parallel example (#2886)

Others

  • Installation improvement: (#2922)
  • Test improvements (#2935, #2953)
chainer - v3.0.0a1

Published by mitmul over 7 years ago

This is the alpha release of v3. See here for the complete list of solved issues and merged PRs.

Important Updates

  • The specification of F.matmul is changed so that the behavior is compatible with numpy.matmul. This change affects existing code that uses F.matmul.
  • TreeLSTMs from this paper, including Child-Sum LSTMs and N-ary LSTMs, are added.

New Features and Changed APIs

  • NumPy-like matmul: the specification of F.matmul is changed so that it is compatible with numpy.matmul (#2426, thanks @fukatani!); see the sketch after this list.
  • F.layer_normalization is added and it improves the performance of L.LayerNormalization (#2857)
  • Add initializers.LeCunNormal (#2764)
  • Add links implementing TreeLSTMs (#2606)
  • Improve asynchronous host to device copy (#2238). Note that the stream option of cuda.to_gpu is now deprecated.
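
A minimal sketch of the new numpy.matmul-compatible behavior (the shapes are illustrative):

import numpy as np
import chainer.functions as F

a = np.random.rand(8, 3, 4).astype(np.float32)
b = np.random.rand(8, 4, 5).astype(np.float32)
y = F.matmul(a, b)  # batched matrix product over the leading dimension; y.shape == (8, 3, 5)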
     

Enhancement

  • Use init_scope in built-in links (#2934, #2949)
  • Avoid itervalues for performance (#2895)
  • Improve _check_grad_type (#2894)
  • Matmul cleanup (#2892, thanks @fukatani!)
  • Remove get_device from the internal code (#2890)
  • Remove INFINITY (#2877)
  • Optimized F.dropout forward (#2873)
  • Option to skip backprop computation of convolution (#2358)
  • Improve performance of negative sampling forward on GPU (#2829)

Bug fixes

  • Fix stream option not working with to_gpu (#2907, thanks @kmaehashi!)
  • Fix hidden variable in Link and Chain (#2897)
  • Fix gather and scatter (#2896)
  • Fix F.pad_sequence error on 64bit Windows GPU (#2866, thanks @ronekko!)
  • Check for None in HyperparameterProxy descriptor (#2852)
  • Fix resuming issue of *Shift extensions (#2845, thanks @Hakuyume!)
  • Fix get_trainer_with_mock_updater (#2824, thanks @Hakuyume!)
  • Fix a minor bug of resnet.py (#2821)
  • Pass raw arrays to the loss function in MultiprocessParallelUpdater (#2811, #2817)
  • Make vision models copyable (#2810)
  • Fix Variable repr and str failure when data is None (#2787, #2806)

Installation

  • Remove an install helper file (#2921)

Documentation

  • Add additional special methods in class template (#2846)
  • Add contribution guidelines under .github (#2750, #2774)
  • Add references to the document
     - F.pad_sequence (#2863, thanks @YamaneSasuke!)
     - Hyperparameter and UpdateRule (#2826)
     - squared_error (#2940)
     - ParameterStatistics extension (#2799, thanks @hvy!)
     - BestValueTrigger (#2819)
  • Improve the document
     - flatten, reshape (#2678)
     - L.BatchNormalization (#2900, #2926)
     - Link (#2814, thanks @chantera!)
     - Variable (#2827)
     - F.squared_error and F.mean_squared_error (#2941)
  • Fix typos (#2779, thanks @Paosder!, #2830, thanks @tomohideshibata!, #2837, thanks @levelfour!, #2841)
  • Minor improvements to the README and the document (#2803, #2805, #2820, #2804, #2835, thanks @Hakuyume!, #2840, #2861, #2893)

Examples

  • Update target URL for PTB dataset (#2856, thanks @himkt!)
  • Use Evaluator instead of TestModeEvaluator in the data-parallel example (#2816)

Tests

  • Remove get_device in tests (#2950)
  • Replace assertRegexp with assertRegex (#2946)
  • Use cleargrads instead of zerograds (#2945)
  • Replace DebugMode with using_debug (#2933)
  • Fix TrainerTest where elapsed time had been zero with imprecise clock (#2853, #2855)
chainer - v2.0.0

Published by beam2d over 7 years ago

This is the second major version. See the list for the complete list of solved issues and merged PRs (the list only shows the difference from v2.0.0b1; see the Release Note section below for the difference from v1.24.0).

Announcements

  • CuPy has been separated from Chainer into an independent package: CuPy.
    • It means you need to install CuPy if you want to enable GPU for Chainer.
    • Following this installation guide is recommended to enable GPU.
  • Related to the CuPy separation, we cut the support of some old versions of CUDA and cuDNN. The following versions will be supported in Chainer v2.0.0 and CuPy 1.0.0.
    • CUDA 7.0 or later
    • cuDNN 4.0 or later
  • The repository of Chainer has moved from pfnet/chainer to chainer/chainer. The old URL still works with git; any operations are redirected to the new repository.
  • For Chainer v1.x users: see the Upgrade Guide for the changes required to update your code for v2.
  • For contributors:
    • We strongly recommend you to read the Contribution Guide again, which contains many updates.
    • As is explained in the Contribution Guide, we have changed the development and release cycle.
      The main development will continue on the master branch, which will correspond to the next pre-releases of v3 (including alpha, beta, and RC). Maintenance of v2 will be done on the v2 branch.
    • If you want to send a pull request, please send it to the master branch unless you have a special reason.

Release Notes

It should be noted that these release notes contain only the differences from v2.0.0b1. See the release notes of v2.0.0a1 and v2.0.0b1 to confirm the full set of changes from v1.

New Features and Changed APIs

  • Add L.StatelessGRU and change the implementation of L.GRU (#2769)
  • Make input size/channels optional (#2159, #2045); see the sketch after this list.
  • Aggressive Buffer Release (#2368, #2586 (thanks @anaruse!))
  • Related to the buffer release, the following functions release inputs:
    • transpose_sequence (#2631)
    • select_item (#2630)
    • get_item (#2629)
    • array method (#2628)
    • copy (#2627)
    • flip functions (#2624)
    • cast (#2623)
    • broadcast (#2622)
    • noise functions (#2661)
    • pooling functions (#2660)
    • broadcast (revisited) (#2703)
    • stack methods (#2625)
    • math functions (#2659)
    • depth2space (#2626)
  • chainer.config.cudnn_deterministic: cuDNN Deterministic mode (#2574, #2710)
  • Remove wscale option from L.MLPConvolution2D (#2690)
  • Add new APIs of parameter/link registration to Link/Chain (#1970, #2657)
  • Purge the graph when reporting a variable (#2054, #2640)
  • Add Extension.initialize and remove invoke_before_training (#2639, #2611)
  • Make None serializable (#2638)
  • Raise an error when an obsolete argument is given (#2556)
  • Use cleargrads instead of zerograds by default (#2521, #2549)
  • Fix the inconsistent naming convention between LSTM and GRU (#2285, #2510, #2537)
  • Add requires_grad property to Variable (#2493)
  • Support numpy like repr function of Variable (#2455, thanks @fukatani!)
  • Clean APIs of L.Linear and convolution-like links related to the bias argument (#2180, #2185)
  • Remove deprecated methods of Optimizer (#2509, #2404)
  • Make bias vector enabled by default in L.ConvolutionND and L.DeconvolutionND (#2018)
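
A minimal sketch of the optional input size (the layer sizes are illustrative; the input-side size is inferred at the first forward pass):

import chainer.links as L

layer = L.Linear(100)                      # equivalent to L.Linear(None, 100)
conv = L.Convolution2D(None, 64, ksize=3)  # in_channels inferred from the first input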

Enhancement

  • Remove unnecessary imports from functions and links (#2755)
  • Check old arguments which are not supported in v2 to show an error message. (#2641)
  • Raise an error when the volatile flag is given (#2718)

Bug fixes

  • Fix a bug of Hyperparameter on deep copy (or, strictly speaking, on unpickling) in Py3.6 (#2761)
  • Fix Copy.backward to check input device (#2668)
  • Fix AlexFp16 example (#2637)
  • Fix VariableNode to add creator setter (#2770)
  • Fix for the environment without cuDNN (#2790)
  • Check h5py version when serializing None (#2789, #2791)
  • Fix the initial weight of EmbedID (#2694, thanks @odanado!)
  • Fix DebugPrint extension to support removed inputs (#2667)

The following PR was merged into v1.24.0, but we mistakenly failed to add it to the previous release note, so we list it here. We appreciate @himkt for the contribution!

  • Fix a bug in chainer.datasets.split_dataset_random (#2613, thanks @himkt!)

Documentation

  • Fix the location of get_device_from_id and get_device_from_array (#2759)
  • Remove unnecessary sentence from L.Convolution2D (#2757)
  • Improve doc of F.softmax (#2751, thanks @tma15!)
  • Write the Upgrade Guide (#2741)
  • Fix documentation errors (#2760)
  • Update the Installation Guide for v2.0.0 (#2729)
  • Renew the readme (#2692)
  • Remove an obsolete document in L.DilatedConvolution2D (#2689)
  • Remove use_cleargrads from tutorial (#2645)
  • Fix a mistake in grammar (#2571, thanks @soramichi!)
  • Update API Compatibility Policy (#2778)
  • Remove the license for CuPy (#2786)
  • Update the Contribution Guide (#2773)
  • Update the tutorial (#2762)
  • Fix several typos in tutorial (#2737, thanks @PeterTeng)

Examples

  • Add ResNet50 example (#2644, #2655, #2656)

Tests

  • Remove unused parameter/serialization in test_manual_schedule_trigger.py (#2568, thanks @Hakuyume!)
chainer - v1.24.0

Published by mitmul over 7 years ago

This is a minor release. See the list for the complete list of solved issues and merged PRs.

Announcements

  • This is the final regular release of Chainer v1.x. No further changes will be made to Chainer v1 except for critical bug fixes.
  • We will soon merge the current _v2 branch into master. It is predicted that many PRs targeted to the current master will be made obsolete (i.e., they will conflict with the v2 source tree).
  • We have decided to postpone the release of v2.0.0 to May 30. We will work hard to finish the planned changes and documentation work, so please wait for the release date!
  • We have to apologize that we could not carry out, for v2, the compatibility-breaking steps that we declared in our compatibility policy. In particular, many APIs that will be partially changed in v2 do not emit any warnings in v1.24.0.
    • Instead, we are preparing an upgrade guide that lists which parts of existing user code should be updated to be compatible with v2.0.0. We believe that this upgrade guide will help all users properly update their code.

New features

Summary

  • MultiprocessParallelUpdater is added. It is an updater for Trainer that accumulates the gradients computed by multiple processes using multiprocessing and NCCL.
  • The reduce option is added to loss functions. By passing reduce='no', we can make a loss function return per-example loss values instead of aggregating them over the mini-batch.
  • Many differentiable functions and links are added. In particular, depthwise convolution and spatial transformer networks are supported.
  • QR and SVD decompositions are added to CuPy.
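
For example, a minimal sketch of the reduce option (y and t are hypothetical prediction/label arrays):

import chainer.functions as F

loss = F.softmax_cross_entropy(y, t, reduce='no')  # shape (batchsize,) instead of a scalar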

Chainer

  • Differentiable advanced indexing (indexing by integer arrays and boolean arrays) for Variable (#2203, thanks @yuyu2172!)
    • NOTE: This feature was actually included in the previous version. We apologize that this big feature was missed in the previous release note.
  • Add MultiprocessParallelUpdater: a new version of parallel updater using multiprocessing and NCCL (#2213, #2724, thanks @jekbradbury (#1924) and @anaruse (#1895)!)
  • ParameterStatistics extension that accumulates various statistics of parameter arrays (#2166, thanks @hvy!)
  • Add the reduce option to the following loss functions. You can use these loss functions without taking summation/average over the mini-batch by passing reduce='no'.
    • F.softmax_cross_entropy (#2325, #2357, thanks @Hakuyume!)
    • F.gaussian_kl_divergence (#2519)
    • F.bernoulli_nll (#2525)
    • F.gaussian_nll (#2526)
    • F.crf1d (#2559)
    • F.huber_loss (#2560)
    • F.hinge_loss (#2577)
    • F.black_out (#2600)
    • F.contrastive (#2603)
    • F.connectionist_temporal_classification (#2658)
    • F.triplet (#2681)
    • F.cross_covariance (#2697)
    • F.decov (#2698)
    • F.negative_sampling and L.NegativeSampling (#2704)
  • One dimensional integer array indexing (fancy indexing) support for DatasetMixin (#2427)
  • Add keepdims option to F.average and F.mean (#2508)
  • Add TransformDataset: a dataset wrapper that transforms each data point by an arbitrary callable (#2513); see the sketch after this list.
  • Support array inputs in F.gaussian_kl_divergence, F.bernoulli_nll, and F.gaussian_nll (#2520)
  • New Functions and Links
    • F.simplified_dropconnect and L.SimplifiedDropconnect: simplified version of DropConnect (#1754, thanks @fukatani!)
    • F.depthwise_convolution_2d and L.DepthwiseConvolution2D: depthwise convolution layer used in separable convolution (#2067, thanks @fukatani!)
    • F.spatial_transformer_sampler: 2d image differentiable sampler from “Spatial Transformer Networks” (#2272, thanks @yuyu2172!)
    • F.spatial_transformer_grid: function to generate sampling points of STN (#2458, thanks @yuyu2172!)
    • L.GoogLeNet: pretrained GoogLeNet (#2424, thanks @ronekko!)
    • F.im2col: differentiable version of im2col (#2466, thanks @yuyu2172!)
    • cuDNN-accelerated N-step RNNs and bidirectional RNNs (thanks @aonotas!)
      • F.n_step_rnn, F.n_step_birnn (#2467)
      • F.n_step_bilstm, L.NStepBiLSTM (#2469)
      • F.n_step_gru, F.n_step_bigru, L.NStepGRU, L.NStepBiGRU (#2470)
    • F.squared_error and F.absolute_error: elementwise squared/absolute error (#2566, thanks @Hakuyume!)
    • F.softmax supports the axis option (#2536, #2538, thanks @sergeant-wizard!)
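
A minimal sketch of TransformDataset (the base dataset and transform are illustrative):

from chainer.datasets import TransformDataset

base = [1, 2, 3]
dataset = TransformDataset(base, lambda x: x * 2)  # the callable is applied per example
dataset[0]  # -> 2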

CuPy

  • Some linalg methods are supported (QR decomposition: #2412, singular value decomposition: #2481)
  • cupy.sum supports keepdims argument (#2507)

Bug fixes

  • Redundant dropout just after input layer in F.NStepLSTM is removed (#2504)
  • Some functions now work correctly with non-contiguous arrays
    • Pooling functions: #2512, #2564
    • F.batch_normalization (#2582 thanks @soramichi!)
    • deconvolution functions (#2666 thanks @soramichi!)
    • F.spatial_transformer_sampler (#2676, thanks @yuyu2172!)
  • Fixed cupy.fuse behavior for *args (#2594 thanks @jekbradbury!, #2598)
  • Fixed resuming behavior of extensions (ExponentialShift: #2686 thanks @Hakuyume!, LinearShift #2721)
  • Fixed ResNet101Layers to load pretrained model (#2608, #2609 thanks @yuyu2172!)
  • Variable.transpose can be called without argument (#2614 thanks @ronekko!, #2635)
  • Added support for broadcasting in SoftmaxCrossEntropy on numpy==1.9 (#2719)
  • Fixed reverse indexing for empty dimension (#2696)
  • Softmax cross entropy now works correctly when ignore_label is not -1 (#2715 thanks @musyoku!, #2716)
  • Treat numpy scalars correctly in cupy.ndarray.fill (#2723)
  • Fixed duplicated test case name (#2605)
  • Remove debug print (#2610)
  • Fixed convnet tutorial (#2615)

Improvements

  • Support arrays over 2GB (#2530, thanks @kmaehashi!)
  • Check output size of pooling function (#2589, thanks @soramichi!)
  • Stop importing theano automatically (#2570 thanks @mfxss and @jekbradbury, #2619)
  • split_axis function works when its result has zero-dimensional arrays (#2524)
  • Improved DatasetMixin performance (#2427)
  • Check maximum supported version of cuDNN (#2479, #2480)
  • Refactored CIFAR dataset (#1516)
  • Refactor F.DilatedConvolution2DFunction (#2665 thanks @soramichi!)
  • Refactor chainer.Link (#2711, #2712 thanks @ysekky!)

Documents

  • Modify the nccl wrapper for --cupy-no-cuda (#2724)
  • Add observe_value and observe_lr to extension.rst (#2713)
  • Improve docs
    • expand_dims (#2677)
    • depth2space, space2depth (#2675)
    • copy (#2674)
    • deconvolution nd (#2672)
    • dstack (#2664)
    • accuracy (#2633)
    • deconvolution_2d (#2616)
    • stack, vstack, hstack (#2491)
    • convolution_2d (#2490)
    • lstm, slstm (#2460)
    • log_softmax (#2411)
    • convolution_nd (#2590)
    • Hinge loss (#2573)
  • Modify docstring in connection (#2642, thanks @ysekky!)
  • Fix some mistakes in ConvNet tutorial (#2615)
  • Add reduce option to F.black_out (#2600)
  • Move Information to top (#2591)
  • Add index page for examples (#2587)
  • Add Information in README (#2565)
  • Add special members to the document (#2552)
  • Update install.rst (#2543)
  • Add ConvNet tutorials (#2337)
  • Fix typos (#2597 thanks @PeterTeng!, #2596 thanks @kdnk!, #2595 thanks @hvy!, #2714)
  • Remove TOC from the readme of the examples (#2731)

Examples

  • Added new example that uses a custom training loop (#2339)
  • Added --model argument in PTB example to specify model file (#2617)
  • Removed outdated comment from word2vec example (#2643, thanks @ysekky!)

Tests

  • Fixed epoch_detail behavior of mocked trainer (#2472, thanks @Hakuyume!)
  • Fixed LSTM Dropout test (#2504)
  • Fixed coding style in init docstring test (#2588)
  • Fixed contrastive test (#2604)
  • Fixed test case name of gaussian_kl_divergence test (#2605)
  • Fixed numerical instability in Highway test (#2650)
  • Added tests for show_name functionality of the computational graph (#2517, thanks @sergeant-wizard!)
  • Added corner case in F.stack test (#2532)
  • Use chainer.functions alias in tests (#2541)
  • Retry in unstable dropout test (#2542)
  • Skip external classes in init docstring test (#2583)
  • Improved test for max_pooling_2d in GPU cases (#2589, thanks @soramichi!)
  • Improved tests of manual schedule trigger (#2557 and #2568, thanks @Hakuyume!)

Others

  • Fixed .pep8 file for style checking (#2602)
  • Specify required protobuf version (#2663, thanks @wkentaro!)
chainer - v1.23.0

Published by mitmul over 7 years ago

This is a minor release. See the list for the complete list of solved issues and merged PRs.

Announcements

Important notes on v2 release schedule and future contributions

We have recently released v2.0.0b1. See here for the release note and how to try the latest v2 prerelease. The following note is repeated from the release note of v2.0.0b1.

We are planning the v2.0.0 release on May 16 (the schedule might change), and the final regular release of v1 is scheduled on May 9. Any PRs left after this final v1 update will not be merged, and at the release of v2.0.0 we will make the master branch refer to the current _v2 branch, which will make ongoing PRs conflict with the new target.

If you are planning to send a new PR to Chainer, we strongly recommend writing your code on top of the _v2 branch and setting it as the target branch. For ongoing PRs that cannot meet the deadline of the final regular v1 update, we also recommend switching the target to _v2 and starting to resolve any merge conflicts.

New features

Chainer

  • Add remove_variable option to build_computational_graph to render a function-only graph (#156 #2207)
  • Add a new trigger, ManualScheduleTrigger, for Trainer. Given a list of integers (or a single integer) and a triggering unit ('epoch' or 'iteration'), this trigger fires at all the timings given in the list (#2181, thanks @naoto0804!); see the sketch after this list.
  • Make a FunctionHook to mark forward and backward computation for CUDA profiler (#2209 #2210)
  • F.average for weighted average along a specified axis (#2377)
  • Add an optional reset method to iterators to avoid copy of the iterator made by MultiprocessIterator with spawn or forkserver mode multiprocessing (#2387)
    • It is currently implemented in MultiprocessIterator and used by Evaluator extension.
  • Add get_device_from_id and get_device_from_array, which functionally replace the existing get_device method; get_device is now marked as deprecated (#2391)
  • L.ResNet101Layers and L.ResNet152Layers with pretrained models (#2447)
  • F.pad_sequence. It takes sequences of different lengths and makes them into a matrix by padding each sequence with a given value and concatenating them (#2495)
  • Add ignore_label option to softmax_cross_entropy function (#2499)
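
A minimal sketch of ManualScheduleTrigger (the extension, epochs, and trainer are illustrative):

from chainer.training import extensions, triggers

trainer.extend(extensions.snapshot(),
               trigger=triggers.ManualScheduleTrigger([10, 30], 'epoch'))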

CuPy

In this release, we added the cupy.fuse decorator, which fuses applications of supported functions on ndarrays into a single kernel. It can be used as follows (with simple performance observation code). The resulting NVIDIA Visual Profiler output is shown below.

import cupy
import cupy.prof
import numpy

def f(x):
    return x + x + x + x + x

# This decorator fuses the kernels into one.
@cupy.fuse()
def g(x):
    return x + x + x + x + x

x = cupy.arange(40000000)

with cupy.prof.time_range('without fuse', color_id=0):
    f(x)

with cupy.prof.time_range('with fuse', color_id=1):
    g(x)

# You can pass numpy arrays transparently to the fused function as well.
# In this case, no JIT compilation is applied and it just falls back to plain NumPy API calls.
g(numpy.arange(40000000))

(Figure: NVIDIA Visual Profiler result of a simple example of CuPy fusion)

  • fuse (experimental): JIT kernel compilation of functions of arrays with a subset of CuPy API (#1697, #1713, #2297, thanks @asi1024!)
    • This function is currently not documented yet. We will add the documentation in the next release.
  • Add axis option to count_nonzero (#2296, Thanks @ShigekiKarita!)
  • Support indexing by an empty list in cupy.scatter_add and cupy.ndarray.__getitem__ (#2393, thanks @yuyu2172!)
  • Support cuDNN v6 (#2478)
  • Fix OpenMP dependency issue of cuSolver (#2487)
  • Check if NVTX exists before importing it (#2488)

Bug fixes

  • Fix batch_normalization with cuDNN for variables with non-standard dimensions (#2370)
  • Fixed an error caused by calling Variable.cleargrad() in optimizer hooks (#2389, #2390, thanks @Hakuyume!)
  • Stop creating cupy.cuda.Device on backward unless cupy.ndarray is used (#2394)
  • Fix reentrance bug of repeating tests (#2405)
  • Fix advanced indexing bugs of CuPy (#2419, #2420, thanks @yuyu2172!)
  • Abort installation if Python 3.5.0 is used (#2443)
  • Fix unittests for ResNet (#2474)
  • Fix installation problem about NVTX in Windows (#2476)
  • Fix an exception handling problem when importing Matplotlib (#2482)
  • Eliminate Dropout applied to the hidden state vector in LSTM (#2502)
  • Add batchsize checking to the type check part in NStepLSTM (#2500)
  • Skip tests that use axis option of count_nonzero when the NumPy version is under 1.12.0 (#2511)

Improvements

  • Add --frequency option to MNIST example that sets the interval of taking a snapshot (#2217, thanks @johshisha!)
  • Check if NumPy and CuPy are not mixed up in connection functions (#2255 #2454, thanks @soramichi!)
  • Make SerialIterator and MultiprocessIterator interchangeably serialized/deserialized (#2361, thanks @Hakuyume!)
  • Do not create cuda.Device in Variable.backward when CuPy is not used (#2395)
  • Allow optimizer-like object to be passed to StandardUpdater (#2407)
  • Improve Windows support of CuPy around integer types (#2422)
  • Improve Windows support of NVTX (#2423)
  • Use the linear_interpolate method instead of naive arithmetic operations in the GRU function for more computational efficiency (#2497)

Documents

  • Improve documentation of F.leaky_relu and F.relu (#2399), F.sigmoid and F.hard_sigmoid (#2398), F.softplus (#2401), and F.cast (#2486)
  • Fix typos in documents (#2417, thanks @quolc!) (#2471, thanks @ronekko!) (#2428, thanks @fukatani!)
  • Fix doctests in the reference of function hooks (#2432), numpy_cupy_allclose (#2433), type check (#2434), and time_range (#2435)
  • Add ResNetLayers to the document (#2489)
  • Update the sample visualization of computational graph of GoogLeNet (#2518)

Tests

  • Add tests of chainer.dataset.download (#1388)
  • Add tests of PlotReport.available (#2318)
  • Relax the condition to stabilize the tests of fmod (#2383 #2402)
  • Improve cuDNN tests by splitting it into two for Chainer and CuPy (#2430)
  • Remove rtd test on OS/X (#2438)
  • Skip protobuf files on docstring tests (#2448)
  • Suppress FutureWarning in Travis CI (#2456)
  • Fix typo in a cuDNN test (#2457)
  • Skip tests of itruediv when NumPy version is older than 1.10 (#2459)
  • Downsize the bottleneck test cases to reduce the test time by 30% (#2461)
  • Replace the use of deprecated gradient_check.assert_allclose by testing.assert_allclose (#2463)
  • Use skipUnless instead of if statement (#2477)
  • Insert retry to test_goodness_of_fit in CuPy because it sometimes fails (#2494)
chainer - v2.0.0b1

Published by beam2d over 7 years ago

This is the beta version of v2.0.0. See the list for the complete list of solved issues and merged PRs.

Try this v2 beta version by installing Chainer with the following command:

pip install chainer --pre

Note that the --pre option is mandatory to install the v2 beta; otherwise v1 is installed. If you want to use CUDA/cuDNN, you also have to install cupy separately.

pip install cupy

Any feedback via issues/PRs/forums/etc. is appreciated!!!

Important notes on future contributions

We are planning the v2.0.0 release on May 16 (the schedule might change), and the final regular release of v1 is scheduled on May 9. Any PRs left after this final v1 update will not be merged, and at the release of v2.0.0 we will make the master branch refer to the current _v2 branch, which will make ongoing PRs conflict with the new target.

If you are planning to send a new PR to Chainer, we strongly recommend writing your code on top of the _v2 branch and setting it as the target branch. For ongoing PRs that cannot meet the deadline of the final regular v1 update, we also recommend switching the target to _v2 and starting to resolve any merge conflicts.

New features

  • Variable-wise update rule (#2008, #2137)
    • Every parameter variable now holds an update_rule attribute set by the optimizer. Users can edit the hyperparameters of each update rule to customize the optimization configuration per parameter (e.g. using a different learning rate); see the sketch after this list. Each update rule can also have its own hook functions (e.g. for applying weight decay only to weight matrices).
  • Implement aggressive buffer release (#2452)
    • We changed the object structure of the computational graph and variables. The Variable object is no longer part of the computational graph; instead, it holds a reference to a VariableNode object, which is part of the graph. Some functions let the variable node drop the array buffer so that memory consumption is reduced. In this release, only the most popular functions (relu, arithmetic operators, concat, split_axis) support this feature.
    • According to the preliminary benchmark in #2452, it reduces the memory consumption up to 33% in modern convnets.
  • Lazy typecheck (#1437, #2136)
    • The API of type checking is slightly changed. This change lowers the overhead of type checking when the code passes the checks.
  • Add use_cudnn mode (#2188)
    • We removed use_cudnn argument from many functions. Whether to use cuDNN is now configured by chainer.config.use_cudnn.
  • Uninitialized variable and parameter (#1967, #2072)
    • Variable is now allowed to have an uninitialized data array. This change simplifies the handling of uninitialized parameters of links.
    • Change Variable class to share its actual data and gradient arrays across its copied instances including the initialized/uninitialized state (#2101)
  • Widen data types that Evaluator can accept (#1654)
    • Formerly, we could only feed NumPy and CuPy ndarrays to the evaluation function of chainer.training.extensions.Evaluator. Now this restriction is removed and arbitrary data types are allowed.
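
A minimal sketch of per-parameter configuration via update_rule (model is a hypothetical chainer.Link):

import chainer

optimizer = chainer.optimizers.SGD(lr=0.01)
optimizer.setup(model)
for param in model.params():
    if param.name == 'b':
        param.update_rule.hyperparam.lr = 0.001  # a smaller learning rate for biases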

Removed APIs

  • chainer.init_weight and chainer.initializers.init_weight (#2465)
  • Aliases to links under the chainer.functions namespace (e.g. chainer.functions.Linear), which had been left for backward compatibility (#2330)
  • chainer.functions.Parameter (#2331)
  • chainer.functions.caffe.CaffeFunction, which had been left for backward compatibility (#2329)
  • Variable.volatile attribute (#2013, #2356)
    • Users are recommended to use chainer.no_backprop_mode and chainer.force_backprop_mode to control the construction of computational graphs instead.
    • chainer.Flag and its concrete instances (chainer.ON, chainer.OFF, and chainer.AUTO) are also removed.

Updated behaviors of APIs

  • Change the initial value of the forget-gate bias vector in LSTM to 1. This initialization technique is common nowadays, and many deep learning frameworks adopt it. (#2425) Thanks @icoxfog417!
  • chainer.initializer.get_fans: align the definition of fan_out with other frameworks. (#1774) Thanks @dsanno!

Others

  • Improve the format of chainer.config.show (#2335, #2336)
  • Set dependency of Chainer on CuPy explicitly in setup.py (#2275)
  • Document improvement (#2238, #2315, #2354)
  • Add a section about chainer.config.train to the tutorial (#2313)
  • Use __len__ and the shape property directly, without accessing the data property, in some examples, because the arguments a Link receives may be plain ndarrays (#2378)

Planned features of the future releases

(Note: some features might be dropped in the actual releases)

  • PyCharm-friendly registration of parameters and child links to links and chains.
  • chainer.config.deterministic, which determines the stochasticity of some Chainer functions.
  • Releasing training graph before validation.
  • Loss functions that output loss values without reduction
  • Supporting buffer release in more functions
  • Clean up APIs of linear-like and convolution-like links
chainer - v1.22.0

Published by beam2d over 7 years ago

This is a minor release. See the list for the complete list of solved issues and merged PRs.

Announcements

We have created two Slack teams (one for English speakers and the other for Japanese speakers) for quick communications related to Chainer. If you want to join, send a request for invitation via the following pages.

New features

Chainer

  • F.fmod: Differentiable elementwise mod function (#1617)
  • Add test argument to L.InceptionBN.__call__ (#1662, thanks @cympfh!)
  • concat_examples supports non-array objects (#2140)
  • chainer.training.extensions.MicroAverage: An extension that calculates the micro average precision (#2345)
  • F.pad: Differentiable padding function (#2253, thanks @KotaroSetoyama)
  • chainer.Variable.transpose (#2359, thanks @yuyu2172!)
  • chainer.Variable.reshape (#2236, Thanks @yuyu2172!)

CuPy

  • cupy.prof.time_range and cupy.prof.TimeRangeDecorator:
    A helper context manager (time_range) and decorator (TimeRangeDecorator) that use NVTX to mark function calls with ranges in the NVIDIA profiler (#2179, thanks @uchida!)
  • cupy.scatter_add supports indexing with multiple integer arrays (#2202, thanks @yuyu2172!)
  • Add a thin wrapper of cuSolver (#2270)
  • cupy.meshgrid (#2274 Thanks @yuyu2172!)
  • cupy.flip and cupy.rot90 (#2350, thanks @tsurumeso!)
  • cupy.matmul supports all real types (#2151)

Bug fixes

  • Seq2seq example: Select the correct device in multi-GPU environment (#2222)
  • Remove matplotlib.use from PlotReport (#2266)
  • Fix the MNIST example error that occurs when Matplotlib doesn't exist (#2317). It was reported in #2276. (Thanks @ISP-Tetsuro-Kitajima!)
  • Add the condition to avoid importing Matplotlib in the MNIST example if it’s not been installed (#2277)
  • Raise ImportError rather than RuntimeError when attempting to import broken CuPy (#2307)
  • Fix the wrong behavior of epoch_detail of MultiprocessIterator and SerialIterator (#2327, thanks @Hakuyume)
  • Enable deterministic option of F.convolution_2d when cuDNN v3 is used (#2355)
  • Remove unnecessary ReLU operator in VGG16Layers (#2364)
  • Support the advanced indexing with an empty list (#2369, thanks @yuyu2172!)
  • Fix cupy.array to not modify the original array when copy=False and ndmin are specified (#2396). It was reported at #2392 (Thanks @Signull8192!).

Improvements

  • Improve cuDNN v5 support (#2260)
  • Improvement on speed of cupy.ndarray.dot when two input arrays are both 1-D (#2282)
  • NStepLSTM accepts None for initial states to initialize them with zeroes (#2289)
  • F.gaussian supports zero-dimensional array (#2342)
  • Set dtype option for cupy.random in initializer (#2376)
  • TheanoFunction supports Theano>=0.9 by using OrderedDict as known_grad argument (#2403)

Documents

  • Add code examples of crelu, elu, tanh, broadcast, broadcast_to, and concat to their docstrings (#2229, thanks @shiba24!)
  • Fix typo in functions/normalization/batch_normalization.py (#2268, thanks @crcrpar!)
  • Include space2depth and depth2space to documents ( #2273, thanks @yuyu2172!)
  • Fix a typo in tips.rst (#2304, thanks @soramichi!)
  • Fix a typo in test_sigmoid_cross_entropy (#2324, thanks @Hakuyume!)
  • Use F.Linear instead of L.Linear in the docstring of the LSTM function (#2333, thanks @dsanno!)
  • Fix doctest result (#2340)
  • Improve docs of F.hard_sigmoid (#2374)

Tests

  • Add tests to check consistency of backward methods of 2D and ND pooling layers (#2124, Thanks @takagi!)
  • Stabilize the unit tests of test_matmul (#2326)
chainer - v2.0.0a1

Published by beam2d over 7 years ago

This is the alpha version of v2.0.0. See the list for the complete list of solved issues and merged PRs.

Try this v2 alpha version by installing Chainer with the following command:

pip install chainer --pre

Note that the --pre option is mandatory to install the v2 alpha; otherwise v1 is installed. If you want to use CUDA/cuDNN, you also have to install cupy separately.

pip install cupy

Any feedback via issues/PRs/forums/etc. is appreciated!!!

CuPy separation

  • We started developing CuPy as a project separate from, but still closely related to, Chainer. The official repository of CuPy is found here: https://github.com/cupy/cupy. CuPy-related modules are moved from the Chainer repository to CuPy's. To enable CUDA modules in Chainer v2, users have to install CuPy separately (the installation order does not matter) (#1909, #2283).
  • Note that the mainstream development of CuPy is still taking place at the pfnet/chainer repository until the final release of Chainer v2.0.0. Any compatibility-breaking changes would be made into the cupy/cupy repository.

New features

  • Unified configuration: the following flags to configure Chainer are now managed by the chainer.config and chainer.global_config objects (#2145); see the sketch after this list.
    • debug mode (which had been configured by set_debug)
    • enable_backprop mode (which had been configured by no_backprop_mode)
    • train mode (which had been configured by train or test arguments of many methods)
    • type_check mode (which had been configured by Function.type_check_enable)
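
A minimal sketch of the unified configuration (model and x are hypothetical):

import chainer

chainer.config.train = False  # switch to test mode
with chainer.using_config('enable_backprop', False):
    y = model(x)  # no computational graph is constructed inside this block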

Removed APIs

  • CuPy array creation functions left in the chainer.cuda namespace (chainer.cuda.empty, chainer.cuda.empty_like, chainer.cuda.zeros, chainer.cuda.zeros_like, chainer.cuda.ones, chainer.cuda.ones_like, chainer.cuda.full, chainer.cuda.full_like) (#2161)
  • chainer.FunctionSet (#2015, #2132)
  • wscale option of link implementations. Users are recommended to change the scale of the distribution from which initial weight values are drawn with Initializer. (#2163)
  • train / test arguments of following methods/classes (#2186)
    • F.dropout, F.n_step_lstm, F.zoneout,
    • L.BatchNormalization, L.CaffeFunction, L.InceptionBN, L.NStepLSTM, L.ResNet50Layers, L.StatefulZoneoutLSTM, L.VGG16Layers
  • snapshot and snapshot_object: trigger option is removed. Use the argument with the same name of Trainer.extend instead. (#2017, #2204)

Updated behaviors of APIs

  • Variable.__len__: in v1 it returned the number of elements in the wrapped array, which was inconsistent with the ndarrays of NumPy and CuPy, whose __len__ returns the length of the first dimension. In v2, the behavior of Variable.__len__ is aligned with ndarray's. (#1792, #2153)
  • Evaluator extension: It now automatically switches to test mode during the evaluation.
  • chainer.initializers.init_weight: scale option is removed (#2164)
  • F.split_axis: The default value of force_tuple option is set to True. With this change, users can assume the returned type of this function is a tuple by default. (#2022, #2187).

Updated examples

  • Examples are updated to use chainer.config.train to switch between training/test mode. In particular, some examples are simplified thanks to Evaluator handling the mode automatically. (#2186)

Plans for the future releases

We are planning the beta release in March. The following features are planned to be included in the future pre-releases of Chainer v2 (some may be included after the beta).

  • Optimizer with UpdateRule. It enables us to customize the hyperparameters (e.g. learning rate, whether to apply weight decay) for each parameter.
  • Uninitialized variable. It will be used for implementing the parameter-shape placeholder.
  • PyCharm-friendly registration of parameters and child links to links and chains.
  • Other configuration flags: use_cudnn, deterministic
  • Removing volatile flag from Variable. The graph construction is then controlled by chainer.config.enable_backprop flag.
  • Releasing training graph before validation.
chainer - v1.21.0

Published by unnonouno over 7 years ago

This is a minor release. See the list for the complete list of solved issues and merged PRs.

New features

  • Chainer
    • Add UnpoolingND function (#1517)
    • Add chainer.functions.fliplr and chainer.functions.flipud (#2239, #2249)
    • Add marker and grid attributes to PlotReport (#2123)
    • Trainer.extend can accept lambda functions as an extension (#1930, thanks @soramichi)
    • Add strict option to HDF5Deserializer and NpzDeserializer (#2053, #2195)
    • Make chainer.dataset.to_device public (#2087)
  • CuPy
    • Add cupy.cumsum without axis argument support (#2235)
    • Make cupy.ndarray.__setitem__ accept boolean arrays (#2099, thanks @yuyu2172)
    • Make cupy.scatter_add accept boolean arrays (#2175, thanks @yuyu2172)
    • Add order option to cupy.ndarray.copy (#2178, thanks @yuyu2172)
    • cupy.ndarray.__setitem__ can accept multiple integer arrays (#2126, thanks @yuyu2172)
    • Distributions in cupy.random can accept an array of float values as the loc and scale arguments (#2169)
    • cupy.ndarray.__getitem__ support mask whose shape is different from that of the array (#2201, thanks @yuyu2172)
    • Change behavior of complete_slice to fit numpy 1.12.0 (#2114)
    • Add wrappers for FindConvolution functions of cuDNN (#2147, thanks @yuyu2172)
    • Add global variables introduced in cuDNN v5.1 (#2148, thanks @yuyu2172)
  • Support Python 3.6 (#2115)
  • Support NumPy 1.12 (#2116)

Bug fixes

  • cupy.ndarray.shape.__set__ accepts scalar values (#1969, #2129)
  • Fix import error of PReLU (#2007, #2128, thanks @kamo-naoyuki)
  • cupy.ndarray.get works even if the current device is not the device where the array is located (#2141, #2142, thanks @yuyu2172)
  • cupy.ndarray.get can accept array with size 0 (#2172)
  • Fix returned values of Stream.done and Event.done (#2237)
  • Fix an error caused by the difference between Python int and C long (#2156)
  • Fix to handle NumPy's two 32-bit integer dtypes ('i' and 'l') correctly (#2184)
  • Variable.backward correctly handle cudaErrorNoDevice (#2245, thanks @niboshi)
  • Remove redundant error raise when import of CuPy failed (#2246)
  • Forbid to initialize links with uninitialized parameters (#2199, #2200)
  • Add the wrongly-missing eps to the calculated variance in fixed_batch_normalization (#2214)
  • Fix a bug related to GIL in handling cuBLAS functions (#2226)
  • Correct error handling in making the cache directory for CUDA kernels (#2264)

Improvements

  • Performance improvement
    • Speed up in cupy.get_array_module (#2162)
    • Improve cupy.concatenate (#2144, #2242, #2233, #2248, #2251)
    • Improve the performance of asfortranarray (#2176, thanks @yuyu2172)
    • Improve cupy.ndarray.device (#2247)
    • Improve the performance of cupy.ndarray.__setitem__ (#2227)
    • Improve the performance of Variable.backward (#2224)
    • Improve the handling of tensors in cuDNN by improving the creation of cuDNN tensor descriptors (#2228)
    • Improve GPU calculation of the convolution_2d function (#2192, thanks @fukatani)
    • Improve the performance of transpose_sequence (#2212)
    • Improve the performance of NStepLSTM (#2219, #2261)
    • Improve the performance of cupy.ndarray.__getitem__ when inputs are integer arrays. (#2134, thanks @yuyu2172)
    • Improve the performance of boolean array indexing of cupy.ndarray.__getitem__ (#2174, thanks @yuyu2172)
    • Improve the performance of cupy.array_split (#2165)
    • Improve the performance of cupy.asarray and cupy.asanyarray by calling cupy.array implementation directly (#2225)
  • cuda.get_array_module can accept Variable (#2050)
  • The memory pool in CuPy invokes GC when allocating memory if memory is not enough (#2257, #2262)
  • Improve the error message of cupy.reshape when the input array is invalid (#2193, thanks @fukatani)
  • Make the error message of type checking in Chainer functions more readable (#2082)
  • Improve error messages in SerialIterator (#2160, #2234, thanks @LukasDrude)

Others

  • Improve documents of clipped_relu and relu, sigmoid and linear. (#2208, thanks @shiba24)
  • Remove fs attribute from CaffeFunction document (#2130, #2131)
  • Improve document of chainer.functions.sigmoid function (#1907)
  • Add chainer.functions.n_stem_lstm to the document (#2039)
  • Add document on cupy.scatter_add.at (#2206, thanks @yuyu2172)
  • Improve document of chainer.functions.linear (#2189, thanks @shiba24)
  • Add for_orders and for_CF_orders decorator for unit tests (#2105, thanks @yuyu2172)
  • Fix example to download ResNet model manually (#1904, #2196, thanks @aonotas)
  • Add Dockerfile for Python3 (#2190)
chainer - v1.20.0.1

Published by beam2d over 7 years ago

This is a hot-fix release for fixing the issue #2120 via the PR #2121 (thanks @owruby!). It affects users who use Trainer with non-standard optimizer names (e.g. multiple optimizers), including the official DCGAN example. We recommend all users who are using v1.20.0 to upgrade to this version.

chainer - v1.20.0

Published by mitmul almost 8 years ago

This is a minor release. See the list for the complete list of solved issues and merged PRs.

New features

  • Chainer
    • Theano function feature (#1103)
    • Support N-dimensional average pooling function (#1391, thanks @takagi!)
    • Add DeCov loss (https://arxiv.org/abs/1511.06068) (#1942, thanks @longjie!)
    • Add Upsampling2D Function (#1951)
    • Add PlotReport (#2001, thanks @Kiikurage!)
  • CuPy
    • Make ndarray.__setitem__ handle a single integer array (#2040, thanks @yuyu2172!)
    • Add cupy.scatter_add with an interface identical to ndarray.__setitem__ (#2043, thanks @yuyu2172!)
    • Optimize cupy.asfortranarray (#2049, thanks @yuyu2172!)
    • Add cupy.pad (#1856, thanks @KotaroSetoyama!)
    • Add cupy.choose (#2059)
    • Add order option to cupy.ndarray (#2061, thanks @yuyu2172!)
    • Support cupy.logspace (#2063, thanks @fukatani!)
    • Enable concurrent kernel execution with ElementwiseKernel and ReductionKernel (#2085, thanks @koji123!)
    • Add order option to cupy.empty (#2103, thanks @yuyu2172!)
    • Add order option to cupy.zeros (#2104, thanks @yuyu2172!)

Bug fixes

  • Fix bug about multi GPU in LSTM (#1882, thanks @kamo-naoyuki!)
  • Fix optimizer add-hooks with cuda kernels (#1947, thanks @ShigekiKarita!)
  • Fix output size mismatch in InceptionBN (#1988, thanks @tatsy!)
  • Fix self.assertEqual() in test_cifar.py (#2009)
  • Fix mock method (#2011)
  • Fix doctest of upsampling_2d (#2012)
  • Add 'with nogil' to cudaStreamWaitEvent call (#2025)
  • Fix backward accumulation for zero-dim array (#2026)
  • Fix installation failure on Windows (#2031, thanks @dsanno!)
  • Use cudnn.create_filter_descriptor (#2044, thanks @aonotas!)
  • Fix max pooling backward cpu (#2062, thanks @ronekko!)
  • Make cupy.ndarray.astype handle conversion between np.longlong and np.int64 (#2075, thanks @yuyu2172!)
  • Fix deadlock issue on multi-threaded application (#2084, thanks @koji123!)
  • Add cupy.ext to packages in the setup function call (#2094)
  • Fix doctest in scatter_add (#2095)
  • Fix the simulated bug behaviour of packbits (#2112)

Improvements

  • Check contiguity of CuPy array. It can reduce memory usage (#1915)
  • Call model.to_gpu() in StandardUpdater (#2024, thanks @td2sk!)
  • Add nogil to CUDA API functions (#2028)
  • Remove parameters from function declarations in cupy_*.h (#2069)
  • Make Dropout object immutable (#2071)
  • Improve PlotReport (#2073)
  • Change name of kernels for scatter_* to be more consistent with other kernels (#2097, thanks @yuyu2172!)
  • Fix to print the original error when cupy is not successfully built (#2108)

Others

  • Documentation
    • Fix logsoftmax math (#1957, thanks @fukatani!)
    • Fix document of chainer.utils.experimental (#1966, #1972)
    • Fix doctest of chainer.utils.experimental (#1972)
    • Fix comments problem (#2004, thanks @hisa0507!)
    • Add NumPy license and fix contribution guide (#2035)
    • Add document of n_step_lstm (#2039)
    • Fix hard sigmoid reference (#2041, thanks @knorth55!)
    • Fix document format of n_step_lstm (#2066)
    • Fix grammatical issues in the tutorial (#2079)
    • Fix typo (concat_example) (#2088)
    • Fix markup language (#2089)
  • Test
    • Fix unit tests of CIFAR dataset loading (#1515, #2009)
    • Disable slow tests in AppVeyor (#2010)
    • Add tests for how extensions are executed (issue #2006) (#2032, thanks @soramichi!)
    • Fix test code for convolution_2d (#2037)
    • Use autospec in mock creation. (#2038)
    • Add @testing.gpu (#2068)
    • Fix test for cupy.pad by passing lists instead of passing ndarrays (#2091)
    • Use sys.modules to emulate situation that h5py is not installed (#2093)
    • Use for_dtypes_combination decorator for array indexing tests (#2106, thanks @yuyu2172!)
  • Example
    • Add DCGAN example (#1892)
    • Use shape placeholders in examples (#1989)
    • Improve imagenet example performance (#2000)
chainer - v1.19.0

Published by beam2d almost 8 years ago

This is a minor release. See the list for the complete list of solved issues and merged PRs.

Highlight

Easy-to-use pretrained model implementations for computer vision are added. VGG16 and ResNet50 are available. For example, you can load and use VGG16 as a feature extractor as follows.

from chainer.links import VGG16Layers
from PIL import Image


model = VGG16Layers()
img = Image.open("path/to/image.jpg")
feature = model.extract([img], layers=["fc7"])["fc7"]

New features

  • Add experimental feature function (#1893, #1977)
  • New Functions and Links
    • N-dimensional max pooling (#1353)
    • VGG16Layers and ResNet50Layers with pretrained models (#1677, #1961)
    • Mean absolute error (#1881, #1899)
    • LayerNormalization (#1950, #1981, thanks @loofahcus for improving it!)
    • r2_score without sample weights (#1896, thanks @mottodora!)
    • class_weight argument of softmax_cross_entropy (#1968; see the sketch after this list)
  • CuPy
    • Array integer indexing for getter (#1832, #1863, #1921, #1922, #1937, thanks @yuyu2172!)
    • Boolean array indexing for getter (#1840, thanks @wkentaro!)
    • cupy.where with only the condition array passed (#1925, thanks @yuyu2172!)
    • packbits and unpackbits without axis support (#1935, #1965)
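
As an example of the new class_weight option, a minimal sketch that up-weights two minority classes (the array values below are illustrative):

import numpy as np
import chainer.functions as F

# Logits for two samples over three classes, and their ground-truth labels
x = np.array([[3.0, 1.0, 0.2], [0.5, 2.0, 0.3]], dtype=np.float32)
t = np.array([0, 1], dtype=np.int32)

# Per-class weights, e.g. to counter class imbalance
w = np.array([1.0, 2.0, 2.0], dtype=np.float32)
loss = F.softmax_cross_entropy(x, t, class_weight=w)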

Bug fixes

  • Link.copyparams on uninitialized parameters (#1908)
  • Fix cleargrads and zerograds for uninitialized params (#1939)
  • Installation fix for Cython 0.25.2 (#1986)
  • Fix GIL dead lock on CUDA module load (#1995, thanks @soramichi for reporting it!)

Improvements

  • Code improvements of F.concat (#1897, thanks @fukatani!)
  • Make PrintReport and ProgressBar work on Windows (#1730, thanks @tapdo!)
  • Include cupy_stdint.h on Windows (#1916)
  • Remove a redundant assignment in softmax_cross_entropy (#1936)
  • Use direct imports for better PyCharm support (#1911)
  • Fix eps check for cuDNN batch normalization (#1955)
  • Reorganize cuDNN header file (#1676)
  • Add docs and tests for parameter shape placeholder for MLPConvolution2D (#1954, thanks @fukatani!)
  • Simplify cupy.matmul implementation (#1901)
  • Refactor the memory allocator codes (#1978)
  • Test fix
    • Loosen matmul test condition (#1889)
    • Fix Typo in testcase TestNorm (#1992, thanks @boeddeker!)
    • Fix the use of assert_raises_regex (#1984)
    • Suppress cudaMalloc before fork in test (#1912)

Others

  • Add examples of ConvNet for CIFAR10/100 datasets (#1949, #1982, #1990)
  • Document fix
    • Add F.binary_accuracy to the reference manual (#1898, thanks @ronekko!)
    • Fix TupleDataset document (#1902, thanks @soramichi!)
    • reStructuredText formatting (#1906)
    • Add MultiprocessIterator to the reference manual (#1923)
    • Fix PrintHook example codes (#1938)
    • Add cupy.nonzero and cupy.flatnonzero to the CuPy overview page (#1945)
    • Fix typos and formatting (#1958, thanks @fukatani!)
    • Fix the document of bernoulli_nll (#1959, thanks @fukatani!)
    • Fix the link to NIN paper (#1975, thanks @fukatani!)
    • Fix the document of MemoryPointer.copy_from_host_async (#1980, thanks @yuyu2172!)
    • Add F.classification_summary to the reference manual (#1993)
    • Add L.NStepLSTM to the reference manual (#1999)
  • Update README (#1903)
  • Simplify the install script (#1890)
chainer - v1.18.0

Published by bkvogel almost 8 years ago

This is a minor release. See here for the complete list of solved issues and merged PRs.

New features

  • Add elapsed_time to Trainer and LogReport (#1731)
  • Implement pinned memory (#1707, #1865)
  • Extract evaluation procedure from training extension Evaluator (#1737)
  • Support CUDA 8 (#1821)
  • Make IntervalTrigger stateful to fix a PTB example bug (#1384, #1423, thanks @jekbradbury!)
  • Add deterministic option into convolution_2d and deconvolution_2d (#1321)
  • New Functions/Links
    • Add subpixel space2depth and depth2space transformations (#1748, #1749, thanks @tjtorres!)
    • Support square function (#1537, thanks @fukatani!)
    • nanmin and nanmax (#1645)
    • Implement tile function (#1760); see the sketch after this list
    • Add functions.absolute (#1720)
    • Squared difference (#1808, thanks @aonotas!)
    • Implement squeeze function (#1742)
    • Implement zoneout (#1339, #1457, thanks @tohmae!)
  • New CuPy functions
    • cupy.matmul (#1794, thanks @boeddeker!)
    • Add norm function in cupy.linalg (#1833, thanks @ywatanabex!)
    • Sampling method for Gumbel distribution (#1839)
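
Most of the new functions mirror their NumPy counterparts while remaining differentiable. A minimal sketch of tile and squeeze:

import numpy as np
import chainer.functions as F
from chainer import Variable

x = Variable(np.array([[1.0, 2.0]], dtype=np.float32))  # shape (1, 2)
y = F.tile(x, (2, 1))  # repeat along the first axis -> shape (2, 2)
z = F.squeeze(x)       # remove the size-1 axis      -> shape (2,)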

Bug fixes

  • Use old libstdc++ ABI for Anaconda (#1880)
  • Documentation bug fixes (#1872, #1870, #1871, #1874, thanks @fukatani!)
  • Fix corrupted math (#1875, thanks @fukatani!)
  • Fix cleargrads and zerograds for uninitialized parameters (#1811)
  • Fix scale of elapsed_time for GPU case (#1779, thanks @ywatanabex)
  • Revert "Extract evaluation procedure from training extension Evaluator" (#1790)
  • Fix cuda build on macOS (#1819)
  • Fix caffe function name (#1822)
  • Use amax instead of maximum (#1841)
  • Fix the multi-GPU MNIST example (#1837)
  • Fix RandomState.__del__ to check if initialization succeeded (#1830)
  • Append an except clause to functions that return int (#1829)
  • Initialize grad before Optimizer calls update method (#1824)
  • Fix issue of user defined kernel (#1807, thanks @anaruse)
  • Fix broken links in the convolution_2d and convolution_nd documents (#1798, thanks @mmurooka)
  • Fix dot for 0-dim and 1-dim array (#1784)
  • Fix link placeholder problem caused by ndarray input. (#1810, thanks @dsanno)
  • Fix dropout state bug on rnn API (#1804)

Improvements

  • Remove unnecessary files in Docker script (#1842)
  • Stop type checking in numerical_grad (#1683)
  • Fix norm result dtype and test it (#1859)
  • Support FP16 and FP64 in N-dimensional convolution link (#1663)
  • Move part of concatenate to cupy.core.core (#1864, thanks @yuyu2172)
  • Improve hook (#1681)
  • Add seed option to split functions (#1795)
  • Add parameter shape placeholder support to deconvolution_2d (#1673)
  • Move broadcast_to to cupy.core.core (#1855)
  • Add __init__.pxd for correct cimport (#1834)
  • Fix order of ctypedef in cudnn.pxd (#1835)
  • Reorganize header file (cupy_cuda.h) (#1836)
  • Improve col2im and im2col (#1685)
  • testing.parameterize can accept functions (#1356)
  • Use destructive astype in the convolution_nd function for efficiency (#1717)

Others

  • Try to build shared library to check dependent libraries (#1857)
  • Revert use-cythonize branch (#1878)
  • Simplify build process on Read the Docs (#1772)
  • Documentation improvements (#1826, #1758, #1868, #1862, #1853, #1813, #1801, #1800, #1787, #1785, #1780)
chainer - v1.17.0

Published by beam2d about 8 years ago

This is a minor release. See here for the complete list of solved issues and merged PRs.

Announcements

  • We are planning to change the release schedule. We will announce it as soon as possible.
  • We are also planning the first major version update. It is expected to be done within a few months. We will announce the plan at the next minor release.

New features

  • New Functions and Links
    • arccos, arcsin, arctan: elementwise inverse trigonometric functions (#1678)
    • deconvolution_nd, DeconvolutionND: N-dimensional deconvolution function and link (#1408)
    • dstack: concatenate variables along the third (depth) axis (#1613, thanks @fukatani!)
  • Support parameter shape placeholder for LSTM and StatelessLSTM (#1719); see the sketch after this list
  • crf1d supports variable-length input sequences (#1451)
  • Test case decorator to generate test functions for checking a unary Function (#1534, #1741 (#1740), #1747)
  • Dockerfile for quickly trying Chainer in an isolated environment (#1756 (#1458))
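
With the shape placeholder, in_size can be omitted and the parameter shapes are inferred from the first input. A minimal sketch:

import numpy as np
import chainer.links as L

lstm = L.LSTM(50)  # only out_size is given; in_size is inferred
x = np.zeros((8, 30), dtype=np.float32)
h = lstm(x)        # parameters are initialized here; h.shape == (8, 50)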

Bug fixes

  • Allow NumPy integers in parameter shapes of Link (#1698)
  • Fix get_device to accept the future package's int type and Device objects (#1629, #1778)
  • Fix a bug of BatchNormalization used with model copying (#1767)
  • Fix a bug of MultiprocessIterator used in Evaluator (#1768)
  • Fix the error of “_thread_local.default_backprop not found” in a multi-threaded case (#1705, #1708, thanks @Hiroshiba!)
  • Fix Variable.addgrad to support None grads (#1680, #1770)
  • Fix ImageDataset for Pillow 2 (#1738)
  • Fix cupy.sqrt to follow the latest version of NumPy and deprecate sqrt_fixed (#1764 (#1752), #1769)
  • Fix an error of ImageNet example saying 'Convolution2D' object has no attribute 'W' (#1776 (#1691))
  • Fix installation on Mac (#1773)
  • Fix typo in error messages (#1715, thanks @crcrpar!)
  • Fix an error on building documents of CuPy (#1721)

Improvements

  • Improve the performance of MultiprocessIterator by using shared memory (#1652)
  • n_step_lstm and convolution_nd accept raw arrays (#1703, #1704)
  • Better API compatibility of cupy.squeeze to that of NumPy (#1746)
  • Allow CuPy to call CUDA async APIs with callbacks (#1709)

Others

  • Add a document that explains different behaviors between NumPy and CuPy (#1723 (#557, #610, #815))
  • Fix AlexNet architecture in the ImageNet example (#1650)
  • Implementation improvements (#1684, #1710, #1711, #1722, #1725, #1735, #1745)
  • Documentation improvements (#1693 (#1690), #1714, #1736 (#1529), #1750 (#1640))
  • Test improvements and updates (#1682, #1701, #1743)
chainer - v1.16.0

Published by okuta about 8 years ago

This is a minor release. See here for the complete list of solved issues and merged PRs.

New features

  • no_backprop_mode and force_backprop_mode are added. In a no_backprop_mode context, Variables with volatile='auto' behave like volatile variables (no computational graph is built), and in a force_backprop_mode context they behave like non-volatile variables. (#1521)
  • Add profiling and optimization utilities that wrap NVTX to work with nvprof and nvvp (#1407, thanks @anaruse!)
  • Support cuDNN RNN (#1146, #1174)
  • Support the float16 type in the Linear and Convolution2D links, and add an example (#1469)
  • Accept dtype to add parameters via Link's __init__ method (#1631, #1633)
  • New Chainer functions
    • floor: backprop-able floor function (#1621)
    • ceil: backprop-able ceil function (#1622)
    • forget: a mechanism that does not store the intermediate results of the forward propagation and instead recalculates them in the backward propagation. It is useful for reducing memory usage at the cost of extra computation time (#1332); see the sketch after this list.
  • New Chainer links
    • Highway: the building block of the highway network (#1317, thanks @fukatani!)
    • Dilated convolution layer (#1335, thanks @yasunorikudo!)
  • New Trainer extensions
    • Create training.triggers module and put concrete triggers there (#1506)
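
As a sketch of the forget function named above: the wrapped computation is re-executed during backprop instead of keeping its intermediate results.

import numpy as np
import chainer.functions as F
from chainer import Variable

x = Variable(np.random.randn(32, 100).astype(np.float32))

# The intermediates of the lambda are discarded after the forward pass
# and recomputed in the backward pass, trading time for memory
y = F.forget(lambda v: F.relu(v) * 2, x)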

Bug fixes

  • CuPy
    • Remove nvtx on Windows for a while (#1672)
    • Count words before trimming data. (#1642)
    • Fix build bug (#1674)

Improvements

  • Chainer
    • Add docstring of math operators of Variable (#1382, #1527)
    • Support the protobuf C++ implementation (#1442, #1443)
    • Fix the word2vec example to use trainer modules (#1500)
    • Use the parameter shape placeholder in the MNIST and ImageNet examples and the tutorial (#1551, #1584)
    • Reduce the memory usage in cupy.tensordot (#1603)
    • Add use_cudnn flag to BatchNormalization (#1626)
    • Add assertion that checks the positivity of output sizes in (de)convolutions (#1627)
    • Raise an exception on Python 3.5.0. (#1655, #1656)
    • Use tanh instead of exp (#1657)
  • CuPy
    • Fix the behavior of cupy.clip when a_min is greater than a_max (#1611, thanks @asi1024!)
    • Reduce compile warnings in Cython code (#1667)
    • Remove unused variable in trigonometric functions (#1668)
  • Others
    • Remove some dependencies on Chainer in tests of CuPy (#1616)
    • Add slow attr for test (#1637)
    • Some minor test fixes (#1635, #1644, #1647, #1661, #1665, #1666, #1669, #1670)
    • Some minor document fixes (#1632, #1636, #1639, #1643, #1671, #1679, thanks @mmurooka!)
    • Some minor webpage fixes (#1623)
    • Add license description in setup.py (#1638)
chainer - v1.15.0.1

Published by beam2d about 8 years ago

This is a hot-fix release that fixes a build issue. If you have already installed Chainer 1.15.0 successfully, you do not have to install 1.15.0.1.

For those who failed to install Chainer 1.15.0, it is recommended to install Chainer from PyPI (pip install chainer).

chainer - v1.15.0

Published by delta2323 about 8 years ago

This is a minor release. See here for the complete list of solved issues and merged PRs.

Recommendation for users

  • Variable.cleargrad, Link.cleargrads: Remove all gradient arrays of the variable(s) instead of filling them with zeros. This reduces the computational overhead of backward operations and the memory usage compared to Variable.zerograd and Link.zerograds, which are now deprecated (#1585). We therefore recommend using cleargrad(s) instead (#1528, #1598); see the sketch after this list.
  • GradientMethod.use_cleargrads: Forces optimizers to use cleargrad(s) internally. It uses zerograd(s) by default for backward compatibility. For the same reason as above, we recommend enabling this option whenever possible. (#1570)
  • Travis CI now checks the coding style of PRs with autopep8 (#1304), so we recommend that developers run autopep8 before making PRs.
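
A minimal sketch of the recommended pattern, assuming model (a Link computing a loss) and optimizer have already been set up:

# Clear (deallocate) gradients instead of zero-filling them
model.cleargrads()
loss = model(x, t)
loss.backward()
optimizer.update()

# Or let the optimizer clear gradients internally on each update
optimizer.use_cleargrads()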

New features

  • New Chainer functions
    • rsqrt: reciprocal of the square root (#1490)
    • sinh, cosh: hyperbolic sine and cosine functions (#1496)
    • log2, log10: logarithm functions to the base two and ten (#1508, #1519)
    • vstack: vertical concatenation of multiple tensors (#1535, thanks @fukatani!)
    • convolution_nd: multi dimensional convolution operation (#1556)
    • flatten: flatten a given array (#1588)
  • New Chainer links
    • ConvolutionND: multi dimensional convolution operation (#1556)
  • New Trainer extensions
    • observe_time, observe_lr: extensions that record the elapsed time and the current learning rate, respectively (#1472); see the sketch below
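
For instance, the learning-rate observer can be attached to a trainer as follows (a sketch assuming trainer is already built):

from chainer.training import extensions

# Adds the current learning rate of the 'main' optimizer to each report
trainer.extend(extensions.observe_lr())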

Bug fixes

  • Chainer
    • Fix initialization of weights of slave models in ParallelUpdater (#1576)
    • Fix the connectionist_temporal_classification function that could behave incorrectly in multi-GPU settings (#1589, #1590).
    • Fix the BatchNormalization link so that it does not calculate population statistics in finetune and test modes (#1580, thanks @moootom!)
  • CuPy
    • Fix the behavior of cupy.min and cupy.max when the array includes NaN (#1366, #1536)
    • Check the version of NVCC to determine whether to use cached kernels (#1348, #1539)
    • Fix the behavior of cupy.vstack and cupy.dstack when a single array is concatenated (#1561, #1562, #1567, #1568)
    • Fix wrongly set flags attribute of cupy.ndarray (#1571)

Improvements

  • Chainer and CuPy officially support cuDNN v5.1 (#1548)
  • Chainer
    • Initializer supports a dtype option that specifies the dtype of initialized arrays (#1295).
    • The ProgressBar extension shows the progress bar from the first iteration (#1430).
    • MNIST example takes snapshot only at the end of the training to reduce training time (#1577).
  • CuPy
    • Reduce the memory usage of some FP16 calculations by using cublasSgemmEx where possible (#1481).
  • Others
    • Implementation improvements (#1544, #1597)
    • Improvements on comments and documents (#1538, #1550, #1559, #1566, #1582, #1591, #1593, #1596)
    • Improvements on error and warning messages (#1453, #1454, #1552, #1554, #1596)
    • Installation improvements (#1541, #1557, thanks @tapdo for reporting the issue!)
chainer - v1.14.0

Published by unnonouno about 8 years ago

This is a minor release. See https://github.com/pfnet/chainer/milestone/29?closed=1 for the complete list of solved issues and merged PRs.

New features

  • New Functions/Links/Triggers:
    • blackout: Blackout loss function (#1073, #1261)
    • @ operator for Variable: Matmul operator (#1138)
    • classification_summary: Shows summary of classifier (#1340)
    • MaxValueTrigger and MinValueTrigger: triggers that fire when a specific value reaches its maximum/minimum (#1400, thanks @dsanno)
    • hstack: Horizontal stack function (#1475, thanks @fukatani)
    • tan: Tangent function (#1480)
    • sqrt: Square root function (#1488)
  • New other features
    • LSTM function supports variable-length inputs (#1209)
    • cuDNN batch normalization is supported (#1310)
    • Caffe function supports SLICE layer (#1361, thanks @amitibo)
    • snapshot and snapshot_object accept a trigger (#1450, thanks @dsanno); see the sketch after this list
    • Variable supports shape, ndim and dtype attributes (#1518)
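
Combining two of these features, a sketch that snapshots the model only when the validation accuracy hits a new maximum (assuming trainer and model exist; the import path follows the later training.triggers layout):

from chainer.training import extensions, triggers

# Save the model only when validation accuracy reaches a new best value
trainer.extend(
    extensions.snapshot_object(model, 'best_model.npz'),
    trigger=triggers.MaxValueTrigger('validation/main/accuracy'))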

Bug fixes

  • report: Support reporting int values (#1459, #1483)
  • split_axis: Support int and long values (#1489)
  • reduction kernels: Fix overflow (#1494, #1498)
  • float16: Fix division (#1495, #1497)
  • ProgressBar: Use time instead of clock. (#1499, #1501)
  • get_cifar100: Make it work (#1507, #1511)
  • cupy.hstack: Support zero-dim arrays (#1524)

Improvements

  • Set Variable.volatile=True in Evaluator, which reduces memory requirements during evaluation (#1462)
  • EmbedID skips checking the validity of ignore labels (#1479, thanks @fukatani)
  • Some functions support float16
    • crelu (#1530), dropout (#1532), sum (#1531), clip (#1533)