A flexible framework of neural networks for deep learning
MIT License
Published by beam2d over 5 years ago
This is the release note of v6.1.0. See here for the complete list of solved issues and merged PRs.
- F.batch_renormalization, and related fixes (#7197)
- Variable.backward (#7208)
- MultiprocessParallelUpdater to support new devices (#7246)
- Variable (#7445)
- get_device error message when ChainerX is not available (#7461)
- F.convolution_2d (#7499)
- F.deconvolution_2d (#7500)
- MultiNodeBatchNormalization (#7254)
- L.Linear when called with n_batch_axes (#7300)
- _values_to_dicts so it works with unicode of Python 2 too (#7323)
- Bernoulli.log_prob (#7334, thanks @seiyab!)
- scatter_dataset and bcast (#7360)
- WeightDecay aware of loss scale (#7510)
- F.where (#7532)
- FunctionTestCase (#7134)
- backend.get_device_from_array (#7168)
- F.copy view behavior (#7174)
- optimizers.MSVAG to documentation (#7193)
- CommunicatorBase.allgather (#7195)
- chainerx.md (#7218)
- chainer.utils.type_check (#7274, thanks @ktns!)
- F.relu in doc (#7299)
- F.normalize documentation (#7337, thanks @crcrpar!)
- static_graph.rst (#7399)
- test_iter.epoch manually in the tutorial of training loop (#7410)
- robots.txt to allow indexing root (#7458)
- F.swish document (#7467, thanks @fiarabbit!)
- F.normalize documentation (#7482, thanks @crcrpar!)
- reset method in the PTB example (#7535)
- CUDA_VISIBLE_DEVICES in ChainerX tests (#7294)
- test_cuda.py to backends_tests (#7295)
- 0 to 0.0 for Python 2 (#7508)
- .mergify.yml (#7151)

Published by kmaehashi over 5 years ago
This is the release note of v7.0.0a1. See here for the complete list of solved issues and merged PRs.
- links.loss.CRF1d to automatically sort the input sequence (#6351)
- squared_difference to chainerx (#6501, thanks @aksub99!)
- chainerx.minimum (#6541, thanks @aksub99!)
- chainerx.maximum (#6570, thanks @aksub99!)
- chainerx.ceil (#6705, thanks @kshitij12345!)
- chainerx.floor (#6707, thanks @kshitij12345!)
- chainerx.absolute (#6715, thanks @dido1998!)
- chainerx.argmin and chainerx.ndarray.argmin (#6740, thanks @Harshan01!)
- chainerx.amin and chainerx.min (#6752, thanks @Harshan01!)
- chainerx.a/sinh, chainerx.a/cosh (#6776, thanks @kshitij12345!)
- chainerx.fabs and chainerx.sign (#6777, thanks @kshitij12345!)
- chainerx.logical_and, chainerx.logical_or (#6779, thanks @kshitij12345!)
- chainerx.all and chainerx.any (#6781, thanks @kshitij12345!)
- chainerx::Softmax and chainerx.softmax (#6814, thanks @tohmae!)
- BatchNorm states public (#6847)
- chainerx::Swapaxes and chainerx.swapaxes (#6897, thanks @kshitij12345!)
- chainerx.logical_xor (#7014, thanks @ishanrai05!)
- chainerx.log10 (#7015, thanks @ishanrai05!)
- chainerx.isfinite (#7016, thanks @kshitij12345!)
- chainerx.arctan2 (#7028, thanks @kshitij12345!)
- chainerx.expand_dims (#7029, thanks @kshitij12345!)
- chainerx.flip, chainerx.fliplr and chainerx.flipud (#7065, thanks @kshitij12345!)
- chainerx.where (#7067, thanks @kshitij12345!)
- F.arctanh (#7095)
- gradient_check.check_double_backward (#6427)
- link_hooks.SpectralNormalization (#6655, thanks @crcrpar!)
- snapshot_object have condition and writer option (#6762)
- chainerx.ndarray (#6769)
- Evaluator for chainer.dataset.converter (#6768)
- patients argument to patience in EarlyStoppingTrigger (#6784)
- Backend ctor and use CreateBackend (#6785)
- __str__ for Device classes (#6816, thanks @nishnik!)
- numeric.h (#6832)
- chainerx::Minimum (#6858)
- distributions.independent (#6860, thanks @ganow!)
- chainerx.ndarray.all and chainerx.ndarray.any (#6926)
- HuberLoss.forward avoid loss of significance (#6940)
- chainerx::Dot (#6960)
- F.get_item backward for ChainerX (#6991)
- Stack (#7058)
- Reshape copy condition (#7080)
- chainerx::Conv (#7112)
- ndarray conversion (#6204)
- chainerx.astype casting from float16 to bool in CUDA (#6780, thanks @kshitij12345!)
- chainerx.square fallback since it is implemented in C++ (#6823)
- to_gpu / to_cpu / to_intel64 were overridden (#6824)
- filename arg of PlotReport (#6866)
- InvalidType picklable (#6884, thanks @zaltoprofen!)
- AMinOp (#6922)
- import cupy (#6954)
- ConcatWithAsyncTransfer (#6992)
- allow_pickle=True (#7036)
- At output offset (#7046)
- std::shared_ptr with custom deleter in chainer_interop.cc (#7107)
- cuda_internal::DeviceInternals to wrap handle etc. (#6820)
- DeviceInternals (#6827)
- CHAINERX_REGISTER_OP_{NATIVE,CUDA} to CHAINERX_{NATIVE,CUDA}_REGISTER_OP (#6865)
- del (#6933)
- gradient_check (#6935)
- chainerx/kernels/ and rename existing device "op"s to "kernel"s (#6944)
- native::Float16 and cuda::Float16 (#7069)
- F.swish document (#6509, thanks @fiarabbit!)
- chainer.get_device to doc (#6735)
- chainerx.sigmoid docs (#6889, thanks @crcrpar!)
- F.convolution_2d (#6890, thanks @crcrpar!)
- chainer.testing.LinkTestCase (#6895, thanks @crcrpar!)
- chainerx.md (#6899, thanks @tkat0!)
- FunctionTestCase (#6931)
- pickle_dataset.py (#6942)
- CHAINERX_ENABLE_BLAS environment variable (#7098, thanks @durswd!)
- AdamW docstring (#7137, thanks @crcrpar!)
- AMSGrad (#7138, thanks @crcrpar!)
- filename in PlotReport example (#6880, thanks @crcrpar!)
- F.mean_absolute_error test (#6253, thanks @aksub99!)
- F.bilinear test (#6488, thanks @ishanrai05!)
- F.deconvolution_2d test (#6498, thanks @ishanrai05!)
- pytest summary (#6625, thanks @kshitij12345!)
- chainerx.max test (#6761)
- F.flip test (#6801, thanks @ishanrai05!)
- F.where test (#6802, thanks @ishanrai05!)
- F.repeat test (#6803, thanks @ishanrai05!)
- F.elu test numeric error (#6841)
- unary_math_function_unittest (#6845)
- F.unpooling_nd test (#6861, thanks @ishanrai05!)
- F.local_response_normalization test (#6867, thanks @ishanrai05!)
- F.reshape test (#6868, thanks @ishanrai05!)
- F.layer_normalization test (#6871, thanks @ishanrai05!)
- test_spatial_transformer_sampler.py (#6883)
- F.prelu test (#6887, thanks @ishanrai05!)
- F.flatten test (#6888, thanks @ishanrai05!)
- F.dstack test (#6891, thanks @ishanrai05!)
- F.sign test (#6898, thanks @hikjik!)
- F.ceil test (#6900, thanks @hikjik!)
- F.floor test (#6901, thanks @hikjik!)
- F.rrelu test instability (#6915)
- F.max_pooling_nd test instability (#6917)
- F.fmod test (#6937, thanks @hikjik!)
- F.fix test (#6938, thanks @hikjik!)
- F.expm1 test (#6965, thanks @hikjik!)
- max_pool test (#6975)
- F.bias test (#6976, thanks @hikjik!)
- F.cumsum test (#6977, thanks @hikjik!)
- Variable.addgrad test (#6979)
- F.cosh, F.sinh test (#6980, thanks @hikjik!)
- F.log1p test (#6981, thanks @hikjik!)
- F.linear_interpolate test (#6984, thanks @hikjik!)
- F.fft, F.ifft test (#6985, thanks @hikjik!)
- F.matmul test (#6987, thanks @ishanrai05!)
- TestLogSumExp (#6988)
- TestMin (#6989)
- F.get_item test (#6990)
- F.inv, F.batch_inv test (#6994, thanks @hikjik!)
- F.batch_l2_norm_squared test (#6996, thanks @hikjik!)
- F.accuracy test (#7006, thanks @hikjik!)
- F.binary_accuracy test (#7007, thanks @hikjik!)
- F.r2_score test (#7008, thanks @hikjik!)
- F.permutate test (#7010, thanks @hikjik!)
- F.scatter_add test (#7012, thanks @hikjik!)
- F.separate test (#7013, thanks @hikjik!)
- F.logsumexp test (#7018, thanks @hikjik!)
- test_math.py (#7023)
- chainerx.abs test (#7024)
- chainerx.tan test (#7033)
- pytest summary (cont.) (#7089)
- chainerx/libchainerx.dylib (#6666)
- .mergify.yml (#7074)

Published by beam2d over 5 years ago
This is the release note of v6.0.0. See here for the complete list of solved issues and merged PRs.
This release note only covers the difference from v6.0.0rc1; for all highlights and changes, please refer to the release notes of the pre-releases:
See the Upgrade Guide if you are upgrading from previous versions.
- Adam
- chainerx.minimum (#6813, thanks @aksub99!)
- logical_and and logical_or to ChainerX (#6821, thanks @kshitij12345!)
- squared_difference to ChainerX (#6822, thanks @aksub99!)
- condition and writer option to snapshot_object (#6943)
- chainerx.ceil (#6852, thanks @kshitij12345!)
- Evaluator for chainer.dataset.converter (#6790)
- Backend ctor and use CreateBackend (#6809)
- link_hooks.SpectralNormalization (#6877, thanks @crcrpar!)
- distributions.independent (#6945, thanks @ganow!)
- __str__ for Device classes (#7092, thanks @nishnik!)
- ArgMax of CUDA when all values are negative (#6796)
- chainerx.astype casting from float16 to bool in CUDA (#6797, thanks @kshitij12345!)
- TypeError during BN deserialization on win64 (#6812, thanks @hyabe!)
- chainerx.square fallback since it is implemented in C++ (#6828)
- to_gpu / to_cpu / to_intel64 were overridden (#6849)
- filename arg of PlotReport (#6928)
- InvalidType picklable (#6934, thanks @zaltoprofen!)
- ImportError during import cupy (#7011)
- ConcatWithAsyncTransfer (#7019)
- allow_pickle=True (#7048)
- At output offset (#7054)
- std::shared_ptr with custom deleter in chainer_interop.cc (#7109)
- chainermn (#7142)
- cuda_internal::DeviceInternals to wrap handle etc. (#6826)
- DeviceInternals (#6830)
- chainerx.md (#6916, thanks @tkat0!)
- pickle_dataset.py (#6964)
- chainer.testing.LinkTestCase (#7001, thanks @crcrpar!)
- CHAINERX_ENABLE_BLAS environment variable (#7120)
- protobuf 3.8.0rc1 from dependencies (#7088)
- filename in PlotReport example (#7009, thanks @crcrpar!)
- reinforcement_learning example to work with default dtype (#7049)
- chainerx.max test (#6766)
- F.elu test numeric error (#6844)
- unary_math_function_unittest (#6919)
- F.rrelu test instability (#6920)
- F.max_pooling_nd test instability (#6927)
- TestLogSumExp (#6999)
- max_pool test (#7002)
- test_spatial_transformer_sampler.py (#7020)
- chainerx.tan test (#7053)
- pytest summary (#7090)
- pytest summary (cont.) (#7091)
- chainerx/libchainerx.dylib (#6885)

Published by kmaehashi over 5 years ago
This is the release note of v6.0.0rc1. See here for the complete list of solved issues and merged PRs.
Development for v6 continues on the v6 branch.

You can set CHAINER_DTYPE=mixed16 to make Chainer choose appropriate dtypes for mixed precision training (in most places it is float16, but it automatically chooses float32 when that is better for precision and performance reasons). Loss scaling can be enabled with (optimizer).loss_scaling(). See the documentation for details.
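A minimal sketch of how these two settings are typically combined (the model and optimizer here are only illustrative; consult the mixed precision documentation for the authoritative usage):

import chainer
from chainer import links as L, optimizers

# Same effect as running with CHAINER_DTYPE=mixed16 in the environment.
chainer.global_config.dtype = chainer.mixed16

model = L.Linear(784, 10)      # parameters follow the configured dtype policy
optimizer = optimizers.Adam()
optimizer.setup(model)
optimizer.loss_scaling()       # enable loss scaling for float16 gradients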
- variable.item() (#5797, thanks @crcrpar!)
- Link.to_device family (#5986)
- unit to CupyMemoryProfileHook.print_report() (#6256, thanks @hitsgub!)
- distributions.Independent (#6324, thanks @ganow!)
- FloorDivide (#6350)
- testing.FunctionTestCase (#6444)
- mixed16 mode and its support in L.BatchNormalization (#6456)
- F.relu6 as an alias to F.clipped_relu (#6463, thanks @aksub99!)
- minimum to chainerx (#6477, thanks @aksub99!)
- square to chainerx (#6486, thanks @aksub99!)
- chainerx.testing.integral_dtypes (#6526)
- chainer.mixed16 data type in PureNcclCommunicator (#6548)
- LinkTestCase to simplify link tests (#6559)
- Sin and Cos to chainerx (#6601, thanks @kshitij12345!)
- MultiNodeBatchNormalization of ChainerMN (#6619)
- tan, arcsin, arccos, arctan to ChainerX (#6703, thanks @IvanYashchuk!)
- F.resize_images speed (#5753, thanks @grafi-tt!)
- F.group_normalization via cuDNN call (#5924, thanks @grafi-tt!)
- F.average_pooling_nd with pad_value of None (#6332, thanks @crcrpar!)
- F.log_ndtr to avoid NaN (#6340)
- y.grad on y.backward(retain_grad=False) (#6348)
- requires_grad explicitly in gradient_check and function test (#6364)
- get_fans (#6365)
- ResultType to take kind into account (#6419)
- FunctionTestCase error message (#6426)
- Adam for float16 parameters to float32 (#6442)
- chainerx.Scalar (#6481)
- BatchNorm and FixedBatchNorm (#6484)
- chainerx::Take indices other dtype than int64 (#6485)
- cupy.cudnn.batch_normalization_forward_training (#6497)
- chainerx::conv and chainerx::conv_transpose (#6510)
- F.cast (#6518)
- x.dtype == b.dtype in F.convolution_nd and F.deconvolution_nd (#6524)
- chainerx.Scalar to Python (#6535)
- parameterize_pytest to allow parameterizing with tuples (#6554)
- chainerx.linear (#6569)
- chainer.grad (#6580)
- PerformanceWarning (#6617)
- testing.product (#6635)
- BatchNormalization to only allocate dummy mean and var in cuDNN path (#6656)
- F.layer_normalization (#6680, thanks @hitsgub!)
- F.l2_normalization (#6681, thanks @hitsgub!)
- D.Normal (#6709)
- minimum and maximum (#6713)
- Sequential (#6304)
- F.softmax_cross_entropy float16 under/overflow (#6366)
- BatchNormalization link (#6369)
- str.join TypeError in FunctionTestCase helper (#6370)
- chainer.links.NStepRNN and its variants (#6415, thanks @crcrpar!)
- chainerx::Array (#6540)
- chainerx::Slice (#6557)
- chainerx::Linear (#6593, thanks @crcrpar!)
- DeviceResident.to_gpu fallback argument (#6712)
- == / != to compare str (#6346)
- # NOQA in docstrings (cont.) (#6356)
- op_utils.py (#6421)
- chainerx::Linear (#6425)
- ResultTypeResolver multiple definitions (#6439)
- .clang-tidy (#6445)
- AsContiguous in CudaConv::ConvGradWeight (#6520)
- _BNMode (#6582)
- collections (#6645)
- ArrayBody::GetArrayNode to return null (#6658)
- BackwardBuilder::Target less stateful (#6659)
- TimerHook (#6433, thanks @hitsgub!)
- F.prelu (#6455, thanks @fiarabbit!)
- Dot backward cast (#6537)
- forward in LinkHook documentation (#6546, thanks @crcrpar!)
- F.rrelu documentation (#6581, thanks @fiarabbit!)
- gradient_check.check_double_backward in reference (#6584)
- :meth: link (#6603, thanks @23pointsNorth!)
- chainerx.md (#6610, thanks @kshitij12345!)
- F.erfcx, F.erfcinv and F.erfinv (#6618)
- chainer.backend.get_array_module documentation (#6663)
- CMAKE_BUILD_TYPE (#6664)
- args.out in train_cifar_custom_loop.py (#6378, thanks @crcrpar!)
- __future__.division in imagenet example with Python2 (#6462)
- __future__.division for Python2 (#6562)
- F.matmul instead of F.batch_matmul in memnn example (#6611)
- unchain_backward() in pix2pix example (#6634, thanks @hayato-maki!)
- mushrooms.csv (#6693)
- download.py (#6694)
- guides/functions.rst (#6194)
- F.swish test (#6306, thanks @ishanrai05!)
- F.log_softmax test (#6320, thanks @ishanrai05!)
- F.softmax_cross_entropy test (#6363)
- F.softmax test (#6371, thanks @aksub99!)
- F.fliplr test (#6389, thanks @ishanrai05!)
- F.flipud test (#6390, thanks @ishanrai05!)
- F.moveaxis test (#6392, thanks @ishanrai05!)
- F.pad test (#6393, thanks @ishanrai05!)
- F.test_squared_difference test (#6395, thanks @aksub99!)
- F.minimum test (#6396, thanks @aksub99!)
- F.maximum test (#6400, thanks @aksub99!)
- F.convolution_2d and F.convolution_nd (#6406, thanks @crcrpar!)
- F.rollaxis test (#6408, thanks @ishanrai05!)
- F.vstack test (#6410, thanks @ishanrai05!)
- F.transpose test (#6458, thanks @ishanrai05!)
- F.tile test (#6459, thanks @ishanrai05!)
- F.swapaxes test (#6460, thanks @ishanrai05!)
- F.resize_image test (#6464, thanks @ishanrai05!)
- F.expand_dims test (#6473, thanks @ishanrai05!)
- F.prod test (#6479, thanks @aksub99!)
- F.squeeze test (#6487, thanks @ishanrai05!)
- examples/.gitignore (#6391, thanks @crcrpar!)
- FunctionTestCases (#6416)
- SPHINXOPTS env from Makefile (#6417)
- test_print_report (#6430)
- NumpyOpTest (#6437)
- F.group_normalization test (#6468, thanks @crcrpar!)
- F.pad test for Python2 (#6478)
- F.vstack to a list of ndarrays (#6494, thanks @crcrpar!)
- OpTest (#6507)
- batch_norm test (#6542)
- fixed_batch_norm test (#6558)
- chainerx.divide test (#6573)
- F.einsum tests (#6588)
- FunctionTestBase class attributes (#6599)
- LinkTestCase and LinkInitializersTestCase class attributes (#6600)
- op_test decorator remove the previous class (#6602)
- compute_60 instead of compute_50 to run test on P100 (#6633)
- BatchNormalizationMultiGpuTest (#6652)
- TestConvTranspose (#6691)
- F.convolution_nd test for flake8 (#6711)
- convolution_nd function test (#6728)

Published by niboshi over 5 years ago
This is the release note of v5.4.0. This is the final release of v5.x series. See here for the complete list of solved issues and merged PRs.
- get_fans (#6413)
- F.log_ndtr to avoid NaN (#6431)
- text_classification example fails on Python 3 (#5651, thanks @koreyou!)
- BatchNormalization link (#6480)
- chainer.links.NStepRNN and its variants (#6517, thanks @crcrpar!)
- # NOQA in docstrings (#6549)
- collections (#6676)
- F.rrelu documentation (#6586, thanks @fiarabbit!)
- gradient_check.check_double_backward in reference (#6587)
- forward in LinkHook documentation (#6594, thanks @crcrpar!)
- :meth: link (#6614, thanks @23pointsNorth!)
- F.erfcx, F.erfcinv and F.erfinv (#6632)
- chainer.backend.get_array_module documentation (#6685)
- classification_summary (#6697, thanks @yewang!)
- dali_util in imagenet example for fp16 (#6377, thanks @anaruse!)
- args.out in train_cifar_custom_loop.py (#6411, thanks @crcrpar!)
- __future__.division for Python2 (#6567)
- F.matmul instead of F.batch_matmul in memnn example (#6631)
- FutureWarning other than experimental features (#6052)
- SPHINXOPTS env from Makefile (#6491)
- F.einsum tests (#6672)

Published by beam2d over 5 years ago
This is the release note of v6.0.0b3. See here for the complete list of solved issues and merged PRs.
- chainer.datasets
- NotImplementedError if Extension.__call__ is not overridden (#6095)
- get_retained_{in/out}puts to return None for None inputs/outputs (#6121)
- chainerx -> chx in public API (#6312)
- finished property to once_trigger (#6023, thanks @hitsgub!)
- Iterator.finalize from __del__ and __exit__ (#6098)
- L.Deconvolution2D (#6175, thanks @crcrpar!)
- create_mnbn_model (#6245)
- align_units to TimerHook.print_report() (#6254, thanks @hitsgub!)
- chainerx.ndarray.item (#6050)
- chainerx.grad Python binding (#6063)
- Variable (#6284)
- chainerx::ResultType (#6347)
- spatial_scale >= 1.0 in F.roi_max_align_2d (#5635, thanks @knorth55!)
- pseudo_connect with None input (#5652)
- Link.__init__ in subclasses (#5927)
- ndarray.take (#6081)
- MultiprocessParallelUpdater (#6100)
- get_retained_{in/out}puts to return None for None inputs/outputs (#6121)
- Sigmoid layer (#6234, thanks @notogawa!)
- group option value of Convolution2D to Caffe exporter (#6241, thanks @ohnabe!)
- Variable operators (#6255)
- DimsFormatter to print a list of dimensions (#6064)
- FunctionNode None inputs in ChainerX (#6122)
- NativeDevice::Dot (#6227)
- Dot (#6246)
- TrueDivide for integer types (#6281)
- chainerx -> chx in public API (#6312)
- Sum (#6313)
- Variable.xp to avoid creation of Device instance (#6016)
- Variable._init_unchecked() static method for faster instantiation (#6033)
- contextmanager in backprop (#6264)
- F.relu performance with CuPy (#6268)
- get_variable performance (#6269)
- backprop_step (#6286)
- using_config (#6290)
- chainer.is_debug() overhead (#6291)
- using_device for NumPy and Intel64 devices (#6292)
- chainerx.ndarray.__getitem__ (#5989)
- initializers.Orthogonal unbiased (#5615)
- {Max,Average}Pool kernel_size and stride (#6066)
- Conv, ConvTranspose stride (#6067)
- FunctionNode.get_retained_outputs to return () if no output is retained (#6118)
- xp with numpy for cupy code path (#6126)
- F.rrelu (#6139)
- xp with numpy for cupy code path (cont.) (#6159)
- Parameter.to_device (#6170)
- Optimizer to convert state arrays back to ChainerX (#6171)
- Device.__ne__ for Python 2 (#6335)
- FreeUnusedBlocks (#5992)
- _check_grad_type (#6213)
- test_gradient_check (#6271)
- is_arrays_compatible (#6274)
- utils.size_of_shape in F.convolution_nd and F.deconvolution_nd (#6329)
- _array_to_gpu with stream argument (#6358)
- NOLINT to reinterpret_cast (#6051)
- py::isinstance to check types (#6083)
- _has_chainerx_array in Variable (#6214)
- CHAINERX_VISIBILITY_HIDDEN (#6231)
- ndarray (#6042)
- classifier.py (#6090, thanks @hiden-cubist!)
- Link.forward method (#6183)
- # NOQA in docstrings (#6184)
- FunctionTestCase to documentation (#6189)
- README.md (#6339, thanks @crcrpar!)
- # NOQA in docstrings (#6355)
- chainermn.links.create_mnbn_model (#6360)
- CMAKE_CURRENT_BINARY_DIR in CMakeLists.txt (#6114)
- PrintReport entries in seq2seq example (#6308)
- dali_util in imagenet example for fp16 (#6342, thanks @anaruse!)
- train_mnist.py example for NumPy 1.16 (#5999, thanks @Guriido!)
- F.batch_renormalization test (#5817)
- F.mean_squared_error test (#5822)
- F.concat test (#5823)
- F.crelu and F.elu test (#6070)
- FutureWarning (#6135)
- F.triplet test (#6136)
- x_dtype and W_dtype to the if statement of FunctionTestCase._skip_if_chainerx_float16 (#6167, thanks @crcrpar!)
- F.tanh test (#6173, thanks @crcrpar!)
- F.sigmoid test (#6174, thanks @crcrpar!)
- F.hard_sigmoid test (#6192, thanks @crcrpar!)
- F.average_pooling_2d (#6211, thanks @crcrpar!)
- F.selu test (#6243, thanks @aksub99!)
- F.softplus test (#6298, thanks @ishanrai05!)
- F.leaky_relu test (#6301, thanks @aksub99!)
- F.maxout test (#6302, thanks @aksub99!)
- F.sum test (#6307, thanks @aksub99!)
- F.rrelu (#6318)
- F.diagonal test (#6322, thanks @ishanrai05!)
- chainerx_tests (#6049)
- FunctionTestCase is used (#6069)
- CHAINERX_CUDA_MULTITHREAD_TEST_SEGV_WORKAROUND from Jenkins script (#6108)

Published by hvy over 5 years ago
This is the release note of v5.3.0. See here for the complete list of solved issues and merged PRs.
- MultiprocessParallelUpdater (#6113)
- group option value of Convolution2D to Caffe exporter (#6293, thanks @ohnabe!)
- Sigmoid layer (#6294, thanks @notogawa!)
- F.relu performance with CuPy (#6270)
- chainer.is_debug() overhead (#6297)
- MultiNodeOptimizer with loss scaling (#5783)
- F.forget (#6076)
- dump_graph not to memory leak (#6147, thanks @hitsgub!)
- NStepLSTM / NStepRNN (#6074)
- classifier.py (#6102, thanks @hiden-cubist!)
- ndarray (#6288)
- train_mnist_dual_parallel.py (#5716)
- PrintReport entries in seq2seq example (#6321)
- F.triplet test (#6144)

Published by niboshi over 5 years ago
This is the release note of v6.0.0b2. See here for the complete list of solved issues and merged PRs.
- D.Cauchy (#5337)
- D.Geometric (#5343)
- cached_property decorator (#5416)
- build_computational_graph accept single output (#5445)
- L.NegativeSampling (#5664)
- finished to trigger object (#5681, thanks @hitsgub!)
- F.spatial_transformer_sampler (#5751)
- TimerHook link hook (#5842, thanks @crcrpar!)
- F.as_strided (#5902, thanks @fiarabbit!)
- ndim!=2 for F.huber_loss (#5534)
- r arg of F.rrelu (#5619)
- Variables in _check_grad_type (#5640)
- FunctionNode automatic fallback of array attributes in forward (#5745)
- gradient_check (#5777)
- cuda.GpuDevice initialization (#5780)
- hasattr check to user-specified flush call to file-like objects (#5794, thanks @grafi-tt!)
- links.CRF1d (#5807, thanks @himkt!)
- F.clip type restriction (#5813)
- F.huber_loss (#5835)
- F.LocalResponseNormalization as FunctionNode (#5851)
- F.relu (#5871, thanks @grafi-tt!)
- F.clip 1 at x_min and x_max (#5876, thanks @grafi-tt!)
- reset method is not implemented in an iterator (#5882)
- FunctionNode on ROIPooling2D (#5957)
- function_hooks/timer.py (#5971, thanks @crcrpar!)
- F.elu memory consumption by retaining output (#5972, thanks @grafi-tt!)
- dump_graph not to memory leak (#5538, thanks @hitsgub!)
- F.batch_normalization + F.forget combination (#5557)
- MultiNodeOptimizer with loss scaling (#5659)
- downsample_fb in resnet (#5737, thanks @milhidaka!)
- device argument passed to MultiprocessParallelUpdater being modified (#5739, thanks @Guriido!)
- cuda.fuse decorator used without parentheses (#5809, thanks @grafi-tt!)
- F.cast gradient for casts between the same dtypes (#5811)
- split_dataset (#5895)
- F.leaky_relu grad when slope = 0 (#5898, thanks @grafi-tt!)
- _to_device for consistency (#5948)
- import chainer.testing without pytest (#5973)
- WalkerAlias (#6057)
- ndarray (#5718)
- robots.txt to hide older versions from search results (#5768)
- Linear (#5852)
- ndarray (#5863)
- ndarray (#5875)
- static_graph module path in documentation (#5883)
- .data to .array in Guides and Examples docs (#5907, thanks @jinjiren!)
- F.softmax_cross_entropy on output shape with reduce=no (#5965)
- ndarray (#5975)
- ndarray (#6032)
- iter.reset() in PTB example (#5834)
- FunctionTestCase for function tests (#3499)
- F.connectionist_temporal_classification (#5727)
- F.split_axis and F.concat (#5733)
- make html to Travis (#5769)
- testing.BackendConfig context for repeated use (#5779)
- Evaluator (#5806)
- testing.assert_allclose (#5814)
- testing.parameterize (#5893)
- testing.inject_backend_tests and testing.parameterize (#5904)
- F.connectionist_temporal_classification (#5928)
- FutureWarning other than experimental features (#5949)
- inject_backend_tests multi_gpu test mark (#6028)
- Function (#5828)
- chainerx.grad (#5747)
- _as_noncontiguous_array workaround for ChainerX (#5781)
- L.NegativeSampling ChainerX support (#5816)
- cudaMemcpyAsync for pinned memory for faster host-to-device transfer (#5940)
- chainerx.asscalar (#6007)
- indices_and_sections in chainerx.split (#5788)
- chainerx.maximum to restore CUDA device (#6043)
- chainerx.ndarray to the ndarray doc (#5864)
- CMAKE_CXX_FLAGS a user specified (#5770)
- pybind dependency to v2.2.4 (#5798)
- gsl-lite to v0.32.0 (#5849)
- pybind exception registration for macOS (#5936)
- chainerx.GradientError (#5787)
- .circleci (#5860)
- FixedCapacityDummyAllocator in CUDA memory pool test (#5993)
- .gitignore (#5805, thanks @knorth55!)
- modernize-use-auto (#5839)
- Link, LinkHook, Initializer and ChainerX (#5675)
- gradient_check (#5699)
- setup.py (#5764)
- MultiprocessIterator.__copy__ (#5833)
- utils._getitem / _setitem to chainerx (#5840)
- @overload annotations outside the stub files (#5960)
- numpy.asscalar (#5994)
- chainerx.asscalar from mypy stub file (#6024)
- .gitignore to avoid ignoring some necessary files (#5836)

Published by mitmul over 5 years ago
This is the release note of v5.2.0. See here for the complete list of solved issues and merged PRs.
- L.BinaryHierarchicalSoftmax (#5714)
- F.embed_id (#5926)
- F.spatial_transformer_sampler (#6003)
- F.connectionist_temporal_classification (#6011)
- F.det and F.inv (#6012)
- L.NegativeSampling (#6013)
- utils.mixed_precision decorator (#6022)
- TimerHook link hook (#6038, thanks @crcrpar!)
- Link.add_hook to return self (#5750, thanks @crcrpar!)
- hasattr check to user-specified flush call to file-like objects (#5803, thanks @grafi-tt!)
- Variables in _check_grad_type (#5826)
- F.LocalResponseNormalization as FunctionNode (#5900)
- testing.assert_allclose (#5984)
- function_hooks/timer.py (#6021, thanks @crcrpar!)
- device argument passed to MultiprocessParallelUpdater being modified (#5790, thanks @Guriido!)
- F.cast gradient for casts between the same dtypes (#5818)
- cuda.fuse decorator used without parentheses (#5825, thanks @grafi-tt!)
- downsample_fb in resnet (#5850, thanks @milhidaka!)
- split_dataset (#5899)
- F.leaky_relu grad when slope = 0 (#5922, thanks @grafi-tt!)
- import chainer.testing without pytest (#5998)
- .gitignore to avoid ignoring some necessary files (#5838)
- ndarray (#5831)
- ndarray (#5881)
- None (#5886)
- ndarray (#5889)
- static_graph module path in documentation (#5906)
- .data to .array in Guides and Examples docs (#5913, thanks @jinjiren!)
- Linear (#5919)
- F.softmax_cross_entropy on output shape with reduce=no (#5969)
- ndarray (#5976)
- ndarray (#6034)
- iter.reset() in PTB example (#5857)

Published by niboshi almost 6 years ago
This is the release note of v5.1.0. See here for the complete list of solved issues and merged PRs.
- F.negative_sampling (#5593)
- F.scatter_add and F.get_item (#5594)
- ndarray.astype (#5623)
- compute_stream argument in ConcatWithAsyncTransfer to allow more overlap between computation and transfer in CUDA (#5684, thanks @anaruse!)
- numerical_grad (#5705)
- DropoutStates (#5644)
- testing/backend.py definitions in testing/__init__.py (#5639)
- Variable.array in codes under links (#5689, thanks @crcrpar!)
- F.repeat (#5708)
- D.Uniform.log_prob to avoid returning -inf at boundary (#5550)
- reporter.Summary float value deserialization (#5584)
- F.negative_sampling output dtype in CPU mode (#5625)
- F.forget (#5588, thanks @fiarabbit!)
- F.roi_average_align_2d doc to refer wrapper function (#5617, thanks @knorth55!)
- Chain example code (#5655)
- chainer.distributions documentation (#5661)
- L.ResNetLayers (#5667, thanks @takaaki82!)
- ndarray (#5704)
- backprop_step (#5710)
- chainer.distributions refer ndarray (#5719)
- SerialIterator in train_mnist_custom_loop.py (#5544)
- F.rrelu (#5673)

Published by hvy almost 6 years ago
This is the release note of v6.0.0b1. See here for the complete list of solved issues and merged PRs.
ChainerX is an ndarray implementation with Define-by-Run automatic differentiation capability. It roughly corresponds to "NumPy/CuPy + Chainer Variable", with several additional features. The best speed is achieved by using ChainerX APIs directly, while a compatibility layer through the conventional Variable interface eases adoption of ChainerX in existing projects.
See the ChainerX Tutorial for more details and concrete examples.
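A short sketch of the ChainerX ndarray API described above (assuming a build with ChainerX enabled; 'native' is the CPU backend):

import chainerx as chx

x = chx.array([[1.0, 2.0], [3.0, 4.0]], dtype=chx.float32, device='native')
x.require_grad()          # mark the array so the graph is recorded

y = (x * x).sum()         # define-by-run: ops build the graph as they execute
chx.backward(y)           # backprop through the recorded graph

print(x.grad)             # the gradient is itself a chainerx.ndarray (here 2 * x)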
- F.roi_max_align_2d (#5198, thanks @knorth55!)
- F.roi_average_pooling_2d (#5285, thanks @knorth55!)
- F.roi_max_pooling_2d (#5304, thanks @knorth55!)
- F.negative_sampling (#5336)
- D.Chisquare (#5338)
- D.Gumbel (#5352)
- D.Poisson (#5364)
- D.OneHotCategorical (#5372)
- BestValueTrigger (#5402, thanks @ktns!)
- return_samples argument to F.negative_sampling and L.NegativeSampling (#5597)
- F.embed_id (#5624)
- L.BlackOut (#5638)
- L.BinaryHierarchicalSoftmax (#5648)
- F.connectionist_temporal_classification (#5680)
- cupy.linalg.det in F.det (#5525)
- ndarray.astype (#5547)
- DropoutStates (#5563)
- D.OneHotCategorical (#5587)
- compute_stream argument in ConcatWithAsyncTransfer to allow more overlap between computation and transfer in CUDA (#5606, thanks @anaruse!)
- chainer.utils.size_of_shape in ChainerMN (#5610)
- testing/backend.py definitions in testing/__init__.py (#5633)
- Variable.array in codes under links (#5657, thanks @crcrpar!)
- F.repeat (#5662)
- train_mnist_dual_parallel.py (#5678)
- Link.add_hook to return self (#5736, thanks @crcrpar!)
- reporter.Summary float value deserialization (#5482)
- text_classification example fails on Python 3 (#5591, thanks @koreyou!)
- D.OneHotCategorical (#5604)
- F.roi_average_pooling_2d (#5611)
- F.negative_sampling output dtype in CPU mode (#5613)
- F.roi_average_align_2d and F.roi_average_pooling_2d (#5627, thanks @knorth55!)
- L.BatchNormalization with lazy initialization fail on GPU (#5683, thanks @koreyou!)
- F.forget (#5586, thanks @fiarabbit!)
- F.roi_average_align_2d doc to refer wrapper function (#5609, thanks @knorth55!)
- Chain example code (#5653)
- F.max_pooling_nd docstring (#5654)
- chainer.distributions documentation (#5658)
- ndarray (#5660)
- L.ResNetLayers (#5665, thanks @takaaki82!)
- backprop_step (#5692)
- chainer.distributions refer ndarray (#5717)
- F.rrelu (#5618)
- numerical_grad (#5698)

Published by kmaehashi almost 6 years ago
This is the release note of v6.0.0a1. See here for the complete list of solved issues and merged PRs.
- F.det and F.inv (#5323)
- F.scatter_add and F.get_item (#5335)
- D.Gamma (#5310)
- D.Exponential (#5341)
- D.Pareto (#5371)
- maxtasksperchild parameter for MultiprocessIterator (#4972, thanks @jnishi!)
- F.batch_renormalization (#5014)
- utils._fp16_mixed_precision_helper decorator (#5306)
- matplotlib (#5320)
- force_array (#5409)
- gradient_check.check_backward (#5411)
- Adam.lr to Adam.alpha_t (#5420)
- matmul (#5459)
- F.convolution_2d (#5460)
- Iterable in CaffeFunction (#5477)
- axis for F.softmax (#5497)
- arr.item() instead of numpy.asscalar(arr) to support NumPy 1.16 (#5510)
- type_check.argname private (#5552)
- Link.add_param and Link.add_link (#5553)
- basic_math (#5428, #5439)
- MpiCommunicatorBase.allreduce (#5473)
- F.softmax_cross_entropy using FunctionNode (#5478, #5508)
- Variable.array instead of .data (#5417, #5495, thanks @crcrpar!)
- KeyError at UpdateRule deserialization (#5353, thanks @grafi-tt!)
- D.Beta (#5382)
- CaffeFunction ignores pad_w (#5463, thanks @koreyou!)
- train_imagenet_data_parallel.py example cannot be run (#5469, thanks @Lynkzhang!)
- HuberLoss for ndim >= 3 (#5493)
- F.softmax and F.log_softmax with axis=-1 on gpu (#5496)
- D.Uniform.log_prob to avoid returning -inf at boundary (#5548)
- Variable.data with Variable.array in examples and functions (#5386, thanks @crcrpar!)
- chainer.report (#5410)
- D.Beta (#5419)
- Extension.on_error (#5523)
- SerialIterator in train_mnist_custom_loop.py (#5519)
- l2normalize with float16 (#5380)
- Variable test (#5385)
- check_double_backward test (#5486)
- maxtasksperchild=1 or 10 are slow (#5516)

Published by beam2d almost 6 years ago
This is the release note of v5.0.0. See here for the complete list of solved issues and merged PRs.
This is the fifth major release of Chainer. This release note only covers the difference from v5.0.0rc1; for all highlights and changes, please refer to the blog post and release notes of the pre-releases:
See the Upgrade Guide if you are upgrading from previous versions.
iDeep can be updated with pip install -U ideep4py.

__init__(...), add_param, and add_link are undeprecated. They are useful when one builds a link as a container of parameters and links, and therefore we decided to keep these APIs alongside init_scope.
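A small sketch contrasting the two styles (classes and sizes here are illustrative): init_scope for ordinary model definitions, and add_param / add_link when a link is assembled dynamically as a container.

import chainer
import chainer.links as L


class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__()
        with self.init_scope():           # the usual style for fixed architectures
            self.l1 = L.Linear(None, 100)
            self.l2 = L.Linear(100, 10)


def build_container(sizes):
    chain = chainer.Chain()
    for i, (n_in, n_out) in enumerate(zip(sizes, sizes[1:])):
        # add_link registers a child link under a name chosen at run time
        chain.add_link('fc{}'.format(i), L.Linear(n_in, n_out))
    return chain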
- F.convolution_2d (#5466)
- MpiCommunicatorBase.allreduce (#5475)
- variable.data with variable.array in variable.py (#5488, thanks @crcrpar!)
- Variable.array instead of .data (#5517)
- arr.item() instead of np.asscalar(arr) to support NumPy 1.16 (#5529)
- axis for F.softmax (#5543)
- F.batch_renormalization (#5546)
- type_check.argname private (#5556)
- mpi4py is missing (#5562)
- Link.add_param and Link.add_link (#5569)
- CaffeFunction ignores pad_w (#5468, thanks @koreyou!)
- FunctionNode.retained_outputs (#5476)
- train_imagenet_data_parallel.py example cannot be run (#5499, thanks @Lynkzhang!)
- F.softmax and F.log_softmax with axis=-1 on gpu (#5502)
- KeyError at UpdateRule deserialization (#5506, thanks @grafi-tt!)
- HuberLoss (#5520)
- D.Beta (#5426)
- chainer.report (#5447)
- chainer.Sequential (#5461)
- FunctionNode upgrade guide (#5541)
- get_device_from_array (#5560)
- setup.py (#5398)
- Variable test (#5406)
- l2normalize with float16 (#5448)
- check_double_backward test (#5490)
- scipy<1.0 is warned by using a deprecated feature of numpy>=1.15 (#5491)

Published by hvy about 6 years ago
These are the release notes for v5.0.0rc1. See here for the complete list of solved issues and merged PRs.
The static subgraph optimization feature has been introduced. It removes the CPU (Python) overhead of graph construction and traversal in backward.
By applying the @static_graph decorator to functions or methods (typically the forward method of a chain), you can let Chainer cache the computational graph collected at the first call and reuse it in subsequent calls. To use this feature safely, your define-by-run code must perform the same computations in every iteration.
Advanced graph optimizations/transformations are not implemented yet, so currently this only reduces the CPU overhead. We will consider adding more sophisticated graph-level optimizations to improve GPU utilization as well as further reduce CPU overhead.
This feature is experimental. We may change the interface in future releases.
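A minimal sketch of applying the decorator (assuming the experimental chainer.static_graph decorator described above; the network itself is illustrative):

import chainer
import chainer.functions as F
import chainer.links as L
from chainer import static_graph


class StaticMLP(chainer.Chain):
    def __init__(self):
        super(StaticMLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(None, 100)
            self.l2 = L.Linear(100, 10)

    @static_graph               # the graph traced on the first call is cached and reused
    def forward(self, x):
        return self.l2(F.relu(self.l1(x)))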
ChainerMN has been integrated into Chainer. ChainerMN module (chainermn
) will become available just by installing Chainer (note that installation of MPI is still required separately). Please uninstall ChainerMN (pip uninstall chainermn
) if you already have ChainerMN installed before updating to this version of Chainer.
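For reference, a bare-bones sketch of the now-bundled chainermn workflow (the communicator name and model are illustrative; MPI must still be installed separately):

import chainer
import chainermn

comm = chainermn.create_communicator('naive')   # CPU-only communicator for illustration
model = chainer.links.Linear(784, 10)
optimizer = chainermn.create_multi_node_optimizer(
    chainer.optimizers.Adam(), comm)
optimizer.setup(model)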
iDeep 2.0 has been supported. iDeep 2.0 provides accelerations on Intel architecture for more functions than iDeep 1.x. Be aware that iDeep 1.x is incompatible with this version of Chainer; please update to iDeep 2.x if you already have iDeep 1.x installed.
NVIDIA DALI has been supported. DALI is a library for constructing data preprocessing pipelines.
The new DaliIterator converts a DALI data pipeline into an iterator that can be used from any updater. Currently, users need to write a custom converter function to use it in Trainer. See the imagenet example and its dali_util.py for how to use it.
This feature is experimental. We may change the interface in future releases.
- params and xp to Distribution (#4925)
- F.spatial_transformer_grid (#5114)
- chainer.print_runtime_info() (#5163, thanks @himkt!)
- F.sigmoid_cross_entropy (#5211)
- L.VGG19Layers (#5213, thanks @crcrpar!)
- F.normalize (#5256)
- contains_nan (#5270)
- F.roi_pooling_2d (#5281)
- F.gaussian (#5284)
- F.roi_average_align_2d interface (#5305, thanks @knorth55!)
- None for the in_channels argument in ConvolutionND / DeconvolutionND (#4587)
- chainer.grad in debug mode (#5228)
- log_softmax in D.Categorical (#5255)
- in_params of coo_matmul (#5258)
- with cuda_device (#5269)
- pkg_resources to retrieve Chainer version (#5298)
- get_array_module to backends from cuda (#5327)
- F.einsum (#5328)
- FunctionNode of where (#5340)
- copyto to chainer.backend (#5344)
- FunctionNode of permutate (#5349)
- binary_check option to D.Bernoulli (#5363)
- axis for F.log_softmax (#5381)
- _runtime_info.py (#5271)
- Link.to_*pu return self (#5322)
- PrintHook fails if grad is None (#5333)
- W in L.ConvolutionND (#5370)
- intel64.mdarray (#5373)
- Classifier chain clears Variable attributes (#5069, thanks @grafi-tt!)
- FunctionHook documentation (#5188)
- chainer.Chain document (#5294, thanks @fiarabbit!)
- long_description for PyPI (#5345)
- chainer.backend in upgrade guide (#5384)
- --resume option and improve docs (#4977)
- is_linear argument in test (#5307, thanks @knorth55!)
- TestTriangularInv (#5329)
- test_kldivergence (#5366)
- TestKLDivergence (#5379)
- chainer.print_runtime_info (#5272, thanks @himkt!)

Published by mitmul about 6 years ago
This is the release note of v4.5.0. See here for the complete list of solved issues and merged PRs.
- chainer.print_runtime_info() (#5268, thanks @himkt!)
- Link.to_*pu return self (#5362)
- intel64.mdarray (#5377)
- FunctionHook documentation (#5339)
- --resume option and improve docs (#5275)

Published by niboshi about 6 years ago
This is the release note of v5.0.0b4. See here for the complete list of solved issues and merged PRs.
- avg_var of L.BatchNormalization to 1 (#4742)
- F.forget (#5179). In this fix, the double backprop capability of F.forget is removed, since it did not work correctly in some cases.
- F.rrelu, Randomized Leaky ReLU (RReLU) activation function (#3059, thanks @raven38!)
- F.erfcx, scaled complementary error function (#5195)
- F.erfcinv, inverse complementary error function (#5202)
- F.ndtr, normal cumulative distribution function (#5237)
- F.log_ndtr (#5239)
- F.ndtri, the inverse of ndtr (#5247)
- F.roi_average_align_2d (#5070, thanks @wkentaro!, #5259)
- F.cumprod (#5074)
- D.MultivariateNormal (#4899)
- D.Beta (#5088)
- D.Categorical (#5028)
- D.Uniform (#5123)
- D.LogNormal (#5124)
- L.GoogLeNet (#5099)
- L.ResNetLayers (#5101)
- L.VGG16Layers (#5107)
- F.absolute_error (#5145)
- F.contrastive (#5152)
- F.cross_covariance (#5158)
- F.decov (#5174)
- F.hinge (#5175)
- F.huber_loss (#5176)
- F.squared_error (#5212)
- F.triplet (#5214)
- F.batch_l2_norm_squared (#5235)
- F.mean_squared_error (#5052)
- WarmupShift and MultistepShift extensions (#4935, thanks @mingxiaoh!)
- L.Maxout (#5068)
- L.Linear (#5103)
- axis argument for F.log_softmax (#5215)
- MultiprocessIterator (#4607)
- backward_accumulate (#4772)
- numpy.ascontiguousarray when iDeep is used (#5063)
- F.cumprod in backward of F.prod (#5094)
- Link and Chain (#5119)
- cuda.get_array_module in fused function (#5120)
- L.Convolution2D error message (#5138, thanks @fiarabbit!)
- cuda_fusion.py (#5144)
- eps_inside_sqrt option to RMSprop (#5150)
- chainer.config.dtype in chainer.get_dtype() (#5167)
- collections.abc to avoid DeprecationWarning in Python 3.7 (#5172)
- D.MultivariateNormal (#5173)
- collections.Iterable (#5180)
- FunctionHook callbacks (#5191)
- F.erfinv (#5199)
- xp.einsum in F.bilinear (#5207)
- D.Beta (#5219)
- D.Uniform (#5225)
- eps < CUDNN_BN_MIN_EPSILON in FixedBatchNormalization (#5232, thanks @cycentum!)
- ndtr and log_ndtr in normal distribution (#5240)
- erfcinv for Normal.icdf (#5242)
- ndtri in normal distribution (#5254)
- normcdfinv in F.ndtri (#5260)
- backends.cuda.copyto to backends.copyto and make it work with iDeep (#5095)
- F.deconvolution_nd (#5129, thanks @fiarabbit!)
- TestResNetLayers (#5133)
- Link.__call__ MRO (#5141)
- PrintReport reports (#5146)
- F.split_axis (#5157)
- D.Normal (#5185)
- F.logsumexp (#5190, thanks @cadenacchi!)
- F.softmax_cross_entropy (#5238)
- D.Categorical (#5261)
- F.erfinv (#5201)
- tips.rst to Chainer Backend for Intel Architecture (#5208, thanks @mingxiaoh!)
- C to beta in the example (#5135, thanks @Evanc123!)
- Link.forward in MNIST model parallel example (#5159)
- softmax_cross_entropy test (#3409)
- F.contrastive (#5147)
- test_erfinv (#5165)
- open_pickle_dataset (#5182)
- l2normalize (#5210)
- contrastive (#5218)
- DeprecationWarnings at importing Theano (#5230)
- TestMatMul (#5236)
- .pytest_cache/ to .gitignore (#5193)

Published by kmaehashi about 6 years ago
This is the release note of v4.4.0. See here for the complete list of solved issues and merged PRs.
- L.Convolution2D error message (#5140, thanks @fiarabbit!)
- collections.abc to avoid DeprecationWarning in Python 3.7 (#5177)
- collections.Iterable (#5220)
- PrintReport reports (#5149)
- Link.__call__ MRO (#5151)
- F.split_axis (#5164)
- F.logsumexp (#5196, thanks @cadenacchi!)
- F.softmax_cross_entropy (#5241)
- Extension.name (#5110)
- Iterator description in Chainer at a glance documentation (#5250, thanks @fiarabbit!)
- softmax_cross_entropy test (#5216)
- l2normalize (#5222)
- DeprecationWarnings at importing Theano (#5243)
- F.contrastive (#5252)
- contrastive (#5253)
- .pytest_cache/ to .gitignore (#5194)

Published by kmaehashi about 6 years ago
This is the release note of v4.3.1. See here for the complete list of solved issues and merged PRs.
This is a hot-fix release for v4.3.0 to address the backward incompatibility issue reported in #5078 (thanks @grafi-tt and @tkanmae for reporting this!). Users implementing the __call__ method of their own Link using a mix-in (multiple inheritance) may have been affected by this issue.
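An illustrative (hypothetical) shape of the affected pattern: a Link whose __call__ comes from a mix-in via multiple inheritance.

import chainer
import chainer.functions as F
import chainer.links as L


class LossMixin(object):
    def __call__(self, x, t):
        # relies on Python's MRO to find predict() on the concrete link
        return F.softmax_cross_entropy(self.predict(x), t)


class Classifier(LossMixin, chainer.Chain):
    def __init__(self):
        super(Classifier, self).__init__()
        with self.init_scope():
            self.fc = L.Linear(None, 10)

    def predict(self, x):
        return self.fc(x)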
- Link.__call__ MRO (#5154)

Published by beam2d over 6 years ago
This is the release note of v4.3.0. See here for the complete list of solved issues and merged PRs.
- hasattr in L.BatchNormalization (#5066)
- os.path.join (#5102)
- F.normalize (#5108)
- GetItem.backward for 0-dim boolean index (#5045)
- ZippedImageDataset and MultiZippedImageDataset to documentation (#4963, thanks @d0i!)
- alpha argument explanation (#4970)
- StandardUpdater (#4993)
- F.upsampling_2d according to new F.max_pooling_2d (#4995)
- L.NStepBiRNNTanh, L.NStepLSTMBase, L.NStepLSTM and L.NStepBiLSTM (#4996, thanks @mori97!)
- computational_graph (#4998)
- chainer.functions.fft docstring (#5004, thanks @butsugiri!)
- n_step_gru docs (#5007)
- F.dilated_convolution_2d and F.convolution_2d (#5023)
- L.Linear docs (#5024)
- numpy.dtype.kind in Tips (#5054)
- chainer.dataset and chainer.datasets (#5072)
- caffe.rst docs (#5092)
- L.Linear in ImageNet example (#4994)
- test_default_backward (#5003)
- TestBatchRenormalization (#5020)

Published by niboshi over 6 years ago
This is the release note of v5.0.0b3. See here for the complete list of solved issues and merged PRs.
- F.einsum, F.lgamma, F.digamma, F.polygamma
- Links support the chainer.config.dtype configuration introduced in v5.0.0b2.

Please refer to the Upgrade Guide for details.
- Link.copyparams has been changed to copy persistent values in addition to parameters (#4997). You can use the newly introduced copy_persistent=False option to emulate the previous behavior.
- FunctionNode classes exposed under the chainer.functions namespace have been removed (#4421). Please use the wrapper functions under chainer.functions instead of directly using the classes.
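A small sketch of the new copyparams behavior (the links are illustrative):

import chainer.links as L

src = L.BatchNormalization(3)   # has persistent values (avg_mean, avg_var)
dst = L.BatchNormalization(3)

dst.copyparams(src)                          # now copies parameters and persistents
dst.copyparams(src, copy_persistent=False)   # previous behavior: parameters only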
- F.einsum (#4644)
- F.lgamma, F.digamma, and F.polygamma (#4720)
- StepShift extension (#4894, thanks @jinjiren!)
- LabeledZippedImageDataset (#4961, thanks @d0i!)
- L.BatchNormalization and L.BatchRenormalization (#5034), L.Maxout (#5058), L.InceptionBN (#5062), L.StatefulMGU (#5084)
- F.mean_absolute_error (#5053)
- cuda.elementwise to up performance (#3787)
- F.dropout (#3369, thanks @bonprosoft!)
- FunctionNode classes from chainer.functions namespace (#4421)
- F.normalize (#4769)
- __call__ methods in Links to forward (#4912)
- F.batch_normalization (#4964)
- log_scale option of Normal distribution (#4987)
- Link.copyparams (#4997)
- hasattr in L.BatchNormalization (#5017)
- F.depthwise_convolution_2d use F.convolution_2d internally (#5046)
- F.einsum to support NumPy 1.15rc1 (#5079)
- os.path.join (#5100)
- GetItem.backward for 0-dim boolean index (#4958)
- MultiAdd (#5056)
- auto_new_epoch (#4956)
- StandardUpdater (#4968)
- Extension.name (#4980)
- chainer.config.dtype (#4981)
- L.Linear docs (#4983)
- computational_graph (#4984)
- L.NStepBiRNNTanh, L.NStepLSTMBase, L.NStepLSTM and L.NStepBiLSTM (#4991, thanks @mori97!)
- F.upsampling_2d according to new F.max_pooling_2d (#4992)
- chainer.functions.fft docstring (#5002, thanks @butsugiri!)
- n_step_gru docs (#5006)
- F.dilated_convolution_2d and F.convolution_2d (#5010)
- F.linear docs (#5011)
- Variable guide (#5030)
- numpy.dtype.kind in Tips (#5051)
- chainer.dataset and chainer.datasets (#5057)
- PolynomialShift (#5089)
- caffe.rst docs (#5090)
- Link.copyparams changes (#5093)
- ChainList (#5098)
- L.Linear in ImageNet example (#4975)
- no_grads and squares in double backward tests (#4978)
- test_default_backward (#5001)
- TestBatchRenormalization (#5016)
- test_get_dummy_device_for_empty_array (#5071)