A flexible framework of neural networks for deep learning
MIT License
This release fixes an issue in v7.8.1 where the documentation could not be built on ReadTheDocs. There are no code changes since the v7.8.1 release.
Published by kmaehashi almost 3 years ago
This is the release note of v7.8.1. See here for the complete list of solved issues and merged PRs.
This minor release allows importing Chainer in a CuPy v10+ environment. Note that we still encourage Chainer v7 users to stay with CuPy v7.8.0, CUDA 10.2 or earlier, and cuDNN v7.6 if you don't have strong reasons to upgrade. A warning message will be shown if you run Chainer v7 with CuPy v8 or later, but you can disable it by setting the CHAINER_WARN_VERSION_MISMATCH=0 environment variable.
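As a minimal sketch (assuming only the environment-variable behavior described above), the variable can also be set from Python, as long as this happens before chainer is imported:

```python
import os

# Set this before importing chainer; the version check runs at import time.
os.environ["CHAINER_WARN_VERSION_MISMATCH"] = "0"

# import chainer  # the CuPy version-mismatch warning is suppressed from here on
print(os.environ["CHAINER_WARN_VERSION_MISMATCH"])  # -> 0
```

Setting the variable in the shell before launching Python works the same way.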
As announced previously, Chainer is under the maintenance phase. There are no further planned releases for the Chainer v7 series.
Published by emcastillo over 3 years ago
This is the release note of v7.8.0. See here for the complete list of solved issues and merged PRs.
For those who need to run Chainer on CUDA 11.1+, this release provides "limited" support for CuPy v8/v9. We confirmed that basic tests and examples run fine, but we still encourage Chainer v7 users to stay with CuPy v7.8.0, CUDA 10.2 or earlier, and cuDNN v7.6 if you don't have strong reasons to upgrade. A warning message will be shown if you run Chainer v7 with CuPy v8 or later, but you can disable it by setting the new CHAINER_WARN_VERSION_MISMATCH=0 environment variable. Please also understand that CuPy v10 is not compatible with Chainer.
As announced previously, Chainer is under the maintenance phase. There are no further planned releases for the Chainer v7 series.
- CHAINER_WARN_VERSION_MISMATCH environment variable (#8588)
- chainer.testing requiring pytest installed (#8611)
- cupy.cuda.cudnn first to show preload warning (#8605)
- cotiguousness -> contiguousness (#8595, thanks @timgates42!)
- cupy-cuda110 (#8580)
- [jenkins] requirement (#8585)
- pytest.PytestUnknownMarkWarning (#8599)
- skip condition (#8607)

Published by kmaehashi almost 4 years ago
Published by emcastillo about 4 years ago
This is the release note of v7.7.0. See here for the complete list of solved issues and merged PRs.
As announced previously, Chainer has reduced the release frequency from monthly to once every two months if there are changes that justify the release. We have decided to skip v7.5.0 and v7.6.0 in order to keep the Chainer version up to date with CuPy’s most recent release.
- spawn and forkserver start method in PickleDataset (#8465, thanks @zaltoprofen!)
- create_multi_node_evaluator (#8568)
- Reporter example (#8561)

Published by emcastillo over 4 years ago
This is the release note of v7.4.0. See here for the complete list of solved issues and merged PRs.
As announced previously, Chainer has reduced the release frequency from monthly to once every two months. We have decided to skip v7.3.0 in order to keep the Chainer version up to date with CuPy’s most recent release.
- concat_arrays to be picklable (#8549)
- start_methods other than fork on MultiprocessParallelUpdater (#7552)
- backend.copyto for mismatched dtypes to CuPy ndarray (#8043)
- optimizer.use_fp32_update on ChainerX model (#8382, thanks @y1r!)
- local_convolution_2d result shape documentation (#8553, thanks @msakai!)
- functions.rst (#8557, thanks @husisy!)

Published by niboshi over 4 years ago
This is the release note of v7.2.0. See here for the complete list of solved issues and merged PRs.
As announced previously, Chainer is currently under the maintenance phase. Considering the situation, we are going to reduce the release frequency of Chainer from monthly to once every two months. This does not affect the release frequency of CuPy.
- cupy-cuda102 (#8544)
- beta with static_code on F.BatchNormalization.forward (#8325)

Published by emcastillo almost 5 years ago
This is the release note of v7.1.0. See here for the complete list of solved issues and merged PRs.
- NStepRNN (#8489)
- n_step_gru function on exporting ONNX (#8492, thanks @msakai!)
- TransposeSequence converter to support more cases (#8493, thanks @msakai!)
- NStepGRU link converter example to ONNX-Chainer test (#8494, thanks @msakai!)
- patch_functions to patch functions in modules other than chainer.functions (#8495, thanks @msakai!)
- n_fold with n_folds (#8516, thanks @Saanidhyavats!)
- IndexIterator for ChainerX CUDA (#8360)
- CooMatrix.to_dense for duplicate indices (#8187)
- try/finally block to yield in reporter.py (#8508)
- chainer.functions.rnn.* (#8454, thanks @msakai!)
- chainermn.extension -> chainermn.extensions (#8526, thanks @msakai!)
- chainer>=7.0.0 in python2 (#8517, thanks @knorth55!)
- observation_aggregator (#8384)
- TestZeta (#8514)
- TestCholesky (#8520)
- chainerx.fromfile test when dtype is bool_ and mode is text (#8521)
- FunctionTestCase to test F.decov (#8522)

Published by niboshi almost 5 years ago
This is the release note of v6.7.0. See here for the complete list of solved issues and merged PRs.
As announced previously, this is the final release of v6 series, which is the last version supporting Python 2.
- reporter.py (#8511)
- chainer.functions.rnn.* (#8530, thanks @msakai!)
- FunctionTestCase to test F.decov (#8523)
- chainerx.fromfile test when dtype is bool_ and mode is text (#8524)

Published by emcastillo almost 5 years ago
This is the release note of v7.0.0. See here for the complete list of solved issues and merged PRs.
This release note only covers the difference from v7.0.0rc1; for all highlights and changes, please refer to the release notes of the pre-releases:
See the Upgrade Guide if you are upgrading from previous versions. Also, note that we dropped the support of Python 2.7 and 3.4 from Chainer v7.
Please read the following announcement to learn about the future of Chainer.
- insert on Sequence (#6374)
- setup/tear-down method names in testing.fix_random (#8432)
- F.mean_absolute_error for FP16 (#6807)
- F.accuracy (#7396)
- from_params to Linear & Conv (#7525, thanks @crcrpar!)
- FunctionNode.forward output type message (#7655)
- Take (#8281)
- chainerx::MakeArray in some case (#8296)
- ValueError when calling xxx_obj with ChainerX array in ChainerMN (#8320)
- Permutate exporter to onnx_chainer (#8333, thanks @msakai!)
- SoftmaxCrossEntropy (#8347)
- chainerx::AddAt as a public function (#8351)
- cover_all=True on Unpooling2D in exporting to ONNX (#8391)
- ceiling_mode on exporting to ONNX MaxPool (#8392)
- onnx_chainer.replace_func.fake_as_funcnode to reconstruct return value structure (#8398, thanks @msakai!)
- Rollaxis in ONNX-Chainer (#8428, thanks @tkanmae!)
- SelectItem in ONNX-Chainer (#8450, thanks @tkanmae!)
- TransposeSequence exporter to ONNX-Chainer (#8451, thanks @msakai!)
- __name__ attribute in parameterized test names when available (#8455, thanks @grlee77!)
- SelectItem using GatherElements for ONNX opset>=11 (#8470)
- RuntimeError when using cudnn_fast without cudnn (#8499)
- chainerx::AddAt faster (#8299)
- F.accuracy with ignore_label (#8364, thanks @y1r!)
- AttributeError in WrappedFunctionNode.forward (#8397, thanks @msakai!)
- GetItem converter to handle -1 correctly (#8460, thanks @msakai!)
- chainerx.batch_norm with 2D input on CUDA (#8464)
- BatchNormalization for NHWC without cudnn (#8497)
- routines/indexing.h (#8288)
- _snapshot.py (#8297)
- VariableNode in F.convolution_2d backward implementation (#8395)
- cholesky and eigh (#8312)
- NStepGRUBase (#8330, thanks @msakai!)
- /examples/seq2seq/README.md (#8399, thanks @tanaken0515!)
- scatter_dataset part of ChainerMN tutorial (#8406)
- type_check errors (#8407)
- CHAINERX_NVCC_GENERATE_CODE (#8370)
- PYBIND11_EXPORT instead of visibility hack (#8437)
- CMakeLists.txt (#8440)
- MultiprocessParallelUpdater example (#7478)
- insert on Sequence (#6374)
- test_Meshgrid (#8285)
- multi_node_early_stopping (#8321)
- .git in ChainerCV compatibility CI (#8331)
- SoftmaxCrossEntropy test tolerances (#8335)
- chainerx.where test (#8342)
- LinkTestCase for L.GroupNormalization (#8343)
- F.cast test (#8363)
- FunctionTest modified input error (#8367)
- chainerx.linalg.* (#8371)
- TestTriplet (#8376)
- test_allreduce_persistent.py (#8412)
- fix_random in xfail backward tests (#8419)
- TestMeshgrid (#8420)
- test_checkpoint.py (#8429)
- test_create_mnbn_model (#8435)
- multi_node_optimizer (#8436)
- Convolution2D tests for older numpy versions (#8458)
- parametrize_device_name to setup.cfg (#8459)
- F.cholesky test (#8469)
- cupy.util.PerformanceWarning in pytest (#8471)
- _modified_xlogx (#8483)
- array_utils.uniform to be deterministic with fix_random by default (#8491)

Published by niboshi almost 5 years ago
This is the release note of v6.6.0. See here for the complete list of solved issues and merged PRs.
- max_pooling_2d (#8329)
- optimizer_hooks.GradientHardClipping for scalar array (#8372)
- F.negative_sampling in fp32 for fp16 inputs (#8309)
- optimizer_hooks.GradientHardClipping for ChainerX (#8377, thanks @kshitij12345!)
- /examples/seq2seq/README.md (#8404, thanks @tanaken0515!)
- type_check errors (#8456)
- LinkTestCase for L.GroupNormalization (#8355)
- CHAINER_CI in Travis CI (#8373)
- CHAINER_CI in ChainerX tests in Jenkins (#8375)
- CHAINER_CI in Chainer tests in FlexCI (#8381)
- FunctionTest modified input error (#8388)
- TestTriplet (#8396)
- fix_random in xfail backward tests (#8457)
- Convolution2D tests for older numpy versions (#8478)
- _modified_xlogx (#8486)

Published by hvy almost 5 years ago
This is the release note of v7.0.0rc1. See here for the complete list of solved issues and merged PRs.
This time, we will keep the current branches for active development (master for v7.x, v6 for v6.x) after the RC. We will maintain the v6.x series until Python 2 EOL, so we do not cut a new development version for now, to avoid increasing the number of branches to maintain. New features will be included directly into v7 for a while, and maintenance changes will be backported to v6.
ONNX-Chainer, which used to be a separate project, has now been integrated into the Chainer repository and made more accessible to existing Chainer users (#8229). You can easily export a Chainer model in ONNX format like this:
import onnx_chainer
onnx_chainer.export(chainer_model, pseudo_input, filename='model.onnx')
For a more detailed description on how to get started, please refer to the ONNX-Chainer section in the official documentation.
ChainerMN now works with ChainerX. In this release, the MNIST example has also been updated to demonstrate the usage. (#7844)
- UpsamplingDeconvFilter and DownsamplingConvFilter initializer (#5290, thanks @knorth55!)
- chainerx.meshgrid (#6668, thanks @kshitij12345!)
- chainerx.hsplit (#7030, thanks @ishanrai05!)
- linalg.cholesky to ChainerX (#7329, thanks @IvanYashchuk!)
- linalg.eigh, linalg.eigvalsh to ChainerX (#7503, thanks @IvanYashchuk!)
- force_equal_length=False (#8071)
- RandomState instance (#8081, thanks @mr4msm!)
- chainerx.hinge (#8168)
- chainerx::SoftmaxCrossEntropy and chainerx.softmax_cross_entropy (#8250)
- chainermn.testing.to_device function (#8279)
- chainerx.copyto (#8314, thanks @kshitij12345!)
- TabularDataset.as_tuple/as_dict to TabularDataset.astuple/asdict (#7788)
- DeviceResident.to_gpu/to_cpu/to_intel64 (#8058)
- generate_matrix (#8167)
- chainerx.take (#8197)
- *GradState classes (#8224)
- gradient_check (#8236)
- F.batch_normalization (#8266)
- device argument from chainerx.diag and chainerx.diagflat (#8275)
- gradient_check (#8290)
- output_grad support on fake_as_funcnode (#8298)
- F.negative_sampling in fp32 for fp16 inputs (#8300)
- mode and align_corners arguments in F.resize_image keyword-only (#8009)
- weights and keepdims arguments in Variable.mean keyword-only (#8010)
- WeightStandardization keyword-only (#8011)
- call_before_training argument of Trainer.extend keyword-only (#8064)
- ObservationAggregator and MultiNodeEarlyStoppingTrigger keyword-only (#8065)
- force_equal_length argument in scatter_dataset and scatter_index keyword-only (#8066)
- size argument of tabular.from_data keyword-only (#8067)
- chainerx::Take faster (#8295)
- F.batch_normalization with mixed dtype (#8149)
- __str__ of parameterized class (#8169)
- x and gamma/beta have different dtypes in F.batch_normalization (#8175)
- copy to __deepcopy__ in ChainerMN batch_normalization and replace to_gpu (#8185)
- Allocator (#8215)
- chainerx.ascontiguousarray (#8262)
- global_kernel_registry (#8265)
- gpu_id=0 in ChainerMN testing get_device (#8304)
- setup.cfg (#8180)
- AveragePoolPadMode enum (#8214)
- setup.py (#8218)
- {Max,Average}PoolForwardBackward (#8223)
- readability-avoid-const-params-in-decls (#8225)
- gradient_check (#8238)
- F.softmax_cross_entropy (#8253)
- CreateSubgraph (#8310)
- resize_images documentation to reflect recent code changes (#8221, thanks @zu3st!)
- chainerx.ravel (#8233)
- chainerx.sigmoid_cross_entropy (#8249)
- libchainerx_base.a to link chainerx statically (#8247)
- generate.py in examples/wavenet (#8172, thanks @dhgrs!)
- F.scale test (#6969, thanks @ishanrai05!)
- test_n_step_rnn (#7483)
- TestAccuracy: Randomly reduce testing parameters (#7820)
- chx.linalg.solve (#7997)
- TestQR (#8114)
- pytest.skip() in combination with testing.repeat/retry (#8174)
- DummySerializer and DummyDeserializer from iterators_tests (#8176)
- BatchNormalization backward test tolerances (#8189)
- protobuf>=3.8 (#8190)
- CHAINER_TEST_PAIRWISE_PARAMETERIZATION and enable it only in Travis CI (#8211)
- attrs package version (#8219)
- HDF5Serializer test for h5py<2.9 (#8220)
- TestBatchNormalization (#8230)
- "jenkins" extras (#8241)
- clang-format-6.0 if possible and track the version of clang-format (#8242)
- DeprecationWarning filter from test_multi_node_chain_list (#8246)
- chainex_tests/unit_tests/routines_tests/test_linalg.py::Inverse (#8255)
- TestHuberLoss (#8271)
- ImportWarning just a warning in tests (#8291)
- gtest linkage (#8292, thanks @cloudhan!)
- test_average is slow in FlexCI (#8303)
- test_mnist in chainermn_tests (#8305)
- communicator_test for ChainerX+ChainerMN (#8313)
- ImportWarning ignore entry (#8186)
- WIN32_LEAN_AND_MEAN definition (#8205, thanks @cloudhan!)

Published by emcastillo almost 5 years ago
This is the release note of v6.5.0. See here for the complete list of solved issues and merged PRs.
- print_runtime_info (#7860)
- __str__ of parameterized class (#8184)
- BatchNormalization backward test tolerances (#8196)
- L.BatchRenormalization and adjust tolerances (#8200)
- TestConvolution2DFunction::test_double_backward fp16 tolerance (#8201)
- attrs version (#8222)
- HDF5Serializer test for h5py<2.9 (#8256)

Published by asi1024 about 5 years ago
This is the release note of v7.0.0b4. See here for the complete list of solved issues and merged PRs.
Many updates to ChainerX including new routines and support for loss scaling.
- F.n_step_rnn and F.n_step_birnn (#5808)
- chainerx.vsplit to ChainerX (#7032, thanks @ishanrai05!)
- chainerx.linalg.qr to ChainerX (#7379, thanks @IvanYashchuk!)
- chainerx.accuracy (#7526, thanks @aksub99!)
- chainerx.{remainder/mod} (#7675, thanks @sky58!)
- F.zeta (#8059, thanks @UmashankarTriforce!)
- testing.generate_matrix to get matrices of given singular values (#8077)
- chainerx.fmod (#8110)
- chainerx.nonzero (#8124)
- chainerx::ArrayRepr for large inputs (#7708)
- FutureWarning on GPU-to-GPU transfer in StandardUpdater (#7952)
- typeid of kernels in libchainerx (#7970)
- variable.Parameter objects (#8022)
- ScanKernel (#8103)
- chainerx::Absolute device implementation (#7319)
- MultiprocessIterator and MultiprocessParallelUpdater (#7511)
- mixed16/float16 GroupNormalization (#7965)
- chx::Device object on ndarray pickling (#7988)
- chainerx::Dot edge cases with empty arrays (#8020)
- AddAt implementation for float16 arrays (#8055)
- fill_value in constant initializer (#8089)
- ArrayReprImpl (#7699)
- F.batch_normalization and ChainerMN backend implementations (#8039)
- -Wabsolute-value for clang (#8045)
- NativeCumsumKernel (#8053)
- -Wbraced-scalar-init for clang (#8076)
- arithmetic.{h,cc} (#8128)
- backend.copyto (#7832)
- chainerx.to_numpy (#7984)
- chainerx.take indices dtype (#7998)
- CHAINERX_ENABLE_{BLAS,LAPACK} (#8099)
- chainerx.minimum (#8146)
- chainerx.maximum doc (#8147)
- cblas.h and modified CMakeLists.txt (#8052, thanks @okdshin!)
- CHAINERX_ENABLE_LAPACK=0 causes error (#8086, thanks @cloudhan!)
- DeprecationWarning in test_manipulation.py (#7824)
- F.max_pooling_2d test (#7924)
- negative_sampling (#7975)
- F.lstm test parameterization (#7987)
- gradient_check test (#7989)
- TrueDiv tolerances (#8047)
- L.BatchRenormalization and adjust tolerances (#8080)
- h5py.File mode (#8090)
- np.empty (#8096)
- PseudoInverse test (#8102)
- test_normal.py (#8111)
- ignore::ImportWarning to setup.cfg (#8131)
- fix_random decorator to be used with OpTest (#8136)
- NStepRNN and NStepBiRNN (#8142)
- empty in F.cast test that can cause overflow warning (#8152)
- TestConvolution2DFunction::test_double_backward fp16 tolerance (#8163)
- setup.cfg (#8154)

Published by niboshi about 5 years ago
This is the release note of v6.4.0. See here for the complete list of solved issues and merged PRs.
- GroupNormalization (#8113)
- MultiprocessIterator and MultiprocessParallelUpdater (#8126)
- deepcopy for chain parameters (#8150)
- backend.copyto (#8056)
- DecorrelatedBatchNormalizationTest and add stable input (#7940)
- F.batch_inv test (#7981)
- F.squared_error test (#8012)
- negative_sampling (#8019)
- gradient_check test (#8021)
- h5py.File mode (#8107)
- Contrastive.backward (#8108)
- test_normal.py (#8117)
- im2col test (#8135)

Published by niboshi about 5 years ago
This is the release note of v7.0.0b3. See here for the complete list of solved issues and merged PRs.
Due to the end-of-life (EOL) of Python 2 in January 2020, Python 2 support has been dropped in this release. Chainer v6.x continues to support Python 2. See the blog post for details.
F.max_pooling_2d refactoring

The implementation of F.max_pooling_2d has been merged into F.max_pooling_nd. The behavior is unchanged, so ordinary users should not be affected by this change. However, the FunctionNode class recorded in the computational graph for F.max_pooling_2d has changed from MaxPooling2D to MaxPoolingND. Code that explicitly depends on this class will need a fix.
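Code that dispatched on the recorded node's class can accept both names during migration. The sketch below is hypothetical (the stub class merely stands in for the real FunctionNode; only the two class names, MaxPooling2D and MaxPoolingND, come from this note):

```python
# Accept both the old (v6) and new (v7) class names recorded by F.max_pooling_2d.
POOLING_NODE_NAMES = {"MaxPooling2D", "MaxPoolingND"}

def is_max_pooling_node(node) -> bool:
    """Return True if ``node`` looks like the node recorded by F.max_pooling_2d."""
    return type(node).__name__ in POOLING_NODE_NAMES

class MaxPoolingND:
    """Stand-in for the real Chainer FunctionNode class (hypothetical stub)."""

print(is_max_pooling_node(MaxPoolingND()))  # -> True
```

In real code, ``node`` would be the ``creator_node`` of a Variable produced by F.max_pooling_2d.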
- chainerx.repeat (#7223, thanks @durswd!)
- TabularDataset.slice (#7251)
- chainer.dataset.tabular.DelegateDataset (#7276)
- ObservationAggregator extension to ChainerMN (#7302)
- scatter_dataset as well as scatter_index (#7327)
- chainer.dataset.tabular.from_data (#7361)
- linalg.svd, linalg.pinv to ChainerX (#7411, thanks @IvanYashchuk!)
- TabularDataset.convert/with_converter (#7428)
- linalg.solve, linalg.inv to ChainerX (#7474, thanks @IvanYashchuk!)
- Converter class (#7489)
- chainerx.sigmoid_cross_entropy (#7524, thanks @aksub99!)
- chainerx.cumsum (#7558, thanks @aksub99!)
- chainerx.nansum (#7719, thanks @aksub99!)
- chainerx.nanargmax and chainerx.nanargmin (#7755, thanks @aksub99!)
- tri* routines to ChainerX (#7791, thanks @IvanYashchuk!)
- CommunicatorBase class (#7814)
- numerical_grad_dtype to FunctionTestCase and LinkTestCase (#7817)
- tabular.from_data (#7847)
- chainerx.count_nonzero (#7852, thanks @aksub99!)
- chainerx.flatten (#7901, thanks @aksub99!)
- chainerx.ravel (#7904, thanks @aksub99!)
- roi_{average|max}_{pooling|align}_2d.py (#5636, thanks @knorth55!)
- Link.to_gpu unless compatible with to_device (#5762)
- F.dropout to use cuDNN by default (#7185, thanks @crcrpar!)
- F.average as accurate as backend (#7758)
- PureNcclCommunicator (#7793)
- type_check error message on evaluating bool expression (#7795)
- type_check (#7803)
- chx.leaky_relu/elu (#7816)
- None inputs to gradient check and generating None gradients in FunctionTestCase (#7831)
- print_runtime_info (#7833)
- F.clip for NumPy 1.17 (#7843)
- rtol * abs(b) in allclose output (#7848)
- TypeError in max_pooling_2d (#6835, thanks @ishanrai05!)
- PureNcclCommunicator (#7600)
- create_mnbn_model() bug (#7718)
- optimizer_hooks.GradientHardClipping for scalar array (#7760)
- backends.copyto from chainerx to non-chainerx (#7835)
- split_axis for intel64 when grad_outputs contains None (#7836)
- CommunicatorBase (#7888)
- DeprecationWarning to initializer of BuildingBlock (#7909)
- Link.serialize and optimizers.Adam (#7918)
- F.max_pooling_2d (#7922)
- _fallback_workarounds in SpectralNormalization (#7539)
- links.rnn and functions.rnn (#7725)
- batched_copy to all Communicators (#7761)
- axis (#7799)
- linalg.svd python bindings layer in ChainerX (#7866, thanks @IvanYashchuk!)
- n_layer with n_layers for consistency (#7871)
- pooling_nd functions (#7938)
- F.max_pooling_2d into F.max_pooling_nd (#7939)
- static_graph docs code examples (#7875)
- scatter to doc (#7897)
- F.max_pooling_2d test (#6836, thanks @ishanrai05!)
- F.lstm test (#7808, thanks @dido1998!)
- F.slstm test (#7805, thanks @dido1998!)
- F.n_step_rnn test (#7804, thanks @dido1998!)
- F.n_step_lstm test (#7807, thanks @dido1998!)
- F.n_step_gru test (#7806, thanks @dido1998!)
- F.embed_id test (#7903, thanks @dido1998!)
- point_to_point communications (#7637)
- pseudo_connect (#7638)
- TestConv*TensorCore (#7710)
- chx.reshape (#7762)
- TestHuberLoss (#7837)
- F.average_pooling_2d test (#7841)
- F.clipped_relu test for NumPy 1.17 (#7842)
- test_accuracy.py to the list of slow test files (#7851)
- BatchNorm flaky of ChainerX (#7857)
- test_TrilTriu (#7865)
- chainerx.logsumexp test tolerance (#7867)
- F.tree_lstm test for ChainerX (#7881, thanks @dido1998!)
- ndarray.data access and fix wrong test (#7890)
- TrueDiv test (#7917)
- F.cast from negative floating-point to unsigned (#7920)
- L.CRF1d test (#7926)
- DecorrelatedBatchNormalizationTest and add stable input (#7932)
- chainerx.power test (#7950)
- TestContrastive (#7953)
- F.batch_inv test (#7971)

Published by emcastillo about 5 years ago
This is the release note of v6.3.0. See here for the complete list of solved issues and merged PRs.
- F.average as accurate as backend (#7782)
- type_check error message on evaluating bool expression (#7801)
- type_check (#7810)
- F.clip for NumPy 1.17 (#7855)
- Parameter.dtype for uninitialized parameter (#7749)
- UpdateRule.use_fp32_update for uninitialized parameter (#7751)
- PureNcclCommunicator (#7787)
- TypeError in max_pooling_2d (#7789, thanks @ishanrai05!)
- create_mnbn_model() bug (#7846)
- split_axis for intel64 when grad_outputs contains None (#7931)
- F.max_pooling_2d (#7933)
- backends.copyto from/to chainerx (#7934)
- Link.serialize and optimizers.Adam (#7941)
- static_graph docs code examples (#7884)
- chx.reshape (#7792)
- test_communicator (#7822)
- F.clipped_relu test for NumPy 1.17 (#7854)
- TestHuberLoss (#7869)
- F.average_pooling_2d test (#7870)
- chainerx.logsumexp test tolerance (#7889)
- ndarray.data access and fix wrong test (#7913)
- F.cast from negative floating-point to unsigned (#7944)
- TestContrastive (#7959)
- TrueDiv test (#7972)
- L.CRF1d test (#7977)

Published by hvy over 5 years ago
This is the release note of v7.0.0b2. See here for the complete list of solved issues and merged PRs.
ChainerX has several new backproppable ops such as ELU and softplus activation functions and loss functions including absolute error, squared error, Huber loss and Gaussian KL divergence. ChainerX is also supported in all OptimizerHooks when used through Chainer. TabularDataset has also been improved with new features.
- Variable.grad getter now raises an error when it is called before calling cleargrad, zerograd, or setting the gradient directly. (#7146)
- BatchRenormalization (usage of epsilon) is fixed. It affects the inference behavior. (#7202)
- HierarchicalCommunicator, SingleNodeCommunicator and TwoDimensionalCommunicator are no longer necessary as NCCL now supports inter-node communication. (#7697)
- WeightStandardization link hook (#6678, thanks @hitsgub!)
- chainerx.dsplit (#7031, thanks @ishanrai05!)
- chainerx.left_shift and chainerx.right_shift (#7339, thanks @sky58!)
- chainerx.elu (#7439, thanks @aksub99!)
- TabularDataset (#7493)
- TabularDataset.__iter__ (#7601)
- Variable.mean (#7670)
- chainerx.softplus (#7679, thanks @aksub99!)
- top_data as -np.inf and argmax_data as -1 in F.roi_max_pooling_2d (#6237, thanks @knorth55!)
- cleargrad (#7146)
- chainerx.grad from chainer.grad (#7464)
- ImportError (#7518)
- device argument a keyword only argument. (#7537, thanks @kshitij12345!)
- Array::At and __getitem__ (#7561)
- chainerx.ndarray._is_chained (#7565)
- squared_difference and fix docs (#7582)
- allreduce_grad() and functions related with it (#7604)
- IndexError if the index __getitem__ takes is out of bounds (#7614)
- six.integer_types for axis check in F.concat (#7632, thanks @knorth55!)
- optimizer_hooks.GradientClipping for ChainerX (#7641)
- optimizer_hooks.GradientHardClipping for ChainerX (#7656, thanks @kshitij12345!)
- IntervalTrigger.__str__ (#7664, thanks @ktns!)
- GradientLARS optimizer hook working with ChainerX (#7669)
- absl::Span and related helpers instead of gsl::span (#7671)
- six.integer_types for axis checks (#7713)
- CHAINERX_BUILD_CUDA is set (#7752)
- None array in FunctionNode NaN check (#6283)
- CupyMemoryProfiler (#7003)
- running_var of F.batch_renormalization (#7202)
- MultiprocessIterator (#7486)
- initializers.Identity for ideep backend (#7548)
- chainermn.links.create_mnbn_model (#7603)
- PickleDataset crash when using multiprocessing (#7625, thanks @zaltoprofen!)
- AMSGrad with intel64 backend (#7661)
- chainer.grad for multiple devices (#7692)
- chainerx::Flip (#7727)
- Parameter.dtype for uninitialized parameter (#7735)
- UpdateRule.use_fp32_update for uninitialized parameter (#7736)
- backend.get_array_module not cuda.get_array_module (#7514, thanks @crcrpar!)
- squared_difference alias of squared_error (#7547)
- Optimizer and GradientMethod (#7585)
- chainerx.clipped_relu in F.clipped_relu (#7588)
- CMakeList.txt (#7647)
- Links (#6512)
- CHAINERX_CUDNN_USE_CUPY (#7574)
- ResNet prepare method (#7577)
- BackwardContext comment (#7595, thanks @crcrpar!)
- expand_dims.py (#7602)
- FunctionNode docs (#7622)
- chainer/functions/math/average.py (#7653, thanks @ktns!)
- F.squeeze documentation (#7682)
- examples/vae/train_vae.py (#7578, thanks @m4saka!)
- F.polygamma test (#6970, thanks @ishanrai05!)
- F.cast test (#7034)
- y_shape not used in tests (#7610)
- optimizer_hooks.Lasso for ChainerX (#7657, thanks @kshitij12345!)
- GroupNormalization tests (#7684)
- optimizer_hooks.GradientNoise for ChainerX (#7709, thanks @kshitij12345!)
- protobuf (#7715)
- optimizer_hooks.WeightDecay for ChainerX (#7716, thanks @kshitij12345!)
- atol/rtol of chainerx.erf float16 test (#7721)
- TestHuberLoss (#7723)
- Contrastive.backward (#7745)
- TestContrastive (#7747)
- third-party.cmake (#7643)

Published by niboshi over 5 years ago
This is the release note of v6.2.0. See here for the complete list of solved issues and merged PRs.
- six.integer_types for axis check in F.concat (#7712, thanks @knorth55!)
- six.integer_types for axis checks (#7770)
- chainermn.links.create_mnbn_model (#7618)
- CupyMemoryProfiler (#7639)
- None array in FunctionNode NaN check (#7642)
- AMSGrad with intel64 backend (#7689)
- PickleDataset crash when using multiprocessing (#7729, thanks @zaltoprofen!)
- MultiprocessIterator (#7742)
- chainer.grad for multiple devices (#7746)
- backend.get_array_module not cuda.get_array_module (#7619, thanks @crcrpar!)
- Optimizer and GradientMethod (#7644)
- chainer.get_device to doc (#6831)
- shape in generate_array (#7576)
- expand_dims.py (#7608)
- Links (#7628)
- BackwardContext comment (#7636, thanks @crcrpar!)
- FunctionNode docs (#7659)
- F.squeeze documentation (#7688)
- examples/vae/train_vae.py (#7580, thanks @m4saka!)
- y_shape not used in tests (#7612)
- GroupNormalization tests (#7700)
- TestContrastive (#7765)

Published by emcastillo over 5 years ago
This is the release note of v7.0.0b1. See here for the complete list of solved issues and merged PRs.
- Power for ChainerX (#6496, thanks @dido1998!)
- chainerx.hstack, chainerx.vstack and chainerx.atleast_2d (#6886, thanks @kshitij12345!)
- TabularDataset (#7115)
- TabularDataset.concat/join (#7116)
- chainerx.expm1 and chainerx.exp2 (#7126, thanks @aksub99!)
- chainerx.log2 (#7139)
- TabularDataset.{transform/transform_batch} (#7150)
- chainerx.log1p (#7161, thanks @sky58!)
- chainerx::AsContiguous as a public C++ API (#7166)
- chainerx import in debug mode (#7178)
- chainer.as_array for consistency with chainer.as_variable (#7252, thanks @tkerola!)
- chainerx.moveaxis (#7265, thanks @kshitij12345!)
- chainerx.leaky_relu (#7351, thanks @aksub99!)
- chainerx.dstack and chainerx.atleast_3d (#7353, thanks @kshitij12345!)
- __abs__ with chainerx.ndarray (#7364)
- chainerx.erf (#7404, thanks @aksub99!)
- align_corners option to resize_images (#7429)
- resize_images (#7443)
- input_device to StandardUpdater (#7472)
- is_array_supported method on backend.Device (#7487)
- roi_max_align_2d and roi_average_align_2d (#6405, thanks @knorth55!)
- MPI_Status (#6696, thanks @y1r!)
- F.copy (#6982)
- F.batch_renormalization, and related fixes (#7104)
- Variable.addgrad (#7132)
- cuda.DummyDevice inheritance (#7147)
- Device.name property (#7149)
- Link.serialize to support ChainerX (#7175)
- Variable.backward (#7196)
- require_grad() on ChainerX Variable.grad setter (#7198)
- FunctionNode.unchain and raise error in ChainerX fallback mode (#7216)
- Variable.copydata (#7226)
- MultiprocessParallelUpdater to support new devices (#7245)
- StackVector<int64_t, kMaxNdim> to Dims (#7258)
- chainerx::{Max,Min}imum (#7261)
- chx.backward not cause error even if backprop is not required (#7287)
- None arguments in chainerx.clip and chainerx.ndarray.clip (#7296)
- chainerx::Where (#7325)
- F.clip function with None parameter to min/max (#7333)
- Array::ToNative() (#7394)
- Variable (#7400)
- get_device error message when ChainerX is not available (#7401)
- get_device to raise a more correct error types (#7421)
- EXPECT_ARRAY_* macros able to be used outside ChainerX (#7434)
- F.convolution_2d (#7448)
- F.deconvolution_2d (#7449)
- F.copy between non-ChainerX and ChainerX devices only if backprop is not required (#7473)
- FunctionNode ChainerX fallback, reuse ChainerxDevice taken from inputs to create outputs (#7397)
- F.where (#6872)
- Bernoulli.log_prob (#7064, thanks @seiyab!)
- MultiNodeBatchNormalization (#7106)
- MultiNodeChainList should not assume float32 (#7165)
- L.Linear when called with n_batch_axes (#7167)
- L.BatchRenormalization (#7256)
- F.absolute_error for ChainerX (#7281, thanks @crcrpar!)
- _values_to_dicts so it works with unicode of python 2 too (#7316)
- chainerx.square (#7321)
- WeightDecay aware of loss scale (#7491)
- GradientMethod ChainerX fallback for uninitialized parameters (#7492)
- cuda.DummyDevice and cuda.get_device_from_array (#7148)
- math.cc (#7171)
- logic.cc (#7176)
- testing.backend.BackendConfig (#7212)
- math.cc (#7222)
- xp when possible (#7234)
- AMax and AMin to statistics routines (#7269)
- math.cc (#7270)
- _ for private classes under chainer.dataset.tabular (#7275)
- math.cc (#7298)
- math.cc (#7317)
- FindCuDNN.cmake (#7419)
- const& (#7453)
- cuda_fp16.h instead of cuda_fp16.hpp (#7480)
- math.h (#7501)
- AsTypeKernel (#7522, thanks @kshitij12345!)
- F.normalize documentation (#7062, thanks @crcrpar!)
- F.copy view behavior (#7135)
- backend.get_device_from_array (#7163)
- chainerx.md (#7179)
- optimizers.MSVAG to documentation (#7183)
- F.relu in doc (#7188)
- CommunicatorBase.allgather (#7192)
- chainer.utils.type_check (#7249, thanks @ktns!)
- observe_value and observe_lr trigger interval (#7266)
- robots.txt to allow indexing root (#7306)
- F.normalize documentation (#7371, thanks @crcrpar!)
- static_graph.rst (#7389)
- test_iter.epoch manually in the tutorial of training loop (#7405)
- shape in generate_array (#7450)
- tabular_dataset.py (#7495, thanks @nai62!)
- CUDNN_LIBNAME to be specified by environment variable (#7243)
- $MAKEFLAGS instead if set in Travis CI script (#7331)
- FindCuDNN.cmake, prioritize explicit variables over environment variables (#7441)
- typing == 3.6.6 (#7562)
- typing requirements (#7564)
- predict.py (#7206)
- PlotReport.available() check in glance example (#7209)
- reset method in the PTB example (#7533)
- F.tensordot test (#6968, thanks @ishanrai05!)
- F.cumprod test (#6978, thanks @hikjik!)
- F.average test (#6995, thanks @hikjik!)
- test_cuda.py to backends_tests (#7144)
- chainerx.swapaxes test (#7184, thanks @kshitij12345!)
- Variable.grad and Variable.grad_var tests (#7191)
- Variable.zerograd test (#7199)
- chainerx.conv and chainerx.conv_transpose (#7203)
- TestTanh from test_math.py to test_trigonometric_hyperbolic.py (#7207)
- Variable.copydata test (#7224)
- CUDA_VISIBLE_DEVICES in ChainerX tests (#7290)
- chainer.as_array test (#7318)
- StandardUpdater tests with pytest style assertion (#7326)
- 0 to 0.0 for python2 (#7373)
- dstack to invalid_shape test (#7457, thanks @kshitij12345!)
- pytest.mark.xfail instead of unittest.expectedFailure (#7488)