Chainer

6.3.0

This is the release note of v6.3.0. See [here](https://github.com/chainer/chainer/milestone/95?closed=1) for the complete list of solved issues and merged PRs.

Highlights

- NumPy 1.17 is now officially supported.

New Features

- Add automatic management of snapshots (deletion and load) (7862)
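
As a rough illustration of this new snapshot management, here is a minimal training-script sketch. The toy model and dataset, and the `n_retains`/`autoload` keyword arguments of `extensions.snapshot`, are assumptions for illustration rather than details taken from this release note.

```python
# Minimal sketch (assumptions noted above): automatic snapshot deletion and
# resume with extensions.snapshot. The toy model/dataset are placeholders.
import numpy as np
import chainer
import chainer.links as L
from chainer import training
from chainer.training import extensions

model = L.Classifier(L.Linear(10, 2))  # placeholder model
dataset = chainer.datasets.TupleDataset(
    np.random.rand(100, 10).astype(np.float32),
    np.random.randint(0, 2, size=100).astype(np.int32))
iterator = chainer.iterators.SerialIterator(dataset, batch_size=16)
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
updater = training.updaters.StandardUpdater(iterator, optimizer)
trainer = training.Trainer(updater, (5, 'epoch'), out='result')

# Keep only the three newest snapshots (older ones are deleted) and resume
# automatically from the latest snapshot in `result/` if one exists.
trainer.extend(
    extensions.snapshot(n_retains=3, autoload=True),
    trigger=(1, 'epoch'))

trainer.run()
```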

Enhancements

- Fix Adam FP16 overflow on GPU kernels (7780)
- Make `F.average` as accurate as backend (7782)
- Fix `type_check` error message on evaluating bool expression (7801)
- Fix module in msg of `type_check` (7810)
- Fix `F.clip` for NumPy 1.17 (7855)

Bug Fixes

- Fix `Parameter.dtype` for uninitialized parameter (7749)
- Fix `UpdateRule.use_fp32_update` for uninitialized parameter (7751)
- Avoid unload module call in `PureNcclCommunicator` (7787)
- Fix `TypeError` in `max_pooling_2d` (7789, thanks ishanrai05!)
- Fix `create_mnbn_model()` bug (7846)
- Fix backward of `split_axis` for intel64 when `grad_outputs` contains `None` (7931)
- Fix precision in `F.max_pooling_2d` (7933)
- Fix `backends.copyto` from/to chainerx (7934)
- Fix in-place update of arrays in `Link.serialize` and `optimizers.Adam` (7941)
- Fix ChainerX non-native deserialization (7954)
- Fix multi-device loss scaling (7968)

Documentation

- Fix `static_graph` docs code examples (7884)
- Add 1.17 to supported NumPy versions (7961)

Tests

- Fix test of `chx.reshape` (7792)
- Revert 6754 (Fix Travis with macOS) (7800)
- Fix a typo in `test_communicator` (7822)
- Fix `F.clipped_relu` test for NumPy 1.17 (7854)
- Switch current directory in Jenkins tests (7856)
- Fix flaky `TestHuberLoss` (7869)
- Configure tolerances of `F.average_pooling_2d` test (7870)
- Refactor convolution functions tests (7873)
- Relax tolerances in convolution link tests when using old cuDNN (7878)
- Fix `chainerx.logsumexp` test tolerance (7889)
- Relax tolerances in convolution function tests when using old cuDNN (7895)
- Sample stable inputs in tests of group normalization (7899)
- Relax float16 tolerances in ChainerX binary math tests (7908)
- Avoid `ndarray.data` access and fix wrong test (7913)
- Avoid unstable inputs in tests of decorrelated batch normalization (7915)
- Avoid testing `F.cast` from negative floating-point to unsigned (7944)
- Relax fp16 tolerances in `TestContrastive` (7959)
- Relax fp16 tolerance in `TrueDiv` test (7972)
- Fix tolerance in `L.CRF1d` test (7977)

6.2.0

This is the release note of v6.2.0. See [here](https://github.com/chainer/chainer/milestone/93?closed=1) for the complete list of solved issues and merged PRs.

Enhancements

- Avoid code duplication in optimizer hook implementation (7674)
- Use `six.integer_types` for axis check in `F.concat` (7712, thanks knorth55!)
- Use `six.integer_types` for axis checks (7770)

Bug Fixes

- Fix a bug of `chainermn.links.create_mnbn_model` (7618)
- Fix unit selection of `CupyMemoryProfiler` (7639)
- Skip `None` array in `FunctionNode` NaN check (7642)
- Fix `AMSGrad` with intel64 backend (7689)
- Fix spectral normalization chainerx conversion (7705)
- Fix `PickleDataset` crash when using multiprocessing (7729, thanks zaltoprofen!)
- Fix pickling issues on `MultiprocessIterator` (7742)
- Fix an error on `chainer.grad` for multiple devices (7746)

Code Fixes

- Remove backslashes to continue lines of link targets (7182)
- Use `backend.get_array_module` not `cuda.get_array_module` (7619, thanks crcrpar!)
- Avoid code duplication and access violation between `Optimizer` and `GradientMethod` (7644)

Documentation

- Add `chainer.get_device` to doc (6831)
- Correct `EmbedID` documentation (7575)
- Fix documentation for `shape` in `generate_array` (7576)
- Fix typos in ResNet prepare method (7579)
- Fix inconsistent document for extension finalizer (7581)
- Fix typos in `expand_dims.py` (7608)
- Minor grammar improvements to broadcast documentation (7623)
- Explain corresponding `Link`s (7628)
- Correct missing parenthesis in documents (7635, thanks tinunkai!)
- Tiny fix of `BackwardContext` comment (7636, thanks crcrpar!)
- Edit `FunctionNode` docs (7659)
- Improve contribution docs (7680)
- Fix typo in `F.squeeze` documentation (7688)
- Fix a grammar error (7711)

Examples

- Fix typo in `examples/vae/train_vae.py` (7580, thanks m4saka!)
- Support default dtype in sentiment example's recursive minibatch version (7596)
- Warn NaN in FP16 mode in sentiment example's recursive minibatch version (7598)
- Example fix: stateful triggers cannot be reused (7683)

Tests

- Fix `y_shape` not used in tests (7612)
- Fix `GroupNormalization` tests (7700)
- Fix warning filter for protobuf (7744)
- Fix flaky `TestContrastive` (7765)

6.1.0

This is the release note of v6.1.0. See [here](https://github.com/chainer/chainer/milestone/91?closed=1) for the complete list of solved issues and merged PRs.

Enhancements

- Avoid unnecessary updates in `F.batch_renormalization`, and related fixes (7197)
- Fix typo in `Variable.backward` (7208)
- `MultiprocessParallelUpdater` to support new devices (7246)
- Add type hints to `Variable` (7445)
- Improve `get_device` error message when ChainerX is not available (7461)
- Check positive dilation in `F.convolution_2d` (7499)
- Check positive dilation in `F.deconvolution_2d` (7500)

Bug Fixes

- Fix uncopyable `MultiNodeBatchNormalization` (7254)
- Fix initialization of `L.Linear` when called with `n_batch_axes` (7300)
- Improve type check in `_values_to_dicts` so that it also works with Unicode strings on Python 2 (7323)
- Fix a bug in `Bernoulli.log_prob` (7334, thanks seiyab!)
- Fix a bug that root is ignored in `scatter_dataset` and `bcast` (7360)
- Fix condition to invoke cuDNN dropout (7374, thanks crcrpar!)
- Fix mypy errors (7465)
- Make `WeightDecay` aware of loss scale (7510)
- Fix AdamW update rule regression on CPU (7516)
- Fix type check of `F.where` (7532)

Code Fixes

- Fix code style for long expressions (7542)

Documentation

- Clarify the description of the initializer argument (7070)
- Remove extra spaces in docstrings (7130)
- Fix link to ChainerMN docs in performance guide (7131)
- Document passive attributes in `FunctionTestCase` (7134)
- Fix dead sphinx links (7159)
- Document `backend.get_device_from_array` (7168)
- Document `F.copy` view behavior (7174)
- Add `optimizers.MSVAG` to documentation (7193)
- Add missing doc entry for `CommunicatorBase.allgather` (7195)
- Remove `chainerx.md` (7218)
- Fix grammatical errors in documentation (7219)
- Fix typos in `chainer.utils.type_check` (7274, thanks ktns!)
- Improve device documentation (7288)
- Fix capitalization of `F.relu` in doc (7299)
- Fix invalid escape sequences in ChainerX routine docstrings (7336)
- Fix `F.normalize` documentation (7337, thanks crcrpar!)
- Fix format of `static_graph.rst` (7399)
- Avoid setting `test_iter.epoch` manually in the tutorial of training loop (7410)
- Avoid installing ChainerX when building docs of other projects on ReadTheDocs (7426, thanks knorth55!)
- Fix `robots.txt` to allow indexing root (7458)
- Add reference and warning to `F.swish` document (7467, thanks fiarabbit!)
- Change Deformable Convolution 2D docs to match arguments (7468, thanks higumachan!)
- Remove test coverage from ChainerX contribution guide (7469)
- Remove "Comparison with other frameworks" from docs (7477)
- Improve `F.normalize` documentation (7482, thanks crcrpar!)

Installation

- Fix ChainerX compilation with MSVC (7173, thanks durswd!)
- Fix typing requirements (7566)

Examples

- Support device specifiers in examples:
  - Support device specifier in image captioning example (7229)
  - Support device specifiers in MNIST data parallel example (7233)
  - Support device specifiers in pix2pix example (7235)
  - Support device specifiers in static graph example (7236)
  - Support device specifiers in PTB example (7263)
  - Support device specifiers in ImageNet data parallel example (7303)
  - Support ChainerX in PTB gentxt example (7340)
- Fix sentiment example test (7238)
- Warn NaN in FP16 mode in examples:
  - Warn NaN in FP16 mode in wavenet example (7376)
  - Warn NaN in FP16 mode in static_graph_optimizations/mnist example (7377)
  - Warn NaN in FP16 mode in word2vec example (7378)
  - Warn NaN in FP16 mode in sentiment example (7380)
  - Warn NaN in FP16 mode in static_graph_optimizations/cifar example (7381)
  - Warn NaN in FP16 mode in reinforcement learning examples (7382)
  - Warn NaN in FP16 mode in dcgan example (7383)
  - Warn NaN in FP16 mode in memnn example (7386)
  - Warn NaN in FP16 mode in pos example (7387)
  - Warn NaN in FP16 mode in pix2pix example (7388)
  - Warn NaN in FP16 mode in vae example (7412)
- Implement `reset` method in the PTB example (7535)

Tests

- Use `CUDA_VISIBLE_DEVICES` in ChainerX tests (7294)
- Move `test_cuda.py` to `backends_tests` (7295)
- Improve mergify configuration (7301)
- Add configuration of new CI system (7403)
- Change `0` to `0.0` for Python 2 (7508)
- Add a test to reproduce the bcast deadlock problem (7554)

Others

- Add `.mergify.yml` (7151)
- Remove "Research projects using Chainer" from README (7459)

6.0.0

This is the release note of v6.0.0. See [here](https://github.com/chainer/chainer/milestone/89?closed=1) for the complete list of solved issues and merged PRs.

This release note only covers the differences from v6.0.0rc1; for all highlights and changes, please refer to the release notes of the pre-releases.

6.0.0rc1

This is the release note of v6.0.0rc1. See [here](https://github.com/chainer/chainer/milestone/88?closed=1) for the complete list of solved issues and merged PRs.

Announcements

- After this release, the master branch switches to the development of the v7 series. Development of the v6 series will continue on the `v6` branch.
- (6629) You can now access the product backlog (the task list that the ChainerX core team is willing to work on) as a spreadsheet [here](https://docs.google.com/spreadsheets/d/1daitXlRhHu7eZENFUs1cHw8o12rmA8bvudUQ0Yof8Jc/edit#gid=0). Note that the sheet is actively edited by the ChainerX core dev team. The items are NOT promises; we may drop any feature from the list at any time, but you can use it to see the direction development is heading in the near future.

Highlights

- Mixed precision training support is improved.
- In particular, mixed precision mode (a.k.a. the mixed16 dtype) is added. You can set the environment variable `CHAINER_DTYPE=mixed16` to make Chainer choose appropriate dtypes for mixed precision training (in most places this is `float16`, but `float32` is automatically chosen where it is better for precision or performance).
- Loss scaling for avoiding underflow in backprop with float16 now supports dynamic mode. In this mode, the scaling factor is adjusted during training so that backprop does not overflow. You can use it with `(optimizer).loss_scaling()`. See the documentation for details.
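
A minimal sketch combining these two highlights is shown below; only `CHAINER_DTYPE=mixed16` and `optimizer.loss_scaling()` come from the notes above, while the toy model and data are assumptions for illustration.

```python
# Minimal sketch: mixed16 dtype plus dynamic loss scaling.
# Only CHAINER_DTYPE and optimizer.loss_scaling() come from the notes above;
# the toy model and data are placeholders.
import os
os.environ['CHAINER_DTYPE'] = 'mixed16'  # set before importing chainer

import numpy as np
import chainer
import chainer.links as L

model = L.Classifier(L.Linear(10, 2))   # placeholder model
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
optimizer.loss_scaling()                # enable dynamic loss scaling

x = np.random.rand(8, 10).astype(np.float16)
t = np.random.randint(0, 2, size=8).astype(np.int32)

# Passing the loss function to update() lets the optimizer apply the
# (dynamically adjusted) loss scale during backprop.
optimizer.update(model, x, t)
```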

Changes without compatibility

- Deprecate old NCCL versions and related communicators (6506)
- Support for NCCL versions earlier than 2.3 is deprecated. We encourage users to upgrade to NCCL 2.3 or later.

New Features

- Human-readable representation of link and chain (4853, thanks wkentaro!)
- Add `variable.item()` (5797, thanks crcrpar!)
- Refactor `Link.to_device` family (5986)
- Add decorrelated batch normalization (6150, thanks crcrpar!)
- Add option `unit` to `CupyMemoryProfileHook.print_report()` (6256, thanks hitsgub!)
- Add `distributions.Independent` (6324, thanks ganow!)
- Dynamic loss scaling (6337, thanks anaruse!)
- Add ChainerX `FloorDivide` (6350)
- Customizable forward output check in `testing.FunctionTestCase` (6444)
- Adding fp16 support to the ChainerMN communicators (6448)
- `mixed16` mode and its support in `L.BatchNormalization` (6456)
- Add shape and dtype check before allreduce (6461)
- Add `F.relu6` as an alias to `F.clipped_relu` (6463, thanks aksub99!)
- Implementation of sigmoid for ChainerX (6472, thanks dido1998!)
- Add `minimum` to chainerx (6477, thanks aksub99!)
- Add `square` to chainerx (6486, thanks aksub99!)
- Add `chainerx.testing.integral_dtypes` (6526)
- Support for `chainer.mixed16` data type in PureNcclCommunicator (6548)
- Add `LinkTestCase` to simplify link tests (6559)
- Add `Sin` and `Cos` to chainerx (6601, thanks kshitij12345!)
- Support for fp16 and mixed16 in `MultiNodeBatchNormalization` of ChainerMN (6619)
- Add `tan`, `arcsin`, `arccos`, `arctan` to ChainerX (6703, thanks IvanYashchuk!)

Enhancements

- Improve `F.resize_images` speed (5753, thanks grafi-tt!)
- Improve `F.group_normalization` via cuDNN call (5924, thanks grafi-tt!)
- Fix backward of `F.average_pooling_nd` with `pad_value` of None (6332, thanks crcrpar!)
- Support for fp16 in naive comm (6333)
- Change backward of `F.log_ndtr` to avoid NaN (6340)
- Stop retaining `y.grad` on `y.backward(retain_grad=False)` (6348)
- Set `requires_grad` explicitly in `gradient_check` and function test (6364)
- Fix error messages in `get_fans` (6365)
- ChainerX dtype promotion: mathematical functions (6379)
- Mixed dtype: concatenate (6381)
- `ResultType` to take kind into account (6419)
- Improve `FunctionTestCase` error message (6426)
- Mixed dtype: arithmetics (6432)
- Change intermediate dtype of `Adam` for float16 parameters to float32 (6442)
- Mixed dtype: dot (6443)
- Avoid using pytest attributes during import (6453)
- Dot product for higher dimensions in ChainerX (6476, thanks dido1998!)
- Remove dtype from `chainerx.Scalar` (6481)
- Mixed dtype: `BatchNorm` and `FixedBatchNorm` (6484)
- Support `chainerx::Take` indices of dtypes other than int64 (6485)
- Keep backward compatibility on `cupy.cudnn.batch_normalization_forward_training` (6497)
- Deprecate old NCCL versions and related communicators (6506)
- Mixed dtype: `chainerx::conv` and `chainerx::conv_transpose` (6510)
- Support non-float cast in `F.cast` (6518)
- Remove restriction of `x.dtype == b.dtype` in `F.convolution_nd` and `F.deconvolution_nd` (6524)
- Avoid exposing `chainerx.Scalar` to Python (6535)
- Fix `parameterize_pytest` to allow parameterizing with tuples (6554)
- Change device spec (6563)
- Mixed dtype support in `chainerx.linear` (6569)
- Check lengths of args of `chainer.grad` (6580)
- Mixed dtype: comparison (6590)
- Fix linspace (6605, thanks kshitij12345!)
- Add `PerformanceWarning` (6617)
- Implement ChainerX version of clipped ReLU forward (6627, thanks Harshan01!)
- Allow comma separated keys in `testing.product` (6635)
- `BatchNormalization` to only allocate dummy mean and var in cuDNN path (6656)
- Generate shorter class names for parameterized tests (6660)
- ChainerX dynamic op registry (6675)
- Remove unnecessary broadcasts from `F.layer_normalization` (6680, thanks hitsgub!)
- Remove unnecessary broadcasts from `F.l2_normalization` (6681, thanks hitsgub!)
- Support cupy-cuda101 package (6700)
- Properly handle FP16 in `D.Normal` (6709)
- Mixed-dtype: `minimum` and `maximum` (6713)
- Op registration: indexing (6718)
- Op registration: logic (6727)
- Op registration: trigonometric (6729)

Bug Fixes

- Forbid calling empty `Sequential` (6304)
- Fix fp16 issue in batch normalization (6323, thanks anaruse!)
- Fix `F.softmax_cross_entropy` float16 under/overflow (6366)
- Fix lazy init of `BatchNormalization` link (6369)
- Fix `str.join` `TypeError` in `FunctionTestCase` helper (6370)
- Fix `chainer.links.NStepRNN` and its variants (6415, thanks crcrpar!)
- Fix an off-by-one in slicing of `chainerx::Array` (6540)
- Fix more corner cases in `chainerx::Slice` (6557)
- Fix dimension check of `chainerx::Linear` (6593, thanks crcrpar!)
- Fix ChainerX optimizer fallback for non-default devices (6699)
- Fix `DeviceResident.to_gpu` fallback argument (6712)

Code Fixes

- Fix F632 (use `==` / `!=` to compare str) (6346)
- Avoid ` NOQA` in docstrings (cont.) (6356)
- Fix comment style of `op_utils.py` (6421)
- Refactor `chainerx::Linear` (6425)
- Fix `ResultTypeResolver` multiple definitions (6439)
- Assert that input to array props formatter is a list or tuple (6440)
- Fix style of `.clang-tidy` (6445)
- Remove unnecessary `AsContiguous` in `CudaConv::ConvGradWeight` (6520)
- Remove commented out code from `_BNMode` (6582)
- Change the deprecated `collections` (6645)
- Remove obsolete assertions (6648)
- Allow `ArrayBody::GetArrayNode` to return null (6658)
- Make `BackwardBuilder::Target` less stateful (6659)
- Clean up test code (6688)

Documentation

- Write guides to implement new-style functions (4986)
- Fix typo (6384, thanks aksub99!)
- Fix Sphinx markups in RNNs docs (6412, thanks crcrpar!)
- Fix documentation of `TimerHook` (6433, thanks hitsgub!)
- Refactor documentation of `F.prelu` (6455, thanks fiarabbit!)
- Fix typo in docstring of `classification_summary` (6515, thanks yewang!)
- Write TODOs to address `Dot` backward cast (6537)
- Override `forward` in `LinkHook` documentation (6546, thanks crcrpar!)
- Remove duplicated entry in reference (6571)
- Fix `F.rrelu` documentation (6581, thanks fiarabbit!)
- Add `gradient_check.check_double_backward` in reference (6584)
- Fix `:meth:` link (6603, thanks 23pointsNorth!)
- Update broken link in `chainerx.md` (6610, thanks kshitij12345!)
- Improve docs and exception message in `F.erfcx`, `F.erfcinv` and `F.erfinv` (6618)
- Include a link to ChainerX product backlog (6630)
- Fix missing module declaration (6662)
- Fix `chainer.backend.get_array_module` documentation (6663)
- Fix typo: 'Notatition' -> 'Notation' (6673, thanks nai62!)
- Fix test failures in FunctionNode implementation guide (6734)

Installation

- Environment variable to set ChainerX Python binding build type (6647)
- Check `CMAKE_BUILD_TYPE` (6664)

Examples

- Use `args.out` in `train_cifar_custom_loop.py` (6378, thanks crcrpar!)
- Fix to use right device for DALI iterator in imagenet example (6397)
- Properly pass device ID to DALI pipelines in imagenet example (6429)
- Use `__future__.division` in imagenet example with Python 2 (6462)
- Fix broken imagenet example (6489)
- Fix wavenet example to support the default dtype (6536)
- Use float division instead of `__future__.division` for Python 2 (6562)
- Fix DCGAN example to work with default dtype (6585)
- Use `F.matmul` instead of `F.batch_matmul` in memnn example (6611)
- Remove unnecessary `unchain_backward()` in pix2pix example (6634, thanks hayato-maki!)
- Fix file mode of `mushrooms.csv` (6693)
- Replace deprecated URLopener in `download.py` (6694)

Tests

- Test all codes in `guides/functions.rst` (6194)
- Test various `spatial_scale` for `roi_average_pooling_2d` (6238, thanks knorth55!)
- Test simplifications:
  - Simplify `F.swish` test (6306, thanks ishanrai05!)
  - Simplify `F.log_softmax` test (6320, thanks ishanrai05!)
  - Simplify `F.softmax_cross_entropy` test (6363)
  - Simplify `F.softmax` test (6371, thanks aksub99!)
  - Simplify `F.fliplr` test (6389, thanks ishanrai05!)
  - Simplify `F.flipud` test (6390, thanks ishanrai05!)
  - Simplify `F.moveaxis` test (6392, thanks ishanrai05!)
  - Simplify `F.pad` test (6393, thanks ishanrai05!)
  - Simplify `F.squared_difference` test (6395, thanks aksub99!)
  - Simplify `F.minimum` test (6396, thanks aksub99!)
  - Simplify `F.maximum` test (6400, thanks aksub99!)
  - Simplify tests of `F.convolution_2d` and `F.convolution_nd` (6406, thanks crcrpar!)
  - Simplify `F.rollaxis` test (6408, thanks ishanrai05!)
  - Simplify `F.vstack` test (6410, thanks ishanrai05!)
  - Simplify `F.transpose` test (6458, thanks ishanrai05!)
  - Simplify `F.tile` test (6459, thanks ishanrai05!)
  - Simplify `F.swapaxes` test (6460, thanks ishanrai05!)
  - Simplify `F.resize_images` test (6464, thanks ishanrai05!)
  - Simplify `F.expand_dims` test (6473, thanks ishanrai05!)
  - Simplify `F.prod` test (6479, thanks aksub99!)
  - Simplify `F.squeeze` test (6487, thanks ishanrai05!)
- Fix `examples/.gitignore` (6391, thanks crcrpar!)
- Suppress warning in caffe test (6402)
- Add ChainerX test to `FunctionTestCase`s (6416)
- Remove `SPHINXOPTS` env from Makefile (6417)
- Rewrite ChainerX connection tests (6424)
- Fix regex in `test_print_report` (6430)
- Fix duplicated test (6434)
- Add strides check in `NumpyOpTest` (6437)
- Rewrite ChainerX indexing tests (6438)
- Add float16 and float64 to `F.group_normalization` test (6468, thanks crcrpar!)
- Rewrite ChainerX linalg tests (6469)
- Fix `F.pad` test for Python2 (6478)
- Fix input of `F.vstack` to a list of ndarrays (6494, thanks crcrpar!)
- Change pytest version requirement (6502)
- Force camel case class name for `OpTest` (6507)
- Test result dtype permutation (6511)
- Fix test class name (6532)
- Rewrite ChainerX `batch_norm` test (6542)
- Rewrite ChainerX sorting tests (6550)
- Rewrite ChainerX logic tests (6551)
- Rewrite ChainerX activation tests (6553)
- Rewrite ChainerX manipulation tests (6556)
- Rewrite ChainerX `fixed_batch_norm` test (6558)
- Rewrite ChainerX pooling tests (6560)
- Rewrite ChainerX arithmetics tests (6566)
- Rewrite ChainerX math tests (6568)
- Fix tolerance in `chainerx.divide` test (6573)
- Improve arithmetics tests (6577)
- Adjust tolerances of `F.einsum` tests (6588)
- Check grads of inputs to test backward of collective communication (6589)
- Avoid mutating `FunctionTestBase` class attributes (6599)
- Avoid mutating `LinkTestCase` and `LinkInitializersTestCase` class attributes (6600)
- Make `op_test` decorator remove the previous class (6602)
- Use `compute_60` instead of `compute_50` to run test on P100 (6633)
- Destroy NCCL communicator after every use (6636)
- Run ChainerX python tests in debug build (6649)
- Suppress numpy warnings in math tests (6651)
- Fix testing condition of `BatchNormalizationMultiGpuTest` (6652)
- Remove C++ routines tests (6667)
- Minimize the Travis CI matrix (6677)
- Fix conflicts between 6432 and 6486 (6679)
- Stop clang-tidy test in Travis CI (6682)
- Fix tolerance in `TestConvTranspose` (6691)
- Rewrite the rest of math tests (6695)
- Fix test failure in cuDNN v7.5 (6710)
- Fix `F.convolution_nd` test for flake8 (6711)
- Relax tolerances in `convolution_nd` function test (6728)

6.0.0b3

This is the release note of v6.0.0b3. See [here](https://github.com/chainer/chainer/milestone/86?closed=1) for the complete list of solved issues and merged PRs.

Highlights

- Spectral Normalization is supported as a link hook
- Kuzushiji-MNIST dataset is now available at `chainer.datasets`
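
A short sketch of how these two highlights might be used follows; the exact names `chainer.link_hooks.SpectralNormalization` and `chainer.datasets.get_kuzushiji_mnist` are assumed here rather than spelled out in this note, so treat them as assumptions and consult the linked PRs and documentation for the exact API.

```python
# Sketch of the two highlights above; SpectralNormalization and
# get_kuzushiji_mnist are assumed names, not quoted from this note.
import chainer
import chainer.links as L

# Spectral normalization attached as a link hook to a convolution link.
conv = L.Convolution2D(3, 64, ksize=3, pad=1)
conv.add_hook(chainer.link_hooks.SpectralNormalization())

# Kuzushiji-MNIST, exposed like the other MNIST-style datasets.
train, test = chainer.datasets.get_kuzushiji_mnist()
print(len(train), len(test))
```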

Changes without compatibility

- Raise `NotImplementedError` if `Extension.__call__` is not overridden (6095)
- Fix `get_retained_{in/out}puts` to return `None` for `None` inputs/outputs (6121)
- Rename `chainerx` -> `chx` in public API (6312)

New Features

- Unchain all variables after running extensions (5539, thanks hitsgub!)
- Add spectral normalization link hook (5742, thanks crcrpar!)
- Add non-deterministic warning (5977)
- Add `finished` property to `once_trigger` (6023, thanks hitsgub!)
- Call `Iterator.finalize` from `__del__` and `__exit__` (6098)
- Add dilate argument to `L.Deconvolution2D` (6175, thanks crcrpar!)
- Add `create_mnbn_model` (6245)
- Add option `align_units` to `TimerHook.print_report()` (6254, thanks hitsgub!)
- Add Kuzushiji-MNIST dataset (6295, thanks wintercarver!)
- Add synchronized iterator (6345)
- Converter decorator for ChainerX device support (5832)
- Add ChainerX CUDA float16 (5845)
- `chainerx.ndarray.item` (6050)
- `chainerx.grad` Python binding (6063)
- Unwrap ChainerX connected array from `Variable` (6284)
- `chainerx::ResultType` (6347)

Enhancements

- Unify arguments of file names (5357, thanks crcrpar!)
- Support `spatial_scale` >= 1.0 in `F.roi_average_align_2d` (5634, thanks knorth55!)
- Support `spatial_scale` >= 1.0 in `F.roi_max_align_2d` (5635, thanks knorth55!)
- Fix `pseudo_connect` with `None` input (5652)
- Enforce `Link.__init__` in subclasses (5927)
- Add sequence and numpy array indices support to `ndarray.take` (6081)
- Reduce memory usage in `MultiprocessParallelUpdater` (6100)
- Fix `get_retained_{in/out}puts` to return `None` for `None` inputs/outputs (6121)
- Check input size consistency in RNN and LSTM when using cuDNN (6169)
- Add support for importing and exporting Caffe `Sigmoid` layer (6234, thanks notogawa!)
- Add `group` option value of `Convolution2D` to Caffe exporter (6241, thanks ohnabe!)
- Improve errors for disabled `Variable` operators (6255)
- `DimsFormatter` to print a list of dimensions (6064)
- Support `FunctionNode` `None` inputs in ChainerX (6122)
- Fix ChainerX fallback for replaced optimizer state (6218)
- Use FMA in `NativeDevice::Dot` (6227)
- Use float accumulation in ChainerX float16 `Dot` (6246)
- Make Chainer backprop modes affect ChainerX counterparts (6278)
- Support ChainerX `TrueDivide` for integer types (6281)
- Rename `chainerx` -> `chx` in public API (6312)
- Improve accuracy of ChainerX native float16 `Sum` (6313)

Performance Improvements

- Optimize `Variable.xp` to avoid creation of `Device` instance (6016)
- Add `Variable._init_unchecked()` static method for faster instantiation (6033)
- Avoid `contextmanager` in backprop (6264)
- Improve `F.relu` performance with CuPy (6268)
- Improve `get_variable` performance (6269)
- Pass debug flag to `backprop_step` (6286)
- Improve hook handling in backward (6289)
- Improve performance of `using_config` (6290)
- Reduce `chainer.is_debug()` overhead (6291)
- Improve performance of `using_device` for NumPy and Intel64 devices (6292)
- Support NumPy integers in `chainerx.ndarray.__getitem__` (5989)

Bug Fixes

- Make signs generated by `initializers.Orthogonal` unbiased (5615)
- Use ideep in optimizers properly (5985)
- Fix warning message for backward on a scalar array (6026)
- Validate `{Max,Average}Pool` `kernel_size` and `stride` (6066)
- Validate `Conv`, `ConvTranspose` stride (6067)
- Fix cupy import failure detection (6085)
- Fix memory leak during backprop in Python 2 (6105)
- Fix `FunctionNode.get_retained_outputs` to return `()` if no output is retained (6118)
- Do not compare `xp` with `numpy` for cupy code path (6126)
- CuPy cannot be enabled when cuDNN is unavailable (6138)
- Fix double-backprop of `F.rrelu` (6139)
- Check Array constructor for nullptr (6156)
- Do not compare `xp` with `numpy` for cupy code path (cont.) (6159)
- Fix type of internally held grad after `Parameter.to_device` (6170)
- Fix `Optimizer` to convert state arrays back to ChainerX (6171)
- Fix error message of parameterized test (6287)
- Add `Device.__ne__` for Python 2 (6335)
- Fix pickling of ChainerX link (5988)
- Fix thread safety of CUDA memory pool `FreeUnusedBlocks` (5992)

Code Fixes

- Fix import order (6128)
- Simplify `_check_grad_type` (6213)
- Cosmetic fix to `test_gradient_check` (6271)
- Fix inappropriate usage of `is_arrays_compatible` (6274)
- Use `utils.size_of_shape` in `F.convolution_nd` and `F.deconvolution_nd` (6329)
- Use single quotes (6352)
- Simplify `_array_to_gpu` with stream argument (6358)
- Add `NOLINT` to `reinterpret_cast` (6051)
- Wrap platform specific operations and reduce macro usage (6054)
- Use `py::isinstance` to check types (6083)
- Use `_has_chainerx_array` in `Variable` (6214)
- Write comment about `CHAINERX_VISIBILITY_HIDDEN` (6231)
- Fix clang-tidy errors (6267)

Documentation

- Make docs of functions refer `ndarray` (6042)
- Fix typo in `classifier.py` (6090, thanks hiden-cubist!)
- Document NumPy 1.16 support (6111)
- Remove anchor to non-existing section (6130)
- Reorganize documentation for easier access to tutorials and examples (6142)
- Fix old and broken PTB url (6177)
- Add imports of initializers and math, required in "Define your own function" examples (6179, thanks Qwinpin!)
- Update URL of PTB dataset (6182)
- Add upgrade guide for use of `Link.forward` method (6183)
- Avoid ` NOQA` in docstrings (6184)
- Add `FunctionTestCase` to documentation (6189)
- Add references for n-dimensional arrays (6219)
- Fix typo in imagenet `README.md` (6223)
- Update docs for Python 3.4 end-of-life (6300)
- Remove duplicate periods in Installation section of `README.md` (6339, thanks crcrpar!)
- Avoid ` NOQA` in docstrings (6355)
- Fix ChainerMN Step-by-Step Troubleshooting (6328)
- Document `chainermn.links.create_mnbn_model` (6360)
- Document ChainerX op test tool (6354)

Installation

- Remove bad brew option from Travis CI (6202)
- Upgrade clang-tidy to 6.0 (6062)
- Use `CMAKE_CURRENT_BINARY_DIR` in `CMakeLists.txt` (6114)
- Set CMake policy in a proper way (6166)
- Make ChainerX compile on Windows (6176, thanks durswd!)

Examples

- Fix seq2seq example (6091)
- Fix iterator syntax in MNIST custom loop example (6099)
- Fix seq2seq example encoding problem on Python3 (6205)
- Minor fix on README of seq2seq example (6206)
- Remove FP16 specific models from imagenet example (6215)
- Remove `PrintReport` entries in seq2seq example (6308)
- Fix `dali_util` in imagenet example for fp16 (6342, thanks anaruse!)
- ChainerX seq2seq example (5830)
- Fix ChainerX `train_mnist.py` example for NumPy 1.16 (5999, thanks Guriido!)
- Fix to check chainerx device in ImageNet example (6280)

Tests

- Simplify `F.batch_renormalization` test (5817)
- Simplify `F.mean_squared_error` test (5822)
- Simplify `F.concat` test (5823)
- Add Windows matrix in Travis CI (5888)
- Limit the length of parameterized test class name (6060)
- Simplify `F.crelu` and `F.elu` test (6070)
- Fix Travis CI ignoring non-last command errors in each step (6082)
- Fix chainermn tests (6048)
- Remove Travis macOS Py34 job (6107)
- Remove unused test step (6123)
- Move Jenkins mypy check to misc matrix (6124)
- Fix filtering `FutureWarning` (6135)
- Fix tolerance and numeric grad precision in `F.triplet` test (6136)
- Remove Travis Ubuntu Py34 job (6149)
- Remove commented-out Py34 matrix from AppVeyor (6160)
- Fix unit test collection timeout (6164)
- Add `x_dtype` and `W_dtype` to the `if` statement of `FunctionTestCase._skip_if_chainerx_float16` (6167, thanks crcrpar!)
- Stop mypy in CIs (6172)
- Simplify `F.tanh` test (6173, thanks crcrpar!)
- Simplify `F.sigmoid` test (6174, thanks crcrpar!)
- Simplify `F.hard_sigmoid` test (6192, thanks crcrpar!)
- Rewrite the tests of `F.average_pooling_2d` (6211, thanks crcrpar!)
- Rewrite linear function test (6236, thanks crcrpar!)
- Simplify `F.selu` test (6243, thanks aksub99!)
- Simplify `F.softplus` test (6298, thanks ishanrai05!)
- Simplify `F.leaky_relu` test (6301, thanks aksub99!)
- Simplify `F.maxout` test (6302, thanks aksub99!)
- Simplify `F.sum` test (6307, thanks aksub99!)
- Improve accuracy of test of `F.rrelu` (6318)
- Simplify `F.diagonal` test (6322, thanks ishanrai05!)
- Write test types in Travis CI job names (6361)
- Check CUDA device after each test case of `chainerx_tests` (6049)
- Skip ChainerX float16 tests when `FunctionTestCase` is used (6069)
- Remove legacy `CHAINERX_CUDA_MULTITHREAD_TEST_SEGV_WORKAROUND` from Jenkins script (6108)
- Run ChainerX python tests in Travis CI (6109)
- Enable ChainerX C++ test in Travis CI (6110)
- ChainerX test tool for ops (6248)
- Use Chainer-style parameterization in ChainerX op test (6334)
