This is the release note for v7.0.0b3. See [here](https://github.com/chainer/chainer/milestone/96?closed=1) for the complete list of resolved issues and merged PRs.
# Dropping Support of Python 2
Due to the end-of-life (EOL) of Python 2 in January 2020, Python 2 support has been dropped in this release. Chainer v6.x continues to support Python 2. See the [blog post](https://chainer.org/announcement/2019/08/21/python2.html) for details.
# Note on `F.max_pooling_2d` refactoring
The implementation of `F.max_pooling_2d` has been merged into `F.max_pooling_nd`. The behavior is unchanged, so ordinary users should not be affected by this change. However, the `FunctionNode` class recorded in the computational graph for `F.max_pooling_2d` has changed from `MaxPooling2D` to `MaxPoolingND`. Code that explicitly depends on this class will need a fix; see the sketch below.
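For code that inspects the computational graph, the following minimal sketch (written for this note, not taken from the release) illustrates the change; `creator_node` and the `chainer.functions.pooling.max_pooling_nd` module path are standard Chainer names, but adjust the import if your code previously referenced `MaxPooling2D`.

```python
import numpy as np
import chainer
import chainer.functions as F

x = chainer.Variable(np.random.rand(1, 3, 8, 8).astype(np.float32))
y = F.max_pooling_2d(x, ksize=2)

# The node recorded in the graph is now MaxPoolingND, not MaxPooling2D.
print(type(y.creator_node).__name__)  # -> 'MaxPoolingND'

# Code that checked the concrete class (e.g. via isinstance) needs updating:
from chainer.functions.pooling.max_pooling_nd import MaxPoolingND
assert isinstance(y.creator_node, MaxPoolingND)
```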
# New Features
- Add an option to invoke extensions before training (3511, thanks wkentaro!)
- Add automatic management of snapshots (deletion and load) (6856)
- Add `chainerx.repeat` (7223, thanks durswd!)
- Support mixed indices in `TabularDataset.slice` (7251)
- Add `chainer.dataset.tabular.DelegateDataset` (7276)
- Add `ObservationAggregator` extension to ChainerMN (7302)
- Add strict mode to `scatter_dataset` as well as `scatter_index` (7327)
- Add `chainer.dataset.tabular.from_data` (7361)
- Add `linalg.svd`, `linalg.pinv` to ChainerX (7411, thanks IvanYashchuk!)
- Add `TabularDataset.convert/with_converter` (7428)
- Add `linalg.solve`, `linalg.inv` to ChainerX (7474, thanks IvanYashchuk!)
- Add base `Converter` class (7489)
- Add `chainerx.sigmoid_cross_entropy` (7524, thanks aksub99!)
- Add `chainerx.cumsum` (7558, thanks aksub99!)
- Add `chainerx.nansum` (7719, thanks aksub99!)
- Add `chainerx.nanargmax` and `chainerx.nanargmin` (7755, thanks aksub99!)
- Add LSTM, GRU, and RNN implementations for ChainerX (7764, thanks dido1998!)
- Add `tri*` routines to ChainerX (7791, thanks IvanYashchuk!)
- Add `finalize` method to ChainerMN `CommunicatorBase` class (7814)
- Add `numerical_grad_dtype` to `FunctionTestCase` and `LinkTestCase` (7817)
- Support callable in `tabular.from_data` (7847)
- Add `chainerx.count_nonzero` (7852, thanks aksub99!)
- Implement hooks for memory pool in ChainerX (7898)
- Add `chainerx.flatten` (7901, thanks aksub99!)
- Add `chainerx.ravel` (7904, thanks aksub99!) (a usage sketch of the new NumPy-like routines follows this list)
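The following is a minimal, hedged sketch of a few of the NumPy-like routines added above (`chainerx.cumsum`, `chainerx.count_nonzero`, `chainerx.flatten`, `chainerx.ravel`); the values in the comments assume NumPy-compatible semantics and are illustrative only.

```python
import chainerx as chx

x = chx.arange(6, dtype=chx.float32).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

# Cumulative sum; with no axis given, the flattened array is accumulated
# (NumPy-compatible behavior): [0, 1, 3, 6, 10, 15].
print(chx.cumsum(x))

# Number of non-zero elements (5 here, since only x[0, 0] is zero).
print(chx.count_nonzero(x))

# Both return a 1-D array of shape (6,).
print(chx.flatten(x).shape)
print(chx.ravel(x).shape)
```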
# Enhancements
- Use numbers for input check in `roi_{average|max}_{pooling|align}_2d.py` (5636, thanks knorth55!)
- Warn on `Link.to_gpu` unless it is compatible with `to_device` (5762)
- Change `F.dropout` to use cuDNN by default (7185, thanks crcrpar!)
- Fix Adam FP16 overflow on GPU kernels (7694)
- Improve chainerx import check (7738)
- Make `F.average` as accurate as backend (7758)
- Improve NCCL availability error in `PureNcclCommunicator` (7793)
- Fix `type_check` error message on evaluating bool expression (7795)
- Fix module in msg of `type_check` (7803)
- Use scalar array in `chx.leaky_relu`/`elu` (7816)
- Allow `None` inputs to gradient check and generating `None` gradients in `FunctionTestCase` (7831)
- Display ChainerX availability in `print_runtime_info` (7833)
- Add support for inputs with different dtypes for `linalg.solve` in ChainerX (7840, thanks IvanYashchuk!)
- Fix `F.clip` for NumPy 1.17 (7843)
- Include `rtol * abs(b)` in `allclose` output (7848)
- Fix SLSTM for omitted upstream gradients (7891)
- Fix LSTM for omitted upstream gradients (7896)
- Insert missing spaces between concatenated string literals (7930)
- Fix a typo in a kernel name (7962)
# Bug Fixes
- Fix `TypeError` in `max_pooling_2d` (6835, thanks ishanrai05!)
- Fix multi-device loss scaling (7594)
- Avoid unload module call in `PureNcclCommunicator` (7600)
- Fix decorrelated batch normalization when groups ≠ 1 (7707)
- Fix `create_mnbn_model()` bug (7718)
- Fix `optimizer_hooks.GradientHardClipping` for scalar array (7760)
- Fix "zero division" in resize image (7769, thanks meokz!)
- Fix ChainerX non-native deserialization (7830)
- Fix `backends.copyto` from chainerx to non-chainerx (7835)
- Fix backward of `split_axis` for intel64 when `grad_outputs` contains `None` (7836)
- Support for CUDA async in batched copy (7877)
- Add scatter interface to `CommunicatorBase` (7888)
- Add `DeprecationWarning` to initializer of `BuildingBlock` (7909)
- Fix in-place update of arrays in `Link.serialize` and `optimizers.Adam` (7918)
- Fix precision in `F.max_pooling_2d` (7922)
# Code Fixes
- Avoid using `_fallback_workarounds` in `SpectralNormalization` (7539)
- Create `links.rnn` and `functions.rnn` (7725)
- Add `batched_copy` to all `Communicators` (7761)
- Remove unused lambda capture of `axis` (7799)
- Remove unused argument from decorrelated batch norm (7828)
- Fix copies for `linalg.svd` python bindings layer in ChainerX (7866, thanks IvanYashchuk!)
- Replace `n_layer` with `n_layers` for consistency (7871)
- Rename a variable in CUDA SVD kernel (7921, thanks IvanYashchuk!)
- Refactor `pooling_nd` functions (7938)
- Merge implementation of `F.max_pooling_2d` into `F.max_pooling_nd` (7939)
- Fix typo in comment: unique -> deterministic (7775)
# Documentation
- Fix `static_graph` docs code examples (7875)
- Add 1.17 to supported NumPy versions (7883)
- Add `scatter` to doc (7897)
- Update stable version in README (7948)
# Installation
- Relax typing version requirement in Python 3 (7811)
- Remove mypy from requirements (7812)
- Add OpenMP option for cuSOLVER (7839)
- Fix Windows build of ChainerX (7967, thanks cloudhan!)
# Examples
- Improve VAE example (7250)
- Show prompt in text classification example (7858, thanks UmashankarTriforce!)
# Tests
- Add test to ensure no mutable default arguments (4413)
- Simplify `F.max_pooling_2d` test (6836, thanks ishanrai05!)
- Simplify `F.lstm` test (7808, thanks dido1998!)
- Simplify `F.slstm` test (7805, thanks dido1998!)
- Simplify `F.n_step_rnn` test (7804, thanks dido1998!)
- Simplify `F.n_step_lstm` test (7807, thanks dido1998!)
- Simplify `F.n_step_gru` test (7806, thanks dido1998!)
- Simplify `F.embed_id` test (7903, thanks dido1998!)
- Add ChainerCV's tests to pfnCI (7060)
- Add mixed16 tests to multi-node chain list (7630)
- Add mixed16 tests to collective functions (7633)
- Add mixed16 tests to `point_to_point` communications (7637)
- Add mixed16 tests to `pseudo_connect` (7638)
- Skip flaky `TestConv*TensorCore` (7710)
- Fix test of `chx.reshape` (7762)
- Revert tentative workaround related to OpenSSL (7790)
- Switch current directory in Jenkins tests (7834)
- Fix flaky `TestHuberLoss` (7837)
- Configure tolerances of `F.average_pooling_2d` test (7841)
- Fix `F.clipped_relu` test for NumPy 1.17 (7842)
- Add `test_accuracy.py` to the list of slow test files (7851)
- Fix flaky ChainerX `BatchNorm` test (7857)
- Refactor convolution functions tests (7863)
- Relax tolerances in convolution function tests when using old cuDNN (7864)
- Fix `test_TrilTriu` (7865)
- Fix `chainerx.logsumexp` test tolerance (7867)
- Relax tolerances in convolution link tests when using old cuDNN (7868)
- Relax float16 tolerances in ChainerX binary math tests (7874)
- Add `F.tree_lstm` test for ChainerX (7881, thanks dido1998!)
- Avoid `ndarray.data` access and fix wrong test (7890)
- Sample stable inputs in tests of group normalization (7894)
- Avoid unstable inputs in tests of decorrelated batch normalization (7900)
- Relax fp16 tolerance in `TrueDiv` test (7917)
- Avoid testing `F.cast` from negative floating-point to unsigned (7920)
- Fix tolerance in `L.CRF1d` test (7926)
- Refactor `DecorrelatedBatchNormalizationTest` and add stable input (7932)
- Relax tolerances in old cuDNN convolution tests (7942)
- Fix flaky `chainerx.power` test (7950)
- Increase CPU memory for test instance in PFN CI (7951)
- Relax fp16 tolerances in `TestContrastive` (7953)
- Relax float16 tolerances in `F.batch_inv` test (7971)
# Others
- Drop support for Python 2.7 (7826)