### Fixed
- Fixed `NeptuneLogger` when using DDP ([11030](https://github.com/PyTorchLightning/pytorch-lightning/pull/11030))
- Fixed a bug so that the logger skips logging hyperparameters when there are no hparams ([11105](https://github.com/PyTorchLightning/pytorch-lightning/issues/11105))
- Avoid the deprecated `onnx.export(example_outputs=...)` in torch 1.10 ([11116](https://github.com/PyTorchLightning/pytorch-lightning/pull/11116))
- Fixed an issue when torch-scripting a `LightningModule` after training with `Trainer(sync_batchnorm=True)` ([11078](https://github.com/PyTorchLightning/pytorch-lightning/pull/11078))
- Fixed an `AttributeError` occurring when using a `CombinedLoader` (multiple dataloaders) for prediction ([11111](https://github.com/PyTorchLightning/pytorch-lightning/pull/11111))
- Fixed bug where `Trainer(track_grad_norm=..., logger=False)` would fail ([11114](https://github.com/PyTorchLightning/pytorch-lightning/pull/11114))
- Fixed an incorrect warning being produced by the model summary when using `bf16` precision on CPU ([11161](https://github.com/PyTorchLightning/pytorch-lightning/pull/11161))
### Changed
- DeepSpeed no longer requires the `LightningModule` to be partitioned with ZeRO Stage 3 ([10655](https://github.com/PyTorchLightning/pytorch-lightning/pull/10655))
- The `ModelCheckpoint` callback now saves and restores attributes `best_k_models`, `kth_best_model_path`, `kth_value`, and `last_model_path` ([10995](https://github.com/PyTorchLightning/pytorch-lightning/pull/10995))
### Contributors
awaelchli borchero carmocca guyang3532 kaushikb11 ORippler Raalsky rohitgr7 SeanNaren
_If we forgot someone because their commit email didn't match their GitHub account, let us know :]_