Lightning

Latest version: v2.5.0.post0


1.6.0

The core team is excited to announce the PyTorch Lightning 1.6 release ⚡

- [Highlights](highlights)
- [Backward Incompatible Changes](bc-changes)
- [Full Changelog](changelog)
- [Contributors](contributors)

<a name="highlights"></a>
Highlights

PyTorch Lightning 1.6 is the work of 99 contributors who have worked on features, bug-fixes, and documentation for a total of over 750 commits since 1.5. This is our most active release yet. Here are some highlights:

Introducing Intel's Habana Accelerator

1.5.10

Fixed

- Fixed an issue where the validation loop would run on restart ([11552](https://github.com/PyTorchLightning/pytorch-lightning/pull/11552))
- The Rich progress bar now correctly shows the `on_epoch` logged values on train epoch end ([11689](https://github.com/PyTorchLightning/pytorch-lightning/pull/11689))
- Fixed an issue to make the `step` argument in `WandbLogger.log_image` work ([11716](https://github.com/PyTorchLightning/pytorch-lightning/pull/11716))
- Fixed `restore_optimizers` for mapping states ([11757](https://github.com/PyTorchLightning/pytorch-lightning/pull/11757))
- With `DPStrategy`, the batch is no longer explicitly moved to the device ([11780](https://github.com/PyTorchLightning/pytorch-lightning/pull/11780))
- Fixed an issue where the validation progress bar would disappear after `trainer.validate()` ([11700](https://github.com/PyTorchLightning/pytorch-lightning/pull/11700))
- Fixed supporting remote filesystems with `Trainer.weights_save_path` for fault-tolerant training ([11776](https://github.com/PyTorchLightning/pytorch-lightning/pull/11776))
- Fixed check for available modules ([11526](https://github.com/PyTorchLightning/pytorch-lightning/pull/11526))
- Fixed bug where the path for "last" checkpoints was not getting saved correctly which caused newer runs to not remove the previous "last" checkpoint ([11481](https://github.com/PyTorchLightning/pytorch-lightning/pull/11481))
- Fixed bug where the path for best checkpoints was not getting saved correctly when no metric was monitored which caused newer runs to not use the best checkpoint ([11481](https://github.com/PyTorchLightning/pytorch-lightning/pull/11481))
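The two checkpoint-path fixes above come down to remembering the previously saved "last" checkpoint path so a later save can remove it. A minimal sketch of that bookkeeping, using a hypothetical `LastCheckpointTracker` (not Lightning's actual implementation):

```python
from pathlib import Path
import tempfile

class LastCheckpointTracker:
    """Remembers the path of the previous 'last' checkpoint so each new
    save can delete it, mirroring the bookkeeping the fix restored."""

    def __init__(self):
        self.last_model_path = ""  # if this is not persisted, stale files pile up

    def save_last(self, directory: Path, epoch: int) -> Path:
        new_path = directory / f"last-epoch={epoch}.ckpt"
        new_path.write_text("fake checkpoint payload")
        # Remove the previous 'last' checkpoint, if any.
        if self.last_model_path and Path(self.last_model_path) != new_path:
            Path(self.last_model_path).unlink(missing_ok=True)
        self.last_model_path = str(new_path)
        return new_path

with tempfile.TemporaryDirectory() as tmp_dir:
    tmp = Path(tmp_dir)
    tracker = LastCheckpointTracker()
    tracker.save_last(tmp, epoch=0)
    tracker.save_last(tmp, epoch=1)
    remaining = sorted(p.name for p in tmp.glob("*.ckpt"))
    print(remaining)  # only the newest 'last' checkpoint survives
```

The bug was precisely that `last_model_path` was not saved correctly, so a newer run never knew which old file to remove.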

Contributors

ananthsub borda circlecrystal NathanGodey nithinraok rohitgr7


_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.9

Fixed

- Pinned `sphinx-autodoc-typehints` to versions below 1.15 ([11400](https://github.com/PyTorchLightning/pytorch-lightning/pull/11400))
- Skipped testing with PyTorch 1.7 and Python 3.9 on Ubuntu ([11217](https://github.com/PyTorchLightning/pytorch-lightning/pull/11217))
- Fixed type promotion when tensors of higher category than float are logged ([11401](https://github.com/PyTorchLightning/pytorch-lightning/pull/11401))
- Fixed the format of the configuration saved automatically by the CLI's `SaveConfigCallback` ([11532](https://github.com/PyTorchLightning/pytorch-lightning/pull/11532))

Changed

- Changed `LSFEnvironment` to use `LSB_DJOB_RANKFILE` environment variable instead of `LSB_HOSTS` for determining node rank and main address ([10825](https://github.com/PyTorchLightning/pytorch-lightning/pull/10825))
- Disabled sampler replacement when using `IterableDataset` ([11507](https://github.com/PyTorchLightning/pytorch-lightning/pull/11507))
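The `LSFEnvironment` change means the node list is read from the file that `LSB_DJOB_RANKFILE` points to, instead of being parsed from `LSB_HOSTS`. A rough sketch of that idea (simplified, not Lightning's actual parser):

```python
import os
import tempfile

def read_rankfile(path):
    """Return the ordered, de-duplicated list of hosts from an LSF rankfile.
    The first host serves as the main address; a node's rank is its index."""
    hosts = []
    with open(path) as f:
        for line in f:
            host = line.strip()
            if host and host not in hosts:
                hosts.append(host)
    return hosts

# Simulate what LSF would provide via LSB_DJOB_RANKFILE:
# one line per job slot, so hosts repeat once per local process.
with tempfile.NamedTemporaryFile("w", suffix=".rankfile", delete=False) as f:
    f.write("node-a\nnode-a\nnode-b\nnode-b\n")
    rankfile = f.name

os.environ["LSB_DJOB_RANKFILE"] = rankfile
hosts = read_rankfile(os.environ["LSB_DJOB_RANKFILE"])
main_address = hosts[0]
node_rank = hosts.index("node-b")
print(main_address, node_rank)  # node-a 1
os.unlink(rankfile)
```

Reading the rankfile avoids `LSB_HOSTS`, which LSF truncates on large jobs.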

Contributors

ajtritt akihironitta carmocca rohitgr7

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.8

Fixed

- Fixed `LightningCLI` race condition while saving the config ([11199](https://github.com/PyTorchLightning/pytorch-lightning/pull/11199))
- Fixed the default value used with `log(reduce_fx=min|max)` ([11310](https://github.com/PyTorchLightning/pytorch-lightning/pull/11310))
- Fixed data fetcher selection ([11294](https://github.com/PyTorchLightning/pytorch-lightning/pull/11294))
- Fixed a race condition that could result in incorrect (zero) values being observed in prediction writer callbacks ([11288](https://github.com/PyTorchLightning/pytorch-lightning/pull/11288))
- Fixed dataloaders not getting reloaded the correct amount of times when setting `reload_dataloaders_every_n_epochs` and `check_val_every_n_epoch` ([10948](https://github.com/PyTorchLightning/pytorch-lightning/pull/10948))
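The `reduce_fx` fix concerns the starting value used before any data arrives: `min` must start from `+inf` and `max` from `-inf`, otherwise a default of `0` silently wins the reduction. A self-contained sketch of the correct identity values (illustrative, not Lightning's internal accumulator):

```python
import math

# Identity elements for each reduction: they never beat a real logged value.
_DEFAULTS = {min: math.inf, max: -math.inf, sum: 0.0}

def reduce_logged(values, reduce_fx=min):
    """Fold logged values starting from the correct default for reduce_fx."""
    result = _DEFAULTS[reduce_fx]
    for v in values:
        result = reduce_fx(result, v) if reduce_fx in (min, max) else result + v
    return result

print(reduce_logged([3.0, -1.5, 2.0], reduce_fx=min))  # -1.5
print(reduce_logged([3.0, -1.5, 2.0], reduce_fx=max))  # 3.0
# With a wrong default of 0, the max of all-negative values would report 0:
print(reduce_logged([-3.0, -1.5], reduce_fx=max))      # -1.5, not 0
```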


Contributors

adamviola akihironitta awaelchli Borda carmocca edpizzi

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.7

Fixed

- Fixed `NeptuneLogger` when using DDP ([11030](https://github.com/PyTorchLightning/pytorch-lightning/pull/11030))
- Fixed a bug so that hyperparameter logging is skipped when there are no hyperparameters to log ([11105](https://github.com/PyTorchLightning/pytorch-lightning/issues/11105))
- Avoid the deprecated `onnx.export(example_outputs=...)` in torch 1.10 ([11116](https://github.com/PyTorchLightning/pytorch-lightning/pull/11116))
- Fixed an issue when torch-scripting a `LightningModule` after training with `Trainer(sync_batchnorm=True)` ([11078](https://github.com/PyTorchLightning/pytorch-lightning/pull/11078))
- Fixed an `AttributeError` occurring when using a `CombinedLoader` (multiple dataloaders) for prediction ([11111](https://github.com/PyTorchLightning/pytorch-lightning/pull/11111))
- Fixed bug where `Trainer(track_grad_norm=..., logger=False)` would fail ([11114](https://github.com/PyTorchLightning/pytorch-lightning/pull/11114))
- Fixed an incorrect warning being produced by the model summary when using `bf16` precision on CPU ([11161](https://github.com/PyTorchLightning/pytorch-lightning/pull/11161))

Changed

- DeepSpeed no longer requires the `LightningModule` to be partitioned with ZeRO Stage 3 ([10655](https://github.com/PyTorchLightning/pytorch-lightning/pull/10655))
- The `ModelCheckpoint` callback now saves and restores attributes `best_k_models`, `kth_best_model_path`, `kth_value`, and `last_model_path` ([10995](https://github.com/PyTorchLightning/pytorch-lightning/pull/10995))
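The `ModelCheckpoint` change means bookkeeping such as `best_k_models` and `last_model_path` survives a restart. The underlying pattern is a callback that round-trips its attributes through `state_dict()`/`load_state_dict()`; a minimal sketch with a hypothetical `CheckpointState` class (not Lightning's actual callback):

```python
class CheckpointState:
    """Toy callback-style object whose bookkeeping round-trips through
    state_dict()/load_state_dict(), the pattern this release adopts for
    best_k_models, kth_best_model_path, kth_value, and last_model_path."""

    def __init__(self):
        self.best_k_models = {}       # checkpoint path -> monitored value
        self.kth_best_model_path = ""
        self.kth_value = None
        self.last_model_path = ""

    def state_dict(self):
        return {
            "best_k_models": dict(self.best_k_models),
            "kth_best_model_path": self.kth_best_model_path,
            "kth_value": self.kth_value,
            "last_model_path": self.last_model_path,
        }

    def load_state_dict(self, state):
        self.best_k_models = dict(state["best_k_models"])
        self.kth_best_model_path = state["kth_best_model_path"]
        self.kth_value = state["kth_value"]
        self.last_model_path = state["last_model_path"]

before = CheckpointState()
before.best_k_models = {"epoch=3.ckpt": 0.21, "epoch=7.ckpt": 0.18}
before.kth_best_model_path = "epoch=3.ckpt"
before.kth_value = 0.21
before.last_model_path = "last.ckpt"

after = CheckpointState()
after.load_state_dict(before.state_dict())  # as if restored from a checkpoint
print(after.best_k_models == before.best_k_models)
```

Without this round-trip, a resumed run forgets which checkpoints were best and cannot prune or replace them correctly.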


Contributors

awaelchli borchero carmocca guyang3532 kaushikb11 ORippler Raalsky rohitgr7 SeanNaren

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.6

Fixed

- Fixed a bug where the DeepSpeedPlugin arguments `cpu_checkpointing` and `contiguous_memory_optimization` were not being forwarded to deepspeed correctly ([10874](https://github.com/PyTorchLightning/pytorch-lightning/issues/10874))
- Fixed an issue with `NeptuneLogger` causing checkpoints to be uploaded with a duplicated file extension ([11015](https://github.com/PyTorchLightning/pytorch-lightning/issues/11015))
- Fixed support for logging within callbacks returned from `LightningModule` ([10991](https://github.com/PyTorchLightning/pytorch-lightning/pull/10991))
- Fixed running sanity check with `RichProgressBar` ([10913](https://github.com/PyTorchLightning/pytorch-lightning/pull/10913))
- Fixed support for `CombinedLoader` while checking for warning raised with eval dataloaders ([10994](https://github.com/PyTorchLightning/pytorch-lightning/pull/10994))
- The TQDM progress bar now correctly shows the `on_epoch` logged values on train epoch end ([11069](https://github.com/PyTorchLightning/pytorch-lightning/pull/11069))
- Fixed bug where the TQDM updated the training progress bar during `trainer.validate` ([11069](https://github.com/PyTorchLightning/pytorch-lightning/pull/11069))
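Both TQDM fixes revolve around when `on_epoch` metrics become visible: values logged with `on_epoch=True` are accumulated across batches and should surface in the bar only at train epoch end. A rough sketch of that accumulation (simplified, not the actual progress-bar code; mean is Lightning's default `on_epoch` reduction):

```python
class EpochMetrics:
    """Accumulate per-batch values and expose their mean only at epoch end,
    mimicking how on_epoch=True metrics reach the progress bar."""

    def __init__(self):
        self._values = {}

    def log(self, name, value):
        self._values.setdefault(name, []).append(value)

    def on_train_epoch_end(self):
        # Reduce each metric over the epoch's batches with a mean.
        return {name: sum(vals) / len(vals) for name, vals in self._values.items()}

bar_metrics = EpochMetrics()
for loss in [1.0, 0.5, 0.0]:  # one logged value per training batch
    bar_metrics.log("train_loss", loss)
print(bar_metrics.on_train_epoch_end())  # {'train_loss': 0.5}
```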


Contributors

carmocca jona-0 kaushikb11 Raalsky rohitgr7

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
