Lightning


2.1.0.rc1

:rabbit:

2.0.9.post0

2.0.9

App

Fixed

- Replace LightningClient with import from lightning_cloud (18544)

---

Fabric

Fixed

- Fixed an issue causing the `_FabricOptimizer.state` to remain outdated after loading with `load_state_dict` (18488)

---

PyTorch

Fixed

- Fixed an issue that would prevent the user from setting the `log_model` parameter in `WandbLogger` via the `LightningCLI` (18458)
- Fixed the display of `v_num` in the progress bar when running with `Trainer(fast_dev_run=True)` (18491); see the `fast_dev_run` sketch after this list
- Fixed `UnboundLocalError` when running with `python -O` (18496)
- Fixed visual glitch with the TQDM progress bar leaving the validation bar incomplete before switching back to the training display (18503)
- Fixed false positive warning about logging interval when running with `Trainer(fast_dev_run=True)` (18550)

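`Trainer(fast_dev_run=True)` is referenced by two of the fixes above; here is a minimal sketch of that smoke-test mode, where `MyModel` and `MyDataModule` are hypothetical stand-ins for your own classes:

```python
from lightning.pytorch import Trainer

# fast_dev_run runs a single batch of training and validation as a quick smoke test
trainer = Trainer(fast_dev_run=True)
# trainer.fit(MyModel(), datamodule=MyDataModule())  # MyModel/MyDataModule assumed defined elsewhere
```
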
---

Contributors

awaelchli, borda, justusschock, SebastianGer

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

2.0.8

App

Changed

- Change top folder (18212)
- Remove `_handle_is_headless` calls in app run loop (18362)

Fixed

- Refactored the path-to-root resolution to prevent a circular import (18357)

---

Fabric

Changed

- On XLA, avoid setting the global rank before processes have been launched as this will initialize the PJRT computation client in the main process (16966)

Fixed

- Fixed model parameters getting shared between processes when running with `strategy="ddp_spawn"` and `accelerator="cpu"`; this has a necessary memory impact, as parameters are replicated for each process now (18238)
- Removed a false positive warning when using `fabric.no_backward_sync` with XLA strategies (17761); see the sketch after this list
- Fixed issue where Fabric would not initialize the global rank, world size, and rank-zero-only rank after initialization and before launch (16966)
- Fixed FSDP full-precision `param_dtype` training (`16-mixed`, `bf16-mixed` and `32-true` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 (18278)

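A minimal sketch of the `ddp_spawn`/CPU combination and `fabric.no_backward_sync` mentioned above, assuming a toy linear model (all model and data names are illustrative):

```python
import torch
from lightning.fabric import Fabric

def run(fabric):
    model = torch.nn.Linear(4, 4)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    model, optimizer = fabric.setup(model, optimizer)  # each process now holds its own parameter copy
    x = torch.randn(8, 4)
    with fabric.no_backward_sync(model):               # skip gradient sync, e.g. while accumulating
        loss = model(x).sum()
        fabric.backward(loss)
    optimizer.step()

if __name__ == "__main__":
    Fabric(accelerator="cpu", strategy="ddp_spawn", devices=2).launch(run)
```
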
---

PyTorch

Changed

- On XLA, avoid setting the global rank before processes have been launched as this will initialize the PJRT computation client in the main process (16966)
- Fixed an inefficiency in the rich progress bar (18369)

Fixed

- Fixed FSDP full-precision `param_dtype` training (`16-mixed` and `bf16-mixed` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 (18278)
- Fixed an issue that prevented the use of custom logger classes without an `experiment` property defined (18093)
- Fixed setting the tracking URI in `MLFlowLogger` for logging artifacts to the MLflow server (18395); see the sketch after this list
- Fixed redundant `iter()` call to dataloader when checking dataloading configuration (18415)
- Fixed model parameters getting shared between processes when running with `strategy="ddp_spawn"` and `accelerator="cpu"`; this has a necessary memory impact, as parameters are replicated for each process now (18238)
- Properly manage `fetcher.done` with `dataloader_iter` (18376)

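A rough sketch of pointing `MLFlowLogger` at a tracking server, relating to the tracking-URI fix above; the URI, experiment name, and `log_model` choice are placeholders:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.loggers import MLFlowLogger

logger = MLFlowLogger(
    experiment_name="demo",
    tracking_uri="http://localhost:5000",  # hypothetical MLflow server
    log_model=True,                        # checkpoints are uploaded as artifacts to that server
)
trainer = Trainer(logger=logger, max_epochs=1)
```
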
---

Contributors

awaelchli, Borda, carmocca, quintenroets, rlizzo, speediedan, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

2.0.7

App

Changed

- Removed the top-level import `lightning.pdb`; import `lightning.app.pdb` instead (18177)
- Client retries forever (18065)

Fixed

- Fixed an issue that would prevent the user from setting the multiprocessing start method after importing lightning (18177); see the sketch below

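A minimal sketch of what the fix above enables: choosing the multiprocessing start method after `lightning` has already been imported:

```python
import multiprocessing

import lightning  # importing lightning no longer locks in a start method

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")
```
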
---

Fabric

Changed

- Disabled the auto-detection of the Kubeflow environment (18137)

Fixed

- Fixed issue where DDP subprocesses that used Hydra would set hydra's working directory to current directory (18145)
- Fixed an issue that would prevent the user from setting the multiprocessing start method after importing lightning (18177)
- Fixed an issue with `Fabric.all_reduce()` not performing an in-place operation consistently across all backends (18235); see the sketch after this list

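A short sketch of the `Fabric.all_reduce()` call mentioned above, using a single CPU process for illustration:

```python
import torch
from lightning.fabric import Fabric

fabric = Fabric(accelerator="cpu", devices=1)
fabric.launch()
metric = torch.tensor([1.0, 2.0])
# the in-place behavior of the reduction is now consistent across backends
reduced = fabric.all_reduce(metric, reduce_op="mean")
```
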
---

PyTorch

Added

- Added `LightningOptimizer.refresh()` to update the `__dict__` in case the optimizer it wraps has changed its internal state (18280)

Changed

- Disabled the auto-detection of the Kubeflow environment (18137)

Fixed

- Fixed a `Missing folder` exception when using a Google Storage URL as a `default_root_dir` (18088)
- Fixed an issue that would prevent the user from setting the multiprocessing start method after importing lightning (18177)
- Fixed the gradient unscaling logic if the training step skipped backward (by returning `None`) (18267); see the sketch after this list
- Ensure that the closure running inside the optimizer step has gradients enabled, even if the optimizer step has it disabled (18268)
- Fixed an issue that could cause the `LightningOptimizer` wrapper returned by `LightningModule.optimizers()` to have a different internal state than the optimizer it wraps (18280)

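A minimal sketch, assuming a toy module, of skipping backward by returning `None` from `training_step` (the case covered by the gradient-unscaling fix above):

```python
import torch
from lightning.pytorch import LightningModule

class SkippingModule(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).sum()
        if not torch.isfinite(loss):
            return None  # skips backward and the optimizer step for this batch
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```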

---

Contributors

0x404, awaelchli, bilelomrani1, borda, ethanwharris, nisheethlahoti

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

2.0.6

App

- Fixed handling of a `None` request in the file orchestration queue (18111)

---

Fabric

- Fixed `TensorBoardLogger.log_graph` not unwrapping the `_FabricModule` (17844); see the sketch below

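A rough sketch of logging a model graph through Fabric's `TensorBoardLogger`, relating to the unwrapping fix above (toy model, illustrative shapes):

```python
import torch
from lightning.fabric import Fabric
from lightning.fabric.loggers import TensorBoardLogger

logger = TensorBoardLogger(root_dir="logs")
fabric = Fabric(accelerator="cpu", loggers=logger)
fabric.launch()
model = fabric.setup(torch.nn.Linear(4, 2))  # returns a _FabricModule wrapper
logger.log_graph(model, torch.randn(1, 4))   # the wrapper is unwrapped before tracing the graph
```
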
---

PyTorch

- Fixed `LightningCLI` not correctly saving the `seed_everything` setting when `run=True` and `seed_everything=True` (18056); see the sketch after this list
- Fixed validation of non-PyTorch LR schedulers in manual optimization mode (18092)
- Fixed an attribute error for `_FaultTolerantMode` when loading an old checkpoint that pickled the enum (18094)

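A minimal `LightningCLI` sketch relating to the `seed_everything` fix above; the entry point and command line are illustrative:

```python
# Hypothetical entry point, e.g. invoked as:
#   python main.py fit --seed_everything=true --trainer.max_epochs=1
from lightning.pytorch.cli import LightningCLI

def cli_main():
    # with run=True (the default), the chosen seed is now saved in the generated config
    LightningCLI()

if __name__ == "__main__":
    cli_main()
```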

---

Contributors

awaelchli, lantiga, mauvilsa, shihaoyin

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
