Lightning

Latest version: v2.3.0

2.0.9

App

Fixed

- Replaced `LightningClient` with an import from `lightning_cloud` (18544)

---

Fabric

Fixed

- Fixed an issue causing the `_FabricOptimizer.state` to remain outdated after loading with `load_state_dict` (18488)

---

PyTorch

Fixed

- Fixed an issue that would prevent the user from setting the `log_model` parameter in `WandbLogger` via the LightningCLI (18458)
- Fixed the display of `v_num` in the progress bar when running with `Trainer(fast_dev_run=True)` (18491)
- Fixed `UnboundLocalError` when running with `python -O` (18496)
- Fixed visual glitch with the TQDM progress bar leaving the validation bar incomplete before switching back to the training display (18503)
- Fixed false positive warning about logging interval when running with `Trainer(fast_dev_run=True)` (18550)
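
Several of the fixes above involve `Trainer(fast_dev_run=True)`. As a reminder, a minimal sketch of that smoke-test mode (with a hypothetical `MyModel` LightningModule):

```python
# Minimal sketch: fast_dev_run runs a single batch of train/val to catch bugs quickly.
from lightning.pytorch import Trainer

trainer = Trainer(fast_dev_run=True)
# trainer.fit(MyModel())  # MyModel is a hypothetical LightningModule
```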

---

Contributors

awaelchli, borda, justusschock, SebastianGer

_If we missed anyone because their commit email doesn't match their GitHub account, let us know :]_

2.0.8

App

Changed

- Change top folder (18212)
- Remove `_handle_is_headless` calls in app run loop (18362)

Fixed

- Refactored path to root, preventing a circular import (18357)

---

Fabric

Changed

- On XLA, avoid setting the global rank before processes have been launched as this will initialize the PJRT computation client in the main process (16966)

Fixed

- Fixed model parameters getting shared between processes when running with `strategy="ddp_spawn"` and `accelerator="cpu"`; this has a necessary memory impact, as parameters are replicated for each process now (18238)
- Removed a false-positive warning when using `fabric.no_backward_sync` with XLA strategies (17761); see the sketch after this list
- Fixed issue where Fabric would not initialize the global rank, world size, and rank-zero-only rank after initialization and before launch (16966)
- Fixed FSDP full-precision `param_dtype` training (`16-mixed`, `bf16-mixed` and `32-true` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 (18278)
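
For the `fabric.no_backward_sync` entry above, a minimal gradient-accumulation sketch. The toy model, data, and accumulation factor are illustrative; a single CPU process is used, so in a real distributed run (e.g. DDP) the context manager is what actually skips gradient synchronization on the accumulation steps.

```python
# Minimal sketch: accumulate gradients and skip synchronization between steps.
import torch
from lightning.fabric import Fabric

fabric = Fabric(accelerator="cpu", devices=1)
fabric.launch()

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)

accumulate = 4
for step in range(8):
    batch = torch.randn(16, 32)
    is_accumulating = (step + 1) % accumulate != 0
    # Skip gradient sync on all but the last micro-batch of each accumulation window
    with fabric.no_backward_sync(model, enabled=is_accumulating):
        loss = model(batch).sum()
        fabric.backward(loss)
    if not is_accumulating:
        optimizer.step()
        optimizer.zero_grad()
```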

---

PyTorch

Changed

- On XLA, avoid setting the global rank before processes have been launched as this will initialize the PJRT computation client in the main process (16966)
- Fix inefficiency in rich progress bar (18369)

Fixed

- Fixed FSDP full-precision `param_dtype` training (`16-mixed` and `bf16-mixed` configurations) to avoid FSDP assertion errors with PyTorch < 2.0 (18278)
- Fixed an issue that prevented the use of custom logger classes without an `experiment` property defined (18093)
- Fixed setting the tracking URI in `MLFlowLogger` for logging artifacts to the MLflow server (18395); see the sketch after this list
- Fixed redundant `iter()` call to dataloader when checking dataloading configuration (18415)
- Fixed model parameters getting shared between processes when running with `strategy="ddp_spawn"` and `accelerator="cpu"`; this has a necessary memory impact, as parameters are replicated for each process now (18238)
- Properly manage `fetcher.done` with `dataloader_iter` (18376)
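
For the `MLFlowLogger` entry above, a minimal sketch pointing the logger at an MLflow tracking server; the experiment name and server address are hypothetical.

```python
# Minimal sketch: set the tracking URI so artifacts (e.g. checkpoints when
# log_model=True) are logged to the MLflow server rather than locally.
from lightning.pytorch import Trainer
from lightning.pytorch.loggers import MLFlowLogger

mlf_logger = MLFlowLogger(
    experiment_name="my-experiment",       # hypothetical experiment name
    tracking_uri="http://localhost:5000",  # hypothetical MLflow server
    log_model=True,
)
trainer = Trainer(logger=mlf_logger)
```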

---

Contributors

awaelchli, Borda, carmocca, quintenroets, rlizzo, speediedan, tchaton

_If we missed anyone because their commit email doesn't match their GitHub account, let us know :]_

2.0.7

App

Changed

- Removed the top-level import `lightning.pdb`; import `lightning.app.pdb` instead (18177)
- Client retries forever (18065)

Fixed

- Fixed an issue that would prevent the user from setting the multiprocessing start method after importing lightning (18177)
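
A minimal sketch of the pattern this fix re-enables: choosing a multiprocessing start method after `import lightning`.

```python
# Minimal sketch: importing lightning no longer locks in a start method,
# so it can still be chosen afterwards.
import multiprocessing as mp

import lightning  # noqa: F401  (imported first, as in the reported issue)

if __name__ == "__main__":
    mp.set_start_method("spawn")
```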

---

Fabric

Changed

- Disabled the auto-detection of the Kubeflow environment (18137)

Fixed

- Fixed an issue where DDP subprocesses that used Hydra would set Hydra's working directory to the current directory (18145)
- Fixed an issue that would prevent the user from setting the multiprocessing start method after importing lightning (18177)
- Fixed an issue with `Fabric.all_reduce()` not performing an inplace operation for all backends consistently (18235)
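
For the `Fabric.all_reduce()` entry above, a minimal sketch. A single CPU process is used for illustration, so the reduction is trivial, but the pattern is the same in a multi-process run.

```python
# Minimal sketch: reduce a per-process metric; with this fix the in-place
# behavior is consistent across backends, so prefer the returned tensor.
import torch
from lightning.fabric import Fabric

fabric = Fabric(accelerator="cpu", devices=1)
fabric.launch()

local_metric = torch.tensor(float(fabric.global_rank + 1))
reduced = fabric.all_reduce(local_metric, reduce_op="mean")
print(reduced)
```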

---

PyTorch

Added

- Added `LightningOptimizer.refresh()` to update the `__dict__` in case the optimizer it wraps has changed its internal state (18280)

Changed

- Disabled the auto-detection of the Kubeflow environment (18137)

Fixed

- Fixed a `Missing folder` exception when using a Google Storage URL as a `default_root_dir` (18088)
- Fixed an issue that would prevent the user from setting the multiprocessing start method after importing lightning (18177)
- Fixed the gradient unscaling logic if the training step skipped backward (by returning `None`) (18267); see the sketch after this list
- Ensure that the closure running inside the optimizer step has gradients enabled, even if the optimizer step has it disabled (18268)
- Fixed an issue that could cause the `LightningOptimizer` wrapper returned by `LightningModule.optimizers()` to have a different internal state than the optimizer it wraps (18280)
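
For the gradient-unscaling entry above: in automatic optimization, returning `None` from `training_step` skips the backward and optimizer step for that batch. A minimal sketch with a hypothetical module:

```python
# Minimal sketch (hypothetical module): skip an update by returning None; the fix
# concerns the gradient unscaling that follows when this happens under AMP.
import torch
from lightning.pytorch import LightningModule

class SkipStepModule(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).sum()
        if not torch.isfinite(loss):
            return None  # skip backward and optimizer step for this batch
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```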


---

Contributors

0x404, awaelchli, bilelomrani1, borda, ethanwharris, nisheethlahoti

_If we missed anyone because their commit email doesn't match their GitHub account, let us know :]_

2.0.6

App

- Fixed handling a `None` request in the file orchestration queue (18111)

---

Fabric

- Fixed `TensorBoardLogger.log_graph` not unwrapping the `_FabricModule` (17844)

---

PyTorch

- Fixed `LightningCLI` not correctly saving the `seed_everything` value when `run=True` and `seed_everything=True` (18056)
- Fixed validation of non-PyTorch LR schedulers in manual optimization mode (18092)
- Fixed an attribute error for `_FaultTolerantMode` when loading an old checkpoint that pickled the enum (18094)


---

Contributors

awaelchli, lantiga, mauvilsa, shihaoyin

_If we missed anyone because their commit email doesn't match their GitHub account, let us know :]_

2.0.5

App

Added

- Plugin: store source app (17892)
- Added colocation identifier (16796)
- Added exponential backoff to HTTPQueue put (18013)
- Content for plugins (17243)

Changed

- Save a reference to created tasks, to avoid tasks disappearing (17946)

---

Fabric

Added

- Added validation against misconfigured device selection when using the DeepSpeed strategy (17952)

Changed

- Avoid info message when loading 0 entry point callbacks (17990)

Fixed

- Fixed the emission of a false-positive warning when calling a method on the Fabric-wrapped module that accepts no arguments (17875)
- Fixed check for FSDP's flat parameters in all parameter groups (17914)
- Fixed automatic step tracking in Fabric's CSVLogger (17942)
- Fixed an issue causing the `torch.set_float32_matmul_precision` info message to show multiple times (17960)
- Fixed loading model state when `Fabric.load()` is called after `Fabric.setup()` (17997)
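
For the last entry above, a minimal sketch of the setup-then-load ordering the fix addresses; the checkpoint path is hypothetical.

```python
# Minimal sketch: set up the model/optimizer first, then load the checkpoint
# into the already wrapped objects.
import torch
from lightning.fabric import Fabric

fabric = Fabric(accelerator="cpu", devices=1)
fabric.launch()

model = torch.nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = fabric.setup(model, optimizer)

state = {"model": model, "optimizer": optimizer}
fabric.load("path/to/checkpoint.ckpt", state)  # hypothetical path
```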

---

PyTorch

Fixed

- Fixed delayed creation of experiment metadata and checkpoint/log dir name when using `WandbLogger` (17818)
- Fixed incorrect parsing of arguments when augmenting exception messages in DDP (17948)
- Fixed an issue causing the `torch.set_float32_matmul_precision` info message to show multiple times (17960)
- Added the missing `map_location` argument to the `LightningDataModule.load_from_checkpoint` function (17950); see the sketch after this list
- Fixed support for `neptune-client` (17939)
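
A minimal sketch of the `map_location` argument mentioned above, with a hypothetical datamodule and checkpoint path:

```python
# Minimal sketch: load datamodule hyperparameters from a checkpoint while
# remapping any stored tensors to CPU.
import torch
from lightning.pytorch import LightningDataModule

class MyDataModule(LightningDataModule):  # hypothetical datamodule
    def __init__(self, batch_size: int = 32):
        super().__init__()
        self.save_hyperparameters()

dm = MyDataModule.load_from_checkpoint(
    "path/to/checkpoint.ckpt",         # hypothetical path
    map_location=torch.device("cpu"),
)
```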


---

Contributors

anio, awaelchli, borda, ethanwharris, lantiga, nicolai86, rjarun8, schmidt-ai, schuhschuh, wouterzwerink, yurijmikhalevich

_If we missed anyone because their commit email doesn't match their GitHub account, let us know :]_

2.0.4

App

Fixed

- Bumped several dependencies to address security vulnerabilities

---

Fabric

Fixed

- Fixed validation of parameters of `plugins.precision.MixedPrecision` (17687)
- Fixed an issue with HPU imports leading to performance degradation (17788)

---

PyTorch

Changed

- Changes to the `NeptuneLogger` (16761):
* It now supports neptune-client 0.16.16 and neptune >=1.0, and we have replaced the `log()` method with `append()` and `extend()`.
  * It now accepts a namespace `Handler` as an alternative to `Run` for the `run` argument. This means you can construct it as `NeptuneLogger(run=run["some/namespace"])` to log everything to the `some/namespace/` location of the run.
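
A minimal sketch of the namespace-handler form described above, assuming a configured Neptune project and API token:

```python
# Minimal sketch: pass a namespace handler instead of a Run so everything is
# logged under the "some/namespace/" location of the run (neptune >= 1.0 API).
import neptune
from lightning.pytorch.loggers import NeptuneLogger

run = neptune.init_run(project="workspace/project")  # hypothetical project
neptune_logger = NeptuneLogger(run=run["some/namespace"])
```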

Fixed

- Fixed validation of parameters of `plugins.precision.MixedPrecisionPlugin` (17687)
- Fixed deriving default map location in `LightningModule.load_from_checkpoint` when there is an extra state (17812)


---

Contributors

akreuzer, awaelchli, borda, jerome-habana, kshitij12345

_If we missed anyone because their commit email doesn't match their GitHub account, let us know :]_
