Lightning

1.3.6

Fixed

- Fixed logs overwriting issue for remote filesystems ([7889](https://github.com/PyTorchLightning/pytorch-lightning/pull/7889))
- Fixed `DataModule.prepare_data` to only be called on the global rank 0 process ([7945](https://github.com/PyTorchLightning/pytorch-lightning/pull/7945))
- Fixed setting `worker_init_fn` to seed dataloaders correctly when using DDP ([7942](https://github.com/PyTorchLightning/pytorch-lightning/pull/7942)); see the seeding sketch after this list
- Fixed `BaseFinetuning` callback to properly handle parent modules with parameters ([7931](https://github.com/PyTorchLightning/pytorch-lightning/pull/7931))
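
For context, a minimal seeding sketch, not from the release notes, assuming the `seed_everything(..., workers=True)` API of the 1.3 series; the tiny model and dataset are stand-ins. With `workers=True`, the Trainer installs a `worker_init_fn` that derives a distinct, reproducible seed for each dataloader worker from the global rank and worker id, the behavior [7942](https://github.com/PyTorchLightning/pytorch-lightning/pull/7942) corrects under DDP.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

pl.seed_everything(42, workers=True)  # opt in to Trainer-managed worker seeding

dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
train_loader = DataLoader(dataset, batch_size=8, num_workers=4)

trainer = pl.Trainer(gpus=2, accelerator="ddp", max_epochs=1)  # needs 2 GPUs
trainer.fit(LitModel(), train_loader)
```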

Contributors

awaelchli Borda kaushikb11 Queuecumber SeanNaren senarvi speediedan

1.3.5

Added

- Added a warning to Training Step output ([7779](https://github.com/PyTorchLightning/pytorch-lightning/pull/7779)); see the output sketch below
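
For context, a minimal sketch (placeholder module) of the `training_step` outputs the hook produces: Lightning accepts a plain loss tensor or a dict carrying a `"loss"` key. The exact condition the new warning fires on is defined in the PR, not here.

```python
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        # Return the loss tensor directly, or a dict whose "loss" entry
        # Lightning will backpropagate:
        return {"loss": loss}

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```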

Fixed

- Fixed LearningRateMonitor + BackboneFinetuning ([7835](https://github.com/PyTorchLightning/pytorch-lightning/pull/7835)); see the callback sketch after this list
- Minor improvements to `apply_to_collection` and type signature of `log_dict` ([7851](https://github.com/PyTorchLightning/pytorch-lightning/pull/7851))
- Fixed docker versions ([7834](https://github.com/PyTorchLightning/pytorch-lightning/pull/7834))
- Fixed sharded training check for fp16 precision ([7825](https://github.com/PyTorchLightning/pytorch-lightning/pull/7825))
- Fixed support for torch Module type hints in LightningCLI ([7807](https://github.com/PyTorchLightning/pytorch-lightning/pull/7807))
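
For context, a minimal sketch of the callback combination that fix concerns, assuming the 1.3-era `Trainer` API; the module trained with this Trainer must expose a `self.backbone` submodule for `BackboneFinetuning` to manage.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import BackboneFinetuning, LearningRateMonitor

trainer = pl.Trainer(
    max_epochs=20,
    callbacks=[
        BackboneFinetuning(unfreeze_backbone_at_epoch=10),  # keep `model.backbone` frozen for 10 epochs
        LearningRateMonitor(logging_interval="epoch"),      # log learning rates once per epoch
    ],
)
```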

Changed

- Moved `training_output` validation to after `train_step_end` ([7868](https://github.com/PyTorchLightning/pytorch-lightning/pull/7868)); see the hook sketch below
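
For context, a minimal sketch (placeholder module) of where that hook sits; the LightningModule method is `training_step_end`. It receives the `training_step` output, e.g. to reduce per-device results under DP, and the output validation now runs after it rather than before.

```python
import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": torch.nn.functional.mse_loss(self.layer(x), y)}

    def training_step_end(self, step_output):
        # Under DP, "loss" holds one value per device; reduce before the
        # (now later) output validation sees it.
        step_output["loss"] = step_output["loss"].mean()
        return step_output
```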

Contributors

Borda justusschock kandluis mauvilsa shuyingsunshine21 tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.3.4

Fixed

- Fixed the info message shown when the max training time is reached ([7780](https://github.com/PyTorchLightning/pytorch-lightning/pull/7780)); see the `max_time` sketch after this list
- Fixed missing `__len__` method on `IndexBatchSamplerWrapper` ([7681](https://github.com/PyTorchLightning/pytorch-lightning/pull/7681))
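
For context, a minimal sketch of the `max_time` stopping condition whose info message the fix corrects; both forms below are accepted by the 1.3-era `Trainer`.

```python
import pytorch_lightning as pl

trainer = pl.Trainer(max_time="00:12:00:00")    # "DD:HH:MM:SS" string: stop after 12 hours
# trainer = pl.Trainer(max_time={"hours": 12})  # equivalent dict form
```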

Contributors

awaelchli kaushikb11

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.3.3

Changed

- Moved the `untoggle_optimizer(opt_idx)` call out of the closure function ([7563](https://github.com/PyTorchLightning/pytorch-lightning/pull/7563)); see the context sketch below
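
For context, a hedged sketch (placeholder GAN-style module using manual optimization) of what the `toggle_optimizer`/`untoggle_optimizer` pair does: it restricts `requires_grad` to the parameters of the optimizer currently stepping. The change above concerns where Lightning's automatic-optimization loop makes its own `untoggle_optimizer(opt_idx)` call, namely outside the closure it builds around `training_step`.

```python
import torch
import pytorch_lightning as pl

class GAN(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # drive the two optimizers by hand
        self.generator = torch.nn.Linear(8, 8)
        self.discriminator = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        g_opt, d_opt = self.optimizers()

        self.toggle_optimizer(g_opt, 0)  # only generator params require grad
        g_loss = self.discriminator(self.generator(batch)).mean()  # placeholder loss
        g_opt.zero_grad()
        self.manual_backward(g_loss)
        g_opt.step()
        self.untoggle_optimizer(0)       # restore requires_grad flags
        # ...the discriminator step is analogous with `d_opt` and index 1...

    def configure_optimizers(self):
        return (
            torch.optim.Adam(self.generator.parameters(), lr=1e-3),
            torch.optim.Adam(self.discriminator.parameters(), lr=1e-3),
        )
```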

Fixed

- Fixed `ProgressBar` pickling after calling `trainer.predict` ([7608](https://github.com/PyTorchLightning/pytorch-lightning/pull/7608))
- Fixed broadcasting in multi-node, multi-gpu DDP using torch 1.7 ([7592](https://github.com/PyTorchLightning/pytorch-lightning/pull/7592))
- Fixed dataloaders not being reset when tuning the model ([7566](https://github.com/PyTorchLightning/pytorch-lightning/pull/7566))
- Fixed print errors in `ProgressBar` when `trainer.fit` is not called ([7674](https://github.com/PyTorchLightning/pytorch-lightning/pull/7674))
- Fixed global step update when the epoch is skipped ([7677](https://github.com/PyTorchLightning/pytorch-lightning/pull/7677))
- Fixed the training loop's total batch counter when `accumulate_grad_batches` was enabled ([7692](https://github.com/PyTorchLightning/pytorch-lightning/pull/7692)); see the sketch after this list
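
For context, a one-line sketch of the gradient accumulation setting whose batch counter that fix corrects: with `accumulate_grad_batches=4`, gradients from four consecutive batches are accumulated before each optimizer step, for an effective batch size of four times the dataloader's.

```python
import pytorch_lightning as pl

# Accumulate gradients over 4 batches before each optimizer step.
trainer = pl.Trainer(accumulate_grad_batches=4)
```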

Contributors

carmocca kaushikb11 ryanking13 Lucklyric ajtritt yifuwang

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.3.2

Changed

- `DataModule`s now avoid duplicate `{setup,teardown,prepare_data}` calls for the same stage ([7238](https://github.com/PyTorchLightning/pytorch-lightning/pull/7238)); a minimal `DataModule` sketch follows below
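
For context, a minimal `DataModule` sketch; `download_dataset` and `load_split` are hypothetical helpers, not Lightning APIs. The deduplication means hooks such as `setup("fit")` are no longer invoked twice for the same stage.

```python
import pytorch_lightning as pl
from torch.utils.data import DataLoader

class MyDataModule(pl.LightningDataModule):
    def prepare_data(self):
        download_dataset("/data")  # hypothetical helper; runs once, not per GPU

    def setup(self, stage=None):
        # Runs per process; Lightning now skips it if it already ran for `stage`.
        if stage in (None, "fit"):
            self.train_set = load_split("/data", "train")  # hypothetical helper
        if stage in (None, "test"):
            self.test_set = load_split("/data", "test")    # hypothetical helper

    def train_dataloader(self):
        return DataLoader(self.train_set, batch_size=32)
```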

Fixed

- Fixed parsing of multiple training dataloaders ([7433](https://github.com/PyTorchLightning/pytorch-lightning/pull/7433))
- Fixed recursive passing of the `wrong_type` keyword argument in `pytorch_lightning.utilities.apply_to_collection` ([7433](https://github.com/PyTorchLightning/pytorch-lightning/pull/7433)); see the usage sketch after this list
- Fixed setting the correct `DistribType` for the `ddp_cpu` (spawn) backend ([7492](https://github.com/PyTorchLightning/pytorch-lightning/pull/7492))
- Fixed an incorrect number of calls to the LR scheduler when `check_val_every_n_epoch > 1` ([7032](https://github.com/PyTorchLightning/pytorch-lightning/pull/7032))
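
For context, a minimal usage sketch of `apply_to_collection` (imported here from `pytorch_lightning.utilities.apply_func`): it recursively applies a function to every element of a possibly nested collection that matches `dtype`, leaving other elements untouched.

```python
import torch
from pytorch_lightning.utilities.apply_func import apply_to_collection

batch = {"x": torch.ones(2), "meta": {"ids": [1, 2], "mask": torch.zeros(2)}}
# Doubles every tensor in the nested dict; the non-tensor list is left as-is.
doubled = apply_to_collection(batch, dtype=torch.Tensor, function=lambda t: t * 2)
```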

Contributors

alanhdu carmocca justusschock tkng

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.3.1

Fixed

- Fixed DeepSpeed with `IterableDataset`s ([7362](https://github.com/PyTorchLightning/pytorch-lightning/pull/7362))
- Fixed `Trainer.current_epoch` not getting restored after tuning ([7434](https://github.com/PyTorchLightning/pytorch-lightning/pull/7434)); see the tuning sketch after this list
- Fixed the local rank displayed in the console log ([7395](https://github.com/PyTorchLightning/pytorch-lightning/pull/7395))
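
For context, a minimal sketch of the tuning flow that fix concerns, assuming the 1.3-era `auto_lr_find` API; `model` stands in for a LightningModule exposing an `lr` attribute.

```python
import pytorch_lightning as pl

trainer = pl.Trainer(auto_lr_find=True)
trainer.tune(model)  # runs the LR finder; `current_epoch` is restored afterwards
trainer.fit(model)   # `model` is a placeholder LightningModule
```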

Contributors

akihironitta awaelchli leezu

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
