Lightning


1.1.8

Fixed

- Separate epoch validation from step validation (5208)
- Fixed `toggle_optimizers` not handling all optimizer parameters (5775)
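
As a rough illustration of the setting the `toggle_optimizers` fix targets, here is a minimal sketch of a module with two optimizers, where Lightning toggles `requires_grad` so that only the active optimizer's parameters receive gradients. The module and layer sizes are hypothetical and not taken from the release.

```python
import torch
import pytorch_lightning as pl


class TwoOptimizerModule(pl.LightningModule):
    """Hypothetical GAN-style module with two optimizers."""

    def __init__(self):
        super().__init__()
        self.generator = torch.nn.Linear(16, 16)
        self.discriminator = torch.nn.Linear(16, 1)

    def training_step(self, batch, batch_idx, optimizer_idx):
        # With multiple optimizers, Lightning toggles `requires_grad` so that
        # only the parameters owned by the active optimizer get gradients.
        if optimizer_idx == 0:
            return self.discriminator(self.generator(batch)).mean()
        return self.discriminator(self.generator(batch).detach()).mean()

    def configure_optimizers(self):
        opt_g = torch.optim.Adam(self.generator.parameters(), lr=1e-3)
        opt_d = torch.optim.Adam(self.discriminator.parameters(), lr=1e-3)
        return [opt_g, opt_d]
```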

Contributors
ananthsub, rohitgr7

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.7

Fixed

- Fixed `TensorBoardLogger` not closing `SummaryWriter` on `finalize` (5696) (see the sketch after this list)
- Fixed filtering of PyTorch "unsqueeze" warning when using DP (5622)
- Fixed `num_classes` argument in F1 metric (5663)
- Fixed `log_dir` property (5537)
- Fixed a race condition in `ModelCheckpoint` when checking if a checkpoint file exists (5144)
- Remove unnecessary intermediate layers in Dockerfiles (5697)
- Fixed auto learning rate ordering (5638)
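
For context on the `TensorBoardLogger` entry, a minimal sketch of attaching the logger to a `Trainer`; the save directory and experiment name are hypothetical. `finalize` is invoked by the Trainer at the end of a run, and the fix makes it close the underlying `SummaryWriter` so event files are flushed.

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

# Hypothetical save_dir and name; any writable location works.
logger = TensorBoardLogger(save_dir="logs/", name="my_experiment")
trainer = pl.Trainer(logger=logger, max_epochs=1)
# trainer.fit(model)  # at the end of the run, the Trainer calls logger.finalize(...)
```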

Contributors
awaelchli, guillochon, noamzilo, rohitgr7, SkafteNicki, sumanthratna

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.6

Changed

- Increased TPU check timeout from 20s to 100s (5598)
- Ignored `step` param in Neptune logger's log_metric method (5510)
- Pass batch outputs to `on_train_batch_end` instead of `epoch_end` outputs (4369)
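
To make the `on_train_batch_end` change concrete, here is a minimal callback sketch assuming the hook signature of this release line; the callback itself is hypothetical.

```python
import pytorch_lightning as pl


class BatchOutputPrinter(pl.Callback):
    """Hypothetical callback: `outputs` now carries the just-finished batch's
    `training_step` result rather than the accumulated epoch-end outputs."""

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
        print(f"batch {batch_idx} outputs: {outputs}")
```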

Fixed

- Fixed `toggle_optimizer` to reset `requires_grad` state (5574)
- Fixed FileNotFoundError for best checkpoint when using DDP with Hydra (5629)
- Fixed an error when logging a progress bar metric with a reserved name (5620) (see the sketch after this list)
- Fixed `Metric`'s `state_dict` not being included when the metric is a child module (5614)
- Fixed Neptune logger creating multiple experiments when GPUs > 1 (3256)
- Fixed duplicate logs appearing in console when using the python logging module (5509)
- Fixed tensor printing in `trainer.test()` (5138)
- Fixed not using dataloader when `hparams` present (4559)
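
The progress-bar entry above concerns metrics logged with `prog_bar=True`. A minimal, hypothetical module showing that logging call (the layer and loss are placeholders):

```python
import torch
import pytorch_lightning as pl


class ProgressBarMetricModule(pl.LightningModule):
    """Hypothetical module that logs a metric to the progress bar."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).pow(2).mean()
        # `prog_bar=True` sends the value to the progress bar; the fix above
        # concerns clashes with reserved progress-bar names such as `loss`.
        self.log("train_loss", loss, prog_bar=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)
```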

Contributors
awaelchli, bryant1410, lezwon, manipopopo, PiotrJander, psinger, rnett, SeanNaren, swethmandava, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.5

Fixed

- Fixed a visual bug in the progress bar display initialization (4579)
- Fixed logging `on_train_batch_end` in a callback with multiple optimizers (5521)
- Fixed `reinit_scheduler_properties` with correct optimizer (5519)
- Fixed `val_check_interval` with `fast_dev_run` (5540)
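
The `val_check_interval`/`fast_dev_run` fix concerns combining the two Trainer flags; a minimal sketch (the interval value is arbitrary):

```python
import pytorch_lightning as pl

# `fast_dev_run=True` runs a single train/val batch as a smoke test; the fix
# above concerns using it together with an explicit `val_check_interval`.
trainer = pl.Trainer(fast_dev_run=True, val_check_interval=0.25)
```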

Contributors

awaelchli, carmocca, rohitgr7

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.4

Added

- Add automatic optimization property setter to the `LightningModule` ([5169](https://github.com/PyTorchLightning/pytorch-lightning/pull/5169))
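
A minimal sketch of what the new setter enables, assuming the 1.1-era manual-optimization API (`self.optimizers()` and `manual_backward(loss, optimizer)`); the module, layer, and learning rate are hypothetical.

```python
import torch
import pytorch_lightning as pl


class ManualOptimizationModule(pl.LightningModule):
    """Hypothetical module switching to manual optimization via the setter."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)
        # Previously the property had to be overridden; the setter allows a
        # plain assignment.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        loss = self.layer(batch).pow(2).mean()
        self.manual_backward(loss, opt)  # 1.1-era signature takes the optimizer
        opt.step()
        opt.zero_grad()

    def configure_optimizers(self):
        return torch.optim.SGD(self.layer.parameters(), lr=0.1)
```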

Changed

- Changed deprecated `enable_pl_optimizer=True` ([5244](https://github.com/PyTorchLightning/pytorch-lightning/pull/5244))

Fixed

- Fixed `transfer_batch_to_device` for DDP with `len(devices_ids) == 1` ([5195](https://github.com/PyTorchLightning/pytorch-lightning/pull/5195))
- Logging only on `not should_accumulate()` during training ([5417](https://github.com/PyTorchLightning/pytorch-lightning/pull/5417))
- Resolve interpolation bug with Hydra ([5406](https://github.com/PyTorchLightning/pytorch-lightning/pull/5406))
- Check environ before selecting a seed to prevent warning message ([4743](https://github.com/PyTorchLightning/pytorch-lightning/pull/4743))
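
The seed fix concerns `seed_everything` when no explicit seed is given, in which case the `PL_GLOBAL_SEED` environment variable is consulted before a random seed is chosen. A minimal sketch:

```python
from pytorch_lightning import seed_everything

# With no argument, the PL_GLOBAL_SEED environment variable is checked first;
# the fix above concerns the warning emitted during that selection.
seed_everything()
```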

Contributors
ananthsub, SeanNaren, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.3

Added

- Added a check for optimizer attached to `lr_scheduler` (5338)
- Added support for passing non-existing `filepaths` to `resume_from_checkpoint` (4402)
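
A minimal sketch of the `resume_from_checkpoint` usage the entry refers to; the checkpoint path is hypothetical, and per the entry it does not have to exist when the Trainer is built.

```python
import pytorch_lightning as pl

# Hypothetical path; per the 4402 change, a not-yet-existing filepath is accepted.
trainer = pl.Trainer(resume_from_checkpoint="checkpoints/last.ckpt")
```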

Changed

- Skip restore from `resume_from_checkpoint` while `testing` (5161)
- Allowed `log_momentum` for adaptive optimizers in `LearningRateMonitor` (5333) (see the sketch after this list)
- Disabled checkpointing, early stopping and logging with `fast_dev_run` (5277)
- Distributed group defaults to `WORLD` if `None` (5125)
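
A minimal sketch of the `LearningRateMonitor` change referenced above; the logging interval and epoch count are arbitrary.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor

# `log_momentum=True` is now allowed for adaptive optimizers such as Adam.
lr_monitor = LearningRateMonitor(logging_interval="epoch", log_momentum=True)
trainer = pl.Trainer(callbacks=[lr_monitor], max_epochs=3)
```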

Fixed

- Fixed `trainer.test` returning non-test metrics (5214)
- Fixed metric state reset (5273)
- Fixed `--num-nodes` on `DDPSequentialPlugin` (5327)
- Fixed invalid value for `weights_summary` (5296)
- Fixed `Trainer.test` not using the latest `best_model_path` (5161) (see the sketch after this list)
- Fixed existence check for `hparams` not using underlying filesystem (5250)
- Fixed `LightningOptimizer` AMP bug (5191)
- Fixed `_flatten_dict` to cast keys to string (5354)
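
A minimal sketch of the checkpoint-and-test flow that the `best_model_path` fix touches; the monitored metric and epoch count are hypothetical.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# Hypothetical monitored metric. After fit, calling `trainer.test()` with no
# checkpoint argument should use the latest best_model_path tracked here (5161).
checkpoint_cb = ModelCheckpoint(monitor="val_loss", save_top_k=1)
trainer = pl.Trainer(callbacks=[checkpoint_cb], max_epochs=3)
# trainer.fit(model)
# trainer.test()
```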

Contributors

8greg8, haven-jeon, kandluis, marload, rohitgr7, tadejsv, tarepan, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
