Lightning

1.1.4

Added

- Added automatic optimization property setter to the LightningModule ([5169](https://github.com/PyTorchLightning/pytorch-lightning/pull/5169)); see the sketch below
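
To make that entry concrete, here is a minimal, hypothetical sketch of a `LightningModule` that opts into manual optimization through the new property setter. The model, shapes, and optimizer are invented for illustration, and the exact `manual_backward` signature varied slightly across 1.x releases.

```python
import torch
import pytorch_lightning as pl


class ManualOptModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        # Enabled by the setter added in 5169; previously this was a read-only property.
        self.automatic_optimization = False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        loss = torch.nn.functional.cross_entropy(self.layer(x), y)
        self.manual_backward(loss)  # signature differed slightly in early 1.x releases
        opt.step()
        opt.zero_grad()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```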

Changed

- Changed deprecated `enable_pl_optimizer=True` ([5244](https://github.com/PyTorchLightning/pytorch-lightning/pull/5244))

Fixed

- Fixed `transfer_batch_to_device` for DDP with `len(devices_ids) == 1` ([5195](https://github.com/PyTorchLightning/pytorch-lightning/pull/5195))
- Logging only on `not should_accumulate()` during training ([5417](https://github.com/PyTorchLightning/pytorch-lightning/pull/5417))
- Resolve interpolation bug with Hydra ([5406](https://github.com/PyTorchLightning/pytorch-lightning/pull/5406))
- Check environ before selecting a seed to prevent warning message ([4743](https://github.com/PyTorchLightning/pytorch-lightning/pull/4743))

Contributors

ananthsub, SeanNaren, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.3

Added

- Added a check for optimizer attached to `lr_scheduler` (5338)
- Added support for passing non-existing `filepaths` to `resume_from_checkpoint` (4402); see the sketch after this list
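
A rough usage sketch for the `resume_from_checkpoint` entry above, assuming a checkpoint path that may not have been written yet (the path and trainer arguments are made up); when the file is missing, training simply starts from scratch.

```python
import pytorch_lightning as pl

# Assumed location for a checkpoint that a preemptible job always points at;
# with 1.1.3 the file no longer has to exist before the run starts.
ckpt_path = "checkpoints/last.ckpt"

trainer = pl.Trainer(
    max_epochs=10,
    resume_from_checkpoint=ckpt_path,  # falls back to a fresh run if the file is absent
)
# trainer.fit(model)  # `model` would be a LightningModule defined elsewhere
```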

Changed

- Skipped restoring from `resume_from_checkpoint` while `testing` (5161)
- Allowed `log_momentum` for adaptive optimizers in `LearningRateMonitor` (5333); see the sketch after this list
- Disabled checkpointing, early stopping, and logging with `fast_dev_run` (5277)
- Distributed group defaults to `WORLD` if `None` (5125)
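
As a quick illustration of the `log_momentum` change noted above, a `LearningRateMonitor` can now be attached even when the optimizer is adaptive (e.g. Adam, which exposes `betas` rather than `momentum`); the trainer arguments here are placeholders.

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor

# Log both the learning rate and the momentum term at every step.
lr_monitor = LearningRateMonitor(logging_interval="step", log_momentum=True)
trainer = pl.Trainer(callbacks=[lr_monitor], max_epochs=5)
```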

Fixed

- Fixed `trainer.test` returning non-test metrics (5214)
- Fixed metric state reset (5273)
- Fixed `--num-nodes` on `DDPSequentialPlugin` (5327)
- Fixed invalid value for `weights_summary` (5296)
- Fixed `Trainer.test` not using the latest `best_model_path` (5161)
- Fixed existence check for `hparams` not using underlying filesystem (5250)
- Fixed `LightningOptimizer` AMP bug (5191)
- Fixed casting of keys to string in `_flatten_dict` (5354)

Contributors

8greg8, haven-jeon, kandluis, marload, rohitgr7, tadejsv, tarepan, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.2

Overview

Detail changes

Added

- Added support for logging plain numbers with `sync_dist=True` (5080); see the sketch after this list
- Added an offset to the logging step when resuming runs with the Wandb logger (5050)
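
A small sketch of the `sync_dist` entry: logging a plain Python float from a step method and letting Lightning reduce it across processes. The module here is a toy example, not taken from the release.

```python
import torch
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(16, 4)

    def forward(self, x):
        return self.layer(x)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        acc = (self(x).argmax(dim=-1) == y).float().mean().item()  # a plain float, not a tensor
        self.log("val_acc", acc, sync_dist=True)  # reduced across processes under DDP
```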

Removed

- `enable_pl_optimizer=False` by default to temporarily fix AMP issues (5163)

Fixed

- Fixed metric reduction with logging (5150)
- Removed nan loss in manual optimization (5121)
- Properly supported unbalanced logging (5119)
- Fixed hanging in DDP HPC accelerators (5157)
- Fixed the saved filename in `ModelCheckpoint` if it already exists (4861)
- Fixed resetting of `TensorRunningAccum` (5106)
- Updated `DALIClassificationLoader` to not use deprecated arguments (4925)
- Corrected call to `torch.no_grad` (5124)

Contributors

8greg8, ananthsub, borisdayma, gan3sh500, rohitgr7, SeanNaren, tchaton, VinhLoiIT

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.1

Overview

Detail changes

Added

- Added a notebook example reaching a quick baseline of ~94% accuracy on CIFAR10 using ResNet in Lightning (4818)

Changed

- Simplified accelerator steps (5015)
- Refactored loading in the checkpoint connector (4593)

Removed

- Dropped duplicate metrics (5014)
- Removed `beta` argument from the F1 class and functional metrics (5076)

Fixed

- Fixed `trainer` defaulting to `None` in `DDPAccelerator` (4915)
- Fixed `LightningOptimizer` to expose optimizer attributes (5095)
- Do not warn when the `name` key is used in the `lr_scheduler` dict (5057)
- Check if optimizer supports closure (4981)
- Extended `LightningOptimizer` to expose underlying optimizer attributes and updated docs (5095)
- Added deprecated metric utility functions back to functional (5067, 5068)
- Allowed any input in `to_onnx` and `to_torchscript` (4378); see the sketch after this list
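
To illustrate the `to_onnx` / `to_torchscript` entry above, a toy export sketch follows. The model, file name, and sample tensor are invented, and keyword argument names should be checked against the installed version.

```python
import torch
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.layer(x)


model = TinyModel()
sample = torch.randn(1, 8)

# Export with an arbitrary example input rather than a restricted input type.
model.to_onnx("tiny.onnx", input_sample=sample, export_params=True)
scripted = model.to_torchscript(method="trace", example_inputs=sample)
```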

Contributors

Borda, carmocca, hemildesai, rohitgr7, s-rog, tarepan, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1

Release highlights: https://bit.ly/3gyLZpP

Learn more about sharded training: https://bit.ly/2W3hgI0

Detail changes

Added

- Added "monitor" key to saved `ModelCheckpoints` (4383)
- Added `ConfusionMatrix` class interface (4348)
- Added multiclass AUROC metric (4236)
- Added global step indexing to the checkpoint name for a better sub-epoch checkpointing experience (3807)
- Added optimizer hooks in callbacks (4379)
- Added option to log momentum (4384)
- Added `current_score` to `ModelCheckpoint.on_save_checkpoint` (4721)
- Added logging using `self.log` in train and evaluation for epoch end hooks (4913)
- Added ability for DDP plugin to modify optimizer state saving (4675)
- Added casting to python types for NumPy scalars when logging `hparams` (4647)
- Added `prefix` argument in loggers (4557)
- Added printing of total num of params, trainable and non-trainable params in ModelSummary (4521)
- Added `PrecisionRecallCurve, ROC, AveragePrecision` class metric (4549)
- Added custom `Apex` and `NativeAMP` as `Precision plugins` (4355)
- Added `DALI MNIST` example (3721)
- Added `sharded plugin` for DDP for multi-GPU training memory optimizations (4773)
- Added `experiment_id` to the NeptuneLogger (3462)
- Added `Pytorch Geometric` integration example with Lightning (4568)
- Added `all_gather` method to `LightningModule` which allows gradient-based tensor synchronizations for use cases such as negative sampling (5012); see the sketch after this list
- Enabled `self.log` in most functions (4969)
- Added changeable extension variable for `ModelCheckpoint` (4977)
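
The `all_gather` entry above enables gradient-preserving gathers from inside a `LightningModule`. Below is a minimal, made-up sketch of how it might be used to collect negatives from all ranks; the encoder, shapes, and placeholder loss are illustrative only.

```python
import torch
import pytorch_lightning as pl


class ContrastiveModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Linear(128, 64)

    def training_step(self, batch, batch_idx):
        local_emb = self.encoder(batch)                        # (B, 64) on this rank
        all_emb = self.all_gather(local_emb, sync_grads=True)  # gathered across ranks, gradients kept
        # ... a real contrastive loss would score local_emb against the gathered negatives ...
        loss = all_emb.pow(2).mean()  # placeholder loss for the sketch
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```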


Changed

- Removed `multiclass_roc` and `multiclass_precision_recall_curve`, use `roc` and `precision_recall_curve` instead (4549)
- Tuner algorithms will be skipped if `fast_dev_run=True` (3903)
- WandbLogger does not force wandb `reinit` arg to True anymore and creates a run only when needed (4648)
- Changed `automatic_optimization` to be a model attribute (4602)
- Changed `Simple Profiler` report to order by percentage time spent + num calls (4880)
- Simplified optimization logic (4984)
- Classification metrics overhaul (4837)
- Updated `fast_dev_run` to accept an integer representing the number of batches to run (4629); see the sketch after this list
- Refactored optimizer (4658)
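
For the `fast_dev_run` change above, a one-line illustration (the batch count is arbitrary):

```python
import pytorch_lightning as pl

# As of 1.1, `fast_dev_run` also accepts an int: run that many batches of each
# loop as a smoke test instead of a single batch.
trainer = pl.Trainer(fast_dev_run=5)
```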


Deprecated

- Deprecated `prefix` argument in `ModelCheckpoint` (4765)
- Deprecated the old way of assigning hyper-parameters through `self.hparams = ...` (4813); see the sketch after this list
- Deprecated `mode='auto'` from `ModelCheckpoint` and `EarlyStopping` (4695)
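
The hyper-parameter deprecation above points toward `save_hyperparameters()`. Here is a small, invented sketch of the migration:

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self, hidden_dim: int = 64, lr: float = 1e-3):
        super().__init__()
        # old, now deprecated:  self.hparams = {"hidden_dim": hidden_dim, "lr": lr}
        self.save_hyperparameters()  # records the __init__ arguments under self.hparams
        self.layer = torch.nn.Linear(self.hparams.hidden_dim, 1)
```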

Removed

- Removed `reorder` parameter of the `auc` metric (5004)

Fixed

- Added feature to move tensors to CPU before saving (4309)
- Fixed `LoggerConnector` to have logged metrics on root device in DP (4138)
- Automatically convert tensors to contiguous format when calling `gather_all` (4907)
- Fixed `PYTHONPATH` for DDP test model (4528)
- Fixed allowing logger to support indexing (4595)
- Fixed DDP and manual_optimization (4976)

Contributors

ananyahjha93, awaelchli, blatr, Borda, borisdayma, carmocca, ddrevicky, george-gca, gianscarpe, irustandi, janhenriklambrechts, jeremyjordan, justusschock, lezwon, rohitgr7, s-rog, SeanNaren, SkafteNicki, tadejsv, tchaton, williamFalcon, zippeurfou

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.0

Overview
