Lightning


1.1.2

Overview

Detail changes

Added

- Support for logging plain numbers with `sync_dist=True` (5080); see the sketch after this list
- Added offset logging step when resuming for Wandb logger (5050)
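
A minimal sketch of how logging a plain number with `sync_dist=True` might look inside `training_step`; the model, metric names, and values here are illustrative assumptions, not part of the release.

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 10)

    def forward(self, x):
        return self.layer(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        # Tensors and plain Python numbers can both be logged; with
        # sync_dist=True the value is reduced across processes.
        self.log("train_loss", loss, sync_dist=True)
        self.log("batch_size", float(x.size(0)), sync_dist=True)  # plain number
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```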

Removed

- `enable_pl_optimizer=False` by default to temporarily fix AMP issues (5163)

Fixed

- Fixed metric reduction with logging (5150)
- Removed NaN loss in manual optimization (5121); a minimal manual-optimization sketch follows this list
- Unbalanced logging properly supported (5119)
- Fixed hanging in DDP HPC accelerators (5157)
- Fixed the saved filename in `ModelCheckpoint` if it already exists (4861)
- Fixed resetting of `TensorRunningAccum` (5106)
- Updated `DALIClassificationLoader` to not use deprecated arguments (4925)
- Corrected the call to `torch.no_grad` (5124)
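
For context on the manual-optimization fixes above, here is a minimal manual-optimization sketch in the style of the 1.1.x API; the model, loss, and optimizer are illustrative assumptions.

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl


class ManualOptModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    # In 1.1.x, automatic_optimization is a model attribute/property;
    # returning False switches the module to manual optimization.
    @property
    def automatic_optimization(self) -> bool:
        return False

    def training_step(self, batch, batch_idx):
        opt = self.optimizers()
        x, y = batch
        loss = F.cross_entropy(self.layer(x), y)
        # Drive backward/step/zero_grad by hand instead of returning the loss.
        self.manual_backward(loss, opt)
        opt.step()
        opt.zero_grad()
        self.log("train_loss", loss)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```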

Contributors

8greg8, ananthsub, borisdayma, gan3sh500, rohitgr7, SeanNaren, tchaton, VinhLoiIT

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.1

Overview

Detail changes

Added

- Added a notebook example that reaches a quick baseline of ~94% accuracy on CIFAR10 using ResNet in Lightning (4818)

Changed

- Simplify accelerator steps (5015)
- Refactor load in checkpoint connector (4593)

Removed

- Drop duplicate metrics (5014)
- Remove beta arg from F1 class and functional (5076)

Fixed

- Fixed `trainer` defaulting to `None` in `DDPAccelerator` (4915)
- Fixed `LightningOptimizer` to expose optimizer attributes (5095)
- Do not warn when the `name` key is used in the `lr_scheduler` dict (5057); see the `configure_optimizers` sketch after this list
- Check if optimizer supports closure (4981)
- Extended `LightningOptimizer` to expose underlying `Optimizer` attributes and updated docs (5095)
- Add deprecated metric utility functions back to functional (5067, 5068)
- Allow any input in `to_onnx` and `to_torchscript` (4378)
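
As a sketch of where the `name` key lives, a `configure_optimizers` returning a scheduler dict might look like this; the optimizer, scheduler, and name are illustrative assumptions.

```python
import torch
import pytorch_lightning as pl


class SchedulerNameExample(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)
        lr_dict = {
            "scheduler": scheduler,
            "interval": "epoch",
            # "name" customises how LearningRateMonitor labels this scheduler
            # and no longer triggers a warning (5057).
            "name": "my_step_lr",
        }
        return [optimizer], [lr_dict]
```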

Contributors

Borda, carmocca, hemildesai, rohitgr7, s-rog, tarepan, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1

Release highlights: https://bit.ly/3gyLZpP

Learn more about sharded training: https://bit.ly/2W3hgI0

Detail changes

Added

- Added "monitor" key to saved `ModelCheckpoints` (4383)
- Added `ConfusionMatrix` class interface (4348)
- Added multiclass AUROC metric (4236)
- Added global step indexing to the checkpoint name for a better sub-epoch checkpointing experience (3807)
- Added optimizer hooks in callbacks (4379)
- Added option to log momentum (4384)
- Added `current_score` to `ModelCheckpoint.on_save_checkpoint` (4721)
- Added logging using `self.log` in train and evaluation for epoch end hooks (4913)
- Added ability for DDP plugin to modify optimizer state saving (4675)
- Added casting to python types for NumPy scalars when logging `hparams` (4647)
- Added `prefix` argument in loggers (4557)
- Added printing of the total number of parameters, as well as trainable and non-trainable parameters, in `ModelSummary` (4521)
- Added `PrecisionRecallCurve`, `ROC`, `AveragePrecision` class metrics (4549)
- Added custom `Apex` and `NativeAMP` as `Precision plugins` (4355)
- Added `DALI MNIST` example (3721)
- Added `sharded plugin` for DDP for multi-GPU training memory optimizations (4773)
- Added `experiment_id` to the NeptuneLogger (3462)
- Added `Pytorch Geometric` integration example with Lightning (4568)
- Added `all_gather` method to `LightningModule`, which allows gradient-based tensor synchronizations for use cases such as negative sampling (5012); see the sketch after this list
- Enabled `self.log` in most functions (4969)
- Added changeable extension variable for `ModelCheckpoint` (4977)
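
A minimal sketch of the new `LightningModule.all_gather` (5012); the encoder, batch shape, and loss are illustrative assumptions.

```python
import torch
import pytorch_lightning as pl


class AllGatherExample(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.encoder = torch.nn.Linear(16, 8)

    def training_step(self, batch, batch_idx):
        embeddings = self.encoder(batch)
        # Gather the tensor from every process so losses such as negative
        # sampling can see embeddings computed on other GPUs; on a single
        # process this effectively returns the local tensor.
        all_embeddings = self.all_gather(embeddings)
        return all_embeddings.pow(2).mean()  # illustrative loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```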


Changed

- Removed `multiclass_roc` and `multiclass_precision_recall_curve`, use `roc` and `precision_recall_curve` instead (4549)
- Tuner algorithms will be skipped if `fast_dev_run=True` (3903)
- WandbLogger does not force wandb `reinit` arg to True anymore and creates a run only when needed (4648)
- Changed `automatic_optimization` to be a model attribute (4602)
- Changed `SimpleProfiler` report to order by percentage of time spent and number of calls (4880)
- Simplified optimization logic (4984)
- Classification metrics overhaul (4837)
- Updated `fast_dev_run` to accept an integer representing the number of batches (4629); see the sketch after this list
- Refactored optimizer (4658)
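
A short sketch of the updated `fast_dev_run` usage (4629); `MyModel` is a placeholder for any `LightningModule`.

```python
import pytorch_lightning as pl

# fast_dev_run still accepts True (single batch) and False (full run); passing
# an integer now runs exactly that many batches of train/val/test.
trainer = pl.Trainer(fast_dev_run=7)
# trainer.fit(MyModel())  # MyModel: any LightningModule (placeholder)
```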


Deprecated

- Deprecated `prefix` argument in `ModelCheckpoint` (4765)
- Deprecated the old way of assigning hyper-parameters through `self.hparams = ...` (4813); see the sketch after this list
- Deprecated `mode='auto'` from `ModelCheckpoint` and `EarlyStopping` (4695)
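
A sketch of the migration away from assigning `self.hparams = ...` directly (4813), using `save_hyperparameters()` instead; the constructor arguments are illustrative assumptions.

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self, hidden_dim: int = 128, lr: float = 1e-3):
        super().__init__()
        # Deprecated: self.hparams = {"hidden_dim": hidden_dim, "lr": lr}
        # Preferred: capture the __init__ arguments into self.hparams.
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(self.hparams.hidden_dim, 1)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.hparams.lr)
```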

Removed

- Removed `reorder` parameter of the `auc` metric (5004)

Fixed

- Added feature to move tensors to CPU before saving (4309)
- Fixed `LoggerConnector` to have logged metrics on root device in DP (4138)
- Automatically convert tensors to contiguous format in `gather_all` (4907)
- Fixed `PYTHONPATH` for DDP test model (4528)
- Fixed allowing logger to support indexing (4595)
- Fixed DDP and manual_optimization (4976)

Contributors

ananyahjha93, awaelchli, blatr, Borda, borisdayma, carmocca, ddrevicky, george-gca, gianscarpe, irustandi, janhenriklambrechts, jeremyjordan, justusschock, lezwon, rohitgr7, s-rog, SeanNaren, SkafteNicki, tadejsv, tchaton, williamFalcon, zippeurfou

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.1.0

Overview

1.0.8

Detail changes

Added

- Added casting to python types for numpy scalars when logging `hparams` (4647)
- Added warning when progress bar refresh rate is less than 20 on Google Colab to prevent crashing (4654)
- Added `F1` class metric (4656); see the sketch after this list
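
A minimal sketch of the new `F1` class metric; the import path assumes the 1.0-era `pytorch_lightning.metrics` package, and the tensors are illustrative.

```python
import torch
from pytorch_lightning.metrics import F1  # assumed 1.0-era import path

f1 = F1(num_classes=3)
preds = torch.tensor([0, 2, 1, 1])
target = torch.tensor([0, 1, 1, 1])
# Class metrics accumulate state across calls; calling the metric updates
# the state and returns the value for this batch.
print(f1(preds, target))
```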

Changed

- Consistently use `step=trainer.global_step` in `LearningRateMonitor` independently of `logging_interval` (4376)
- Metric states are no longer added to `state_dict` by default (4685)
- Renamed class metric `Fbeta` to `FBeta` (4656)
- Model summary: added 1 decimal place (4745)
- Do not override `PYTHONWARNINGS` (4700)

Fixed

- Fixed checkpoint `hparams` dict casting when `omegaconf` is available (4770)
- Fixed incomplete progress bars when total batches not divisible by refresh rate (4577)
- Updated SSIM metric (4566, 4656)
- Fixed `batch_arg_name` bug by adding `batch_arg_name` to all calls to `_adjust_batch_size` (4812)
- Fixed moving `torchtext` data to GPU (4785)
- Fixed a crash bug in MLFlow logger (4716)

Contributors

awaelchli, jonashaag, jungwhank, M-Salti, moi90, pgagarinov, s-rog, Samyak2, SkafteNicki, teddykoker, ydcjeff

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.0.7

Detail changes

Added

- Added lambda closure to `manual_optimizer_step` (4618)

Changed

- Changed Metrics `persistent` default mode to `False` (4685)


Fixed

- Prevented a crash when `sync_dist=True` on CPU (4626)
- Fixed averaging of progress bar metrics (4534)
- Fixed `setup` callback hook to correctly pass the LightningModule through (4608)
- Allowed decorating model `__init__` while still saving `hparams` inside (4662)
- Fixed `split_idx` set by `LoggerConnector` in `on_trainer_init` to `Trainer` (4697)

Contributors

ananthsub, Borda, SeanNaren, SkafteNicki, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
