Lightning


1.0.6

Detail changes

Added

- Added metrics aggregation in Horovod and fixed early stopping (3775)
- Added `manual_optimizer_step` which works with `AMP Native` and `accumulated_grad_batches` (4485)
- Added `persistent(mode)` method to metrics, to enable and disable metric states being added to `state_dict` (4482); see the sketch after this list
- Added congratulations at the end of our notebooks (4555)
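
A minimal sketch of the new `persistent(mode)` switch from 4482, assuming the pre-`torchmetrics` metrics API that ships with Lightning 1.0.x; the `SumMetric` class is made up for the example:

```python
import torch
from pytorch_lightning.metrics import Metric  # metrics API before the torchmetrics split


class SumMetric(Metric):
    def __init__(self):
        super().__init__()
        # metric state, reduced across processes with the given reduce fx
        self.add_state("total", default=torch.tensor(0.0), dist_reduce_fx="sum")

    def update(self, value: torch.Tensor):
        self.total += value.sum()

    def compute(self):
        return self.total


metric = SumMetric()
metric.persistent(True)           # opt the metric states into state_dict() (4482)
metric(torch.tensor([1.0, 2.0]))
print(metric.state_dict())        # now includes the "total" state
```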

Changed

- Extended `fsspec` support to the tuner (4458)
- Unified SLURM/TorchElastic under the backend plugin (4578, 4580, 4581, 4582, 4583)

Fixed

- Fixed missing functionality in `hpc_load` (4526)
- Fixed metrics states being overridden in DDP mode (4482)
- Fixed `lightning_getattr`, `lightning_hasattr` not finding the correct attributes in datamodule (4347)
- Fixed AMP in automatic optimization via `manual_optimizer_step` (4485)
- Replaced `MisconfigurationException` with a warning in the `ModelCheckpoint` callback (4560)
- Fixed logged keys in mlflow logger (4412)
- Fixed `is_picklable` by catching `AttributeError` (4508)

Contributors

dscarmo, jtamir, kazhang, maxjeblick, rohitgr7, SkafteNicki, tarepan, tchaton, tgaddair, williamFalcon

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.0.5

Detail changes

Added

- Added PyTorch 1.7 Stable support (3821)
- Added timeout for `tpu_device_exists` to ensure process does not hang indefinitely (4340)

Changed

- W&B log in sync with `Trainer` step (4405)
- Hook `on_after_backward` is called only when `optimizer_step` is being called (4439)
- Moved `track_and_norm_grad` into `training loop` and called only when `optimizer_step` is being called (4439)
- Changed type checker with explicit cast of ref_model object (4457)

Deprecated

- Deprecated passing `ModelCheckpoint` instance to `checkpoint_callback` Trainer argument (4336)
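
With this deprecation (4336), a `ModelCheckpoint` instance should be registered through `callbacks` instead; a minimal before/after sketch, assuming the 1.0.x `Trainer` API:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(monitor="val_loss")

# deprecated: passing the instance via `checkpoint_callback`
# trainer = Trainer(checkpoint_callback=checkpoint)

# preferred: register it as a regular callback; `checkpoint_callback`
# remains a plain bool that toggles checkpointing on or off
trainer = Trainer(callbacks=[checkpoint], checkpoint_callback=True)
```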

Fixed

- Disabled saving checkpoints if not trained (4372)
- Fixed error using `auto_select_gpus=True` with `gpus=-1` (4209)
- Disabled training when `limit_train_batches=0` (4371)
- Fixed that metrics do not store computational graph for all seen data (4313)
- Fixed AMP unscale for `on_after_backward` (4439)
- Fixed TorchScript export when module includes Metrics (4428)
- Fixed CSV logger warning (4419)
- Fixed skip DDP parameter sync (4301)

Contributors

ananthsub, awaelchli, borisdayma, carmocca, justusschock, lezwon, rohitgr7, SeanNaren, SkafteNicki, ssaru, tchaton, ydcjeff

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.0.4

Detail changes

Added

- Added `dirpath` and `filename` parameters in `ModelCheckpoint` (4213); see the combined sketch after this list
- Added plugins docs and DDPPlugin to customize ddp across all accelerators (4258)
- Added `strict` option to the scheduler dictionary (3586)
- Added `fsspec` support for profilers (4162)
- Added autogenerated helptext to `Trainer.add_argparse_args` (4344)
- Added support for string values in `Trainer`'s `profiler` parameter (3656)
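
Several of the additions above are user-facing API: `dirpath`/`filename` on `ModelCheckpoint` (4213), the `strict` key in the scheduler dictionary (3586), and string values for the `profiler` argument (3656). A rough sketch of how they fit together, assuming the 1.0.4 API; the monitored key and model internals are placeholders:

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# `dirpath` + `filename` replace the old single `filepath` argument (4213)
checkpoint = ModelCheckpoint(
    dirpath="checkpoints/",
    filename="{epoch:02d}-{val_loss:.3f}",
    monitor="val_loss",
)

# inside a LightningModule, the scheduler dictionary now accepts `strict` (3586):
#
#     def configure_optimizers(self):
#         optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
#         scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
#         return [optimizer], [{
#             "scheduler": scheduler,
#             "monitor": "val_loss",
#             "strict": True,  # raise if "val_loss" was never logged
#         }]

# string values for `profiler` replace the deprecated bool form (3656)
trainer = Trainer(callbacks=[checkpoint], profiler="simple")
```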

Changed

- Improved error messages for invalid `configure_optimizers` returns (3587)
- Allow changing the logged step value in `validation_step` (4130)
- Allow setting `replace_sampler_ddp=True` with a distributed sampler already added (4273)
- Fixed parameter sanitization for `WandbLogger.log_hyperparams` (4320)

Deprecated

- Deprecated `filepath` in `ModelCheckpoint` (4213)
- Deprecated `reorder` parameter of the `auc` metric (4237)
- Deprecated bool values in `Trainer`'s `profiler` parameter (3656)

Fixed

- Fixed setting device ids in DDP (4297)
- Fixed synchronization of best model path in `ddp_accelerator` (4323)
- Fixed `WandbLogger` not uploading checkpoint artifacts at the end of training (4341)

Contributors

ananthsub, awaelchli, carmocca, ddrevicky, louis-she, mauvilsa, rohitgr7, SeanNaren, tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.0.3

Detail changes

Added

- Added persistent flag to `Metric.add_state` (4195)
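
A small sketch of the new `persistent` flag on `Metric.add_state` from 4195, again assuming the pre-`torchmetrics` metrics API; `CountSamples` is made up for the example:

```python
import torch
from pytorch_lightning.metrics import Metric


class CountSamples(Metric):
    def __init__(self):
        super().__init__()
        # persistent=True keeps this state in state_dict(), so it survives
        # checkpoint save/load (4195)
        self.add_state("count", default=torch.tensor(0), dist_reduce_fx="sum", persistent=True)

    def update(self, batch: torch.Tensor):
        self.count += batch.shape[0]

    def compute(self):
        return self.count
```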

Changed

- Used `checkpoint_connector.hpc_save` in SLURM (4217)
- Moved base requirements to the project root (4219)

Fixed

- Fixed `hparams` assignment in `__init__` (4189)
- Fixed overwrite check for model hooks (4010)

Contributors

Borda, EspenHa, teddykoker

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.0.2

Fixes the last major bugs in validation logging and removes the duplicate charts for `metric` / `metric_loss`.
We are making this minor release because correct validation metrics logging is critical.

Detail changes

Added

- Added trace functionality to the function `to_torchscript` (4142)
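
A minimal sketch of the new trace mode in `to_torchscript` (4142), assuming the 1.0.2 API; `TinyModel` is a placeholder module:

```python
import torch
from torch import nn
from pytorch_lightning import LightningModule


class TinyModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 2)

    def forward(self, x):
        return self.layer(x)


model = TinyModel()

# default behaviour: scripting via torch.jit.script
scripted = model.to_torchscript()

# new: tracing, which requires example inputs (4142)
traced = model.to_torchscript(method="trace", example_inputs=torch.randn(1, 32))
torch.jit.save(traced, "tiny_model_traced.pt")
```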

Changed

- Called `on_load_checkpoint` before loading `state_dict` (4057)

Removed

- Removed duplicate metric vs step log for train loop (4173)

Fixed

- Fixed the `self.log` problem in `validation_step()` (4169)
- Fixed `hparams` saving: the state is now saved when `save_hyperparameters()` is called in `__init__` (4163); see the sketch after this list
- Fixed runtime failure while exporting `hparams` to yaml (4158)
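
For the `save_hyperparameters()` fix above (4163), the usual call pattern, as a minimal sketch with a placeholder module:

```python
from torch import nn
from pytorch_lightning import LightningModule


class LitClassifier(LightningModule):
    def __init__(self, hidden_dim: int = 128, learning_rate: float = 1e-3):
        super().__init__()
        # snapshots the __init__ arguments into self.hparams and into checkpoints
        self.save_hyperparameters()
        self.layer = nn.Linear(self.hparams.hidden_dim, 10)
```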

Contributors

Borda, NumesSanguis, rohitgr7, williamFalcon

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.0.1

Obligatory post-1.0 minor release. The main fix makes the `LightningModule` fully compatible with JIT (there were some edge cases we had not covered).
