Lightning

1.4.8

- Fixed error reporting in DDP process reconciliation when processes are launched by an external agent ([#9389](https://github.com/PyTorchLightning/pytorch-lightning/pull/9389))
- Added the `PL_RECONCILE_PROCESS` environment variable to enable process reconciliation regardless of cluster environment settings ([#9389](https://github.com/PyTorchLightning/pytorch-lightning/pull/9389))
- Fixed `add_argparse_args` raising `TypeError` when args are typed as `typing.Generic` in Python 3.6 ([#9554](https://github.com/PyTorchLightning/pytorch-lightning/pull/9554))
- Fixed back-compatibility for saving hyperparameters from a single container and inferring its argument name (see the sketch below) by reverting [#9125](https://github.com/PyTorchLightning/pytorch-lightning/pull/9125) ([#9642](https://github.com/PyTorchLightning/pytorch-lightning/pull/9642))
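
For the `save_hyperparameters` entry above, a minimal sketch of the single-container usage it restores, assuming an `argparse.Namespace` container (the class and field names are illustrative):

```python
from argparse import Namespace

import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self, conf):
        super().__init__()
        # Passing a single container (Namespace/dict) stores its contents
        # under self.hparams; Lightning infers the init argument name
        # ("conf") when writing the checkpoint, the behavior #9642 restored.
        self.save_hyperparameters(conf)


model = LitModel(Namespace(lr=1e-3, hidden_dim=128))
print(model.hparams.lr)  # 0.001
```
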
Contributors

ananthsub akihironitta awaelchli carmocca tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.4.7

- Fixed logging of NaN parameters ([#9364](https://github.com/PyTorchLightning/pytorch-lightning/pull/9364))
- Fixed `replace_sampler` missing the batch size under specific conditions ([#9367](https://github.com/PyTorchLightning/pytorch-lightning/pull/9367))
- Passed init args through to `ShardedDataParallel` (see the sketch below) ([#9483](https://github.com/PyTorchLightning/pytorch-lightning/pull/9483))
- Fixed a collision with a user-provided argument when using ShardedDDP ([#9512](https://github.com/PyTorchLightning/pytorch-lightning/pull/9512))
- Fixed a DeepSpeed crash for RNNs ([#9489](https://github.com/PyTorchLightning/pytorch-lightning/pull/9489))
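
For the two sharded-plugin entries above, a minimal sketch of the configuration they concern on a 1.4.x `Trainer`, assuming `DDPShardedPlugin` forwards extra keyword arguments to fairscale's `ShardedDataParallel` as the linked PRs describe (`reduce_buffer_size` is a fairscale argument used here for illustration):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.plugins import DDPShardedPlugin

# Extra init kwargs are forwarded to fairscale's ShardedDataParallel
# (#9483); #9512 fixed a collision when the user supplies an argument
# the plugin also sets internally, e.g. reduce_buffer_size.
trainer = Trainer(gpus=2, plugins=DDPShardedPlugin(reduce_buffer_size=0))
```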

Contributors

asanakoy awaelchli borisdayma carmocca guotuofeng justusschock kaushikb11 rohitgr7 SeanNaren

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.4.6

- Fixed an issue with export to ONNX format when a model has multiple inputs ([#8800](https://github.com/PyTorchLightning/pytorch-lightning/pull/8800))
- Removed deprecation warnings being called for `on_{task}_dataloader` ([#9279](https://github.com/PyTorchLightning/pytorch-lightning/pull/9279))
- Fixed save/load/resume from checkpoint for the DeepSpeed plugin ([#8397](https://github.com/PyTorchLightning/pytorch-lightning/pull/8397), [#8644](https://github.com/PyTorchLightning/pytorch-lightning/pull/8644), [#8627](https://github.com/PyTorchLightning/pytorch-lightning/pull/8627))
- Fixed `EarlyStopping` running on train epoch end when `check_val_every_n_epoch>1` is set (see the sketch below) ([#9156](https://github.com/PyTorchLightning/pytorch-lightning/pull/9156))
- Fixed an issue with logger outputs not being finalized correctly after prediction runs ([#8333](https://github.com/PyTorchLightning/pytorch-lightning/issues/8333))
- Fixed the Apex and DeepSpeed plugin closure running after the `on_before_optimizer_step` hook ([#9288](https://github.com/PyTorchLightning/pytorch-lightning/issues/9288))
- Fixed the Native AMP plugin closure not running with manual optimization ([#9288](https://github.com/PyTorchLightning/pytorch-lightning/issues/9288))
- Fixed a bug where data-loading functions were not getting passed the correct running stage ([#8858](https://github.com/PyTorchLightning/pytorch-lightning/pull/8858))
- Fixed intra-epoch evaluation outputs staying in memory when the respective `*_epoch_end` hook wasn't overridden ([#9261](https://github.com/PyTorchLightning/pytorch-lightning/pull/9261))
- Fixed error handling in DDP process reconciliation when `_sync_dir` was not initialized ([#9267](https://github.com/PyTorchLightning/pytorch-lightning/pull/9267))
- Fixed the PyTorch Profiler not being enabled for manual optimization ([#9316](https://github.com/PyTorchLightning/pytorch-lightning/pull/9316))
- Fixed inspection of other args when a container is specified in `save_hyperparameters` ([#9125](https://github.com/PyTorchLightning/pytorch-lightning/pull/9125))
- Fixed the signatures of `Timer.on_train_epoch_end` and `StochasticWeightAveraging.on_train_epoch_end` to prevent unwanted deprecation warnings ([#9347](https://github.com/PyTorchLightning/pytorch-lightning/pull/9347))
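
For the `EarlyStopping` entry above, a minimal sketch of the configuration it concerns (the monitored metric name is illustrative):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

# Validation only runs every 5 epochs, so the monitored "val_loss" is
# only produced then; after #9156, early stopping checks on those
# epochs instead of firing on every train epoch end.
trainer = Trainer(
    max_epochs=50,
    check_val_every_n_epoch=5,
    callbacks=[EarlyStopping(monitor="val_loss")],
)
```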

Contributors

ananthsub awaelchli Borda four4fish justusschock kaushikb11 s-rog SeanNaren tangbinh tchaton xerus

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.4.5

- Fixed reduction using `self.log(sync_dist=True, reduce_fx={mean,max})` (see the sketch below) ([#9142](https://github.com/PyTorchLightning/pytorch-lightning/pull/9142))
- Fixed not setting a default value for `max_epochs` if `max_time` was specified on the `Trainer` constructor ([#9072](https://github.com/PyTorchLightning/pytorch-lightning/pull/9072))
- Fixed `CometLogger` modifying the metrics in place; it now creates a copy of the metrics before performing any operations ([#9150](https://github.com/PyTorchLightning/pytorch-lightning/pull/9150))
- Fixed a `DDP` "CUDA error: initialization error" caused by a `copy` instead of a `deepcopy` of `ResultCollection` ([#9239](https://github.com/PyTorchLightning/pytorch-lightning/pull/9239))
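
For the `self.log` entry above, a minimal sketch of the call it fixes, assuming a toy module and a float-tensor batch (model, metric name, and shapes are illustrative):

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        loss = self.layer(batch).mean()
        # With sync_dist=True the logged value is reduced across
        # processes; reduce_fx selects the reduction ("mean" is the
        # default, "max" is also supported), which #9142 fixed.
        self.log("train_loss", loss, sync_dist=True, reduce_fx="max")
        return loss
```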

Contributors

ananthsub bamblebam carmocca daniellepintz ethanwharris kaushikb11 sohamtiwari3120 tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.4.4

- Fixed a bug in the binary search mode of auto batch size scaling where an exception was raised if the first trainer run resulted in OOM (see the sketch below) ([#8954](https://github.com/PyTorchLightning/pytorch-lightning/pull/8954))
- Fixed a bug that caused logging with `log_gpu_memory='min_max'` to not work ([#9013](https://github.com/PyTorchLightning/pytorch-lightning/pull/9013))
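
For the binary-search entry above, a minimal sketch of enabling the batch size finder's `binsearch` mode, assuming a `LightningModule` that exposes a `batch_size` attribute for the tuner to adjust:

```python
from pytorch_lightning import Trainer

# "binsearch" grows the batch size until it hits an out-of-memory
# error, then binary-searches between the last two values; #8954 fixed
# the crash when even the very first run OOMs.
trainer = Trainer(auto_scale_batch_size="binsearch")
trainer.tune(model)  # `model` is assumed to define `self.batch_size`
```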

Contributors

SkafteNicki eladsegal

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.4.3

- Fixed the plateau scheduler stepping on an incomplete epoch (see the sketch below) ([#8861](https://github.com/PyTorchLightning/pytorch-lightning/pull/8861))
- Fixed an infinite loop with `CycleIterator` and multiple loaders ([#8889](https://github.com/PyTorchLightning/pytorch-lightning/pull/8889))
- Fixed `StochasticWeightAveraging` with a list of learning rates not applying them to each param group ([#8747](https://github.com/PyTorchLightning/pytorch-lightning/issues/8747))
- Restored the original loaders if they were replaced by an entrypoint ([#8885](https://github.com/PyTorchLightning/pytorch-lightning/pull/8885))
- Fixed a lost reference to the `_Metadata` object in `ResultMetricCollection` ([#8932](https://github.com/PyTorchLightning/pytorch-lightning/pull/8932))
- Ensured the existence of `DDPPlugin._sync_dir` in `reconciliate_processes` ([#8939](https://github.com/PyTorchLightning/pytorch-lightning/pull/8939))
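
For the plateau-scheduler entry above, a minimal sketch of the configuration it concerns, where the scheduler steps on a monitored metric at epoch end (the metric name is illustrative):

```python
import torch
import pytorch_lightning as pl


class LitModel(pl.LightningModule):
    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
        # The plateau scheduler steps on "val_loss" at epoch end; #8861
        # fixed it also stepping when the epoch had not actually completed.
        return {
            "optimizer": optimizer,
            "lr_scheduler": {"scheduler": scheduler, "monitor": "val_loss"},
        }
```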

Contributors

awaelchli carmocca justusschock tchaton yifuwang

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
