Lightning

Latest version: v2.3.0

1.5.9

Fixed

- Pinned `sphinx-autodoc-typehints` to `<1.15` ([11400](https://github.com/PyTorchLightning/pytorch-lightning/pull/11400))
- Skipped testing with PyTorch 1.7 and Python 3.9 on Ubuntu ([11217](https://github.com/PyTorchLightning/pytorch-lightning/pull/11217))
- Fixed type promotion when tensors of higher category than float are logged ([11401](https://github.com/PyTorchLightning/pytorch-lightning/pull/11401))
- Fixed the format of the configuration saved automatically by the CLI's `SaveConfigCallback` ([11532](https://github.com/PyTorchLightning/pytorch-lightning/pull/11532)) (see the sketch after this list)
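
Below is a minimal, hedged sketch of how a `LightningCLI` entry point looks in the 1.5.x series, to give context for the `SaveConfigCallback` fix above. `MyModel` and `MyDataModule` are placeholder classes invented for illustration; the import path shown is the one used by the 1.5.x series.

```python
# Minimal LightningCLI sketch (PyTorch Lightning 1.5.x import path).
# MyModel and MyDataModule are hypothetical placeholders, not part of the release notes.
import torch
from torch.utils.data import DataLoader
from pytorch_lightning import LightningDataModule, LightningModule
from pytorch_lightning.utilities.cli import LightningCLI


class MyModel(LightningModule):
    def __init__(self, hidden_dim: int = 64, lr: float = 1e-3):
        super().__init__()
        self.save_hyperparameters()
        self.layer = torch.nn.Linear(32, hidden_dim)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.hparams.lr)


class MyDataModule(LightningDataModule):
    def train_dataloader(self):
        return DataLoader(torch.randn(64, 32), batch_size=8)


if __name__ == "__main__":
    # The default SaveConfigCallback writes the resolved config into the trainer's
    # log directory; PR 11532 concerns the format of that saved file.
    # Depending on the version, a subcommand may be required, e.g. `python train.py fit`.
    cli = LightningCLI(MyModel, MyDataModule)
```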

Changed

- Changed `LSFEnvironment` to use `LSB_DJOB_RANKFILE` environment variable instead of `LSB_HOSTS` for determining node rank and main address ([10825](https://github.com/PyTorchLightning/pytorch-lightning/pull/10825))
- Disabled sampler replacement when using `IterableDataset` ([11507](https://github.com/PyTorchLightning/pytorch-lightning/pull/11507)) (see the sketch after this list)
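
As a rough illustration of the sampler-replacement change above: a `DataLoader` built on an `IterableDataset` cannot take a sampler, so Lightning now leaves it untouched. `StreamDataset` is a hypothetical placeholder, and the commented-out `Trainer` call assumes the 1.5-era `strategy`/`devices` flags.

```python
# Sketch: an IterableDataset-backed DataLoader has no sampler for Lightning to replace.
# StreamDataset is a hypothetical placeholder, not from the release notes.
import torch
from torch.utils.data import DataLoader, IterableDataset


class StreamDataset(IterableDataset):
    def __iter__(self):
        # yield a finite stream of (features, target) pairs
        for _ in range(1000):
            yield torch.randn(32), torch.randint(0, 2, (1,)).float()


train_loader = DataLoader(StreamDataset(), batch_size=8)  # passing a sampler here would be invalid
# trainer = pytorch_lightning.Trainer(accelerator="gpu", devices=2, strategy="ddp")
# trainer.fit(model, train_dataloaders=train_loader)  # `model` is a placeholder LightningModule
```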

Contributors

ajtritt akihironitta carmocca rohitgr7

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.8

Fixed

- Fixed `LightningCLI` race condition while saving the config ([11199](https://github.com/PyTorchLightning/pytorch-lightning/pull/11199))
- Fixed the default value used with `log(reduce_fx=min|max)` ([11310](https://github.com/PyTorchLightning/pytorch-lightning/pull/11310))
- Fixed data fetcher selection ([11294](https://github.com/PyTorchLightning/pytorch-lightning/pull/11294))
- Fixed a race condition that could result in incorrect (zero) values being observed in prediction writer callbacks ([11288](https://github.com/PyTorchLightning/pytorch-lightning/pull/11288))
- Fixed dataloaders not getting reloaded the correct number of times when setting `reload_dataloaders_every_n_epochs` and `check_val_every_n_epoch` ([10948](https://github.com/PyTorchLightning/pytorch-lightning/pull/10948)) (see the sketch after this list)
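
For the last entry above, here is a hedged sketch of the two `Trainer` flags involved; `model` is a placeholder, and the values are arbitrary examples.

```python
# Sketch of the Trainer flags from 10948: the train dataloader should be rebuilt
# every 2 epochs regardless of how often validation runs.
import pytorch_lightning as pl

trainer = pl.Trainer(
    max_epochs=10,
    reload_dataloaders_every_n_epochs=2,  # re-create dataloaders every 2 training epochs
    check_val_every_n_epoch=5,            # run validation only every 5th epoch
)
# trainer.fit(model)  # `model` is a placeholder LightningModule
```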


Contributors

adamviola akihironitta awaelchli Borda carmocca edpizzi

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.7

Fixed

- Fixed `NeptuneLogger` when using DDP ([11030](https://github.com/PyTorchLightning/pytorch-lightning/pull/11030))
- Fixed a bug so that hyperparameters are not logged to the logger when there are no hparams to log ([11105](https://github.com/PyTorchLightning/pytorch-lightning/issues/11105))
- Avoid the deprecated `onnx.export(example_outputs=...)` in torch 1.10 ([11116](https://github.com/PyTorchLightning/pytorch-lightning/pull/11116))
- Fixed an issue when torch-scripting a `LightningModule` after training with `Trainer(sync_batchnorm=True)` ([11078](https://github.com/PyTorchLightning/pytorch-lightning/pull/11078))
- Fixed an `AttributeError` occurring when using a `CombinedLoader` (multiple dataloaders) for prediction ([11111](https://github.com/PyTorchLightning/pytorch-lightning/pull/11111))
- Fixed a bug where `Trainer(track_grad_norm=..., logger=False)` would fail ([11114](https://github.com/PyTorchLightning/pytorch-lightning/pull/11114)) (see the sketch after this list)
- Fixed an incorrect warning being produced by the model summary when using `bf16` precision on CPU ([11161](https://github.com/PyTorchLightning/pytorch-lightning/pull/11161))
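
A small sketch of the configuration referenced by the `track_grad_norm` entry above; `model` is a placeholder.

```python
# Sketch of the combination fixed in 11114: gradient-norm tracking with the logger disabled.
import pytorch_lightning as pl

trainer = pl.Trainer(
    track_grad_norm=2,  # track the 2-norm of gradients during training
    logger=False,       # no logger attached; this combination used to raise
)
# trainer.fit(model)  # `model` is a placeholder LightningModule
```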

Changed

- DeepSpeed no longer requires ZeRO Stage 3 partitioning of the `LightningModule` ([10655](https://github.com/PyTorchLightning/pytorch-lightning/pull/10655))
- The `ModelCheckpoint` callback now saves and restores attributes `best_k_models`, `kth_best_model_path`, `kth_value`, and `last_model_path` ([10995](https://github.com/PyTorchLightning/pytorch-lightning/pull/10995)) (see the sketch after this list)
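
A hedged sketch of the `ModelCheckpoint` state covered by the entry above; the attribute names are the ones listed there, while the monitor key and top-k settings are arbitrary examples.

```python
# Sketch: ModelCheckpoint attributes that 10995 includes in checkpoint save/restore.
from pytorch_lightning.callbacks import ModelCheckpoint

checkpoint_cb = ModelCheckpoint(monitor="val_loss", save_top_k=3, save_last=True)
# ... after training with Trainer(callbacks=[checkpoint_cb]) ...
print(checkpoint_cb.best_k_models)        # dict: checkpoint path -> monitored score
print(checkpoint_cb.kth_best_model_path)  # path of the k-th best checkpoint
print(checkpoint_cb.kth_value)            # monitored value of that checkpoint
print(checkpoint_cb.last_model_path)      # path of the most recent "last" checkpoint
```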


Contributors

awaelchli borchero carmocca guyang3532 kaushikb11 ORippler Raalsky rohitgr7 SeanNaren

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.6

Fixed

- Fixed a bug where the `DeepSpeedPlugin` arguments `cpu_checkpointing` and `contiguous_memory_optimization` were not being forwarded to DeepSpeed correctly ([10874](https://github.com/PyTorchLightning/pytorch-lightning/issues/10874)) (see the sketch after this list)
- Fixed an issue with `NeptuneLogger` causing checkpoints to be uploaded with a duplicated file extension ([11015](https://github.com/PyTorchLightning/pytorch-lightning/issues/11015))
- Fixed support for logging within callbacks returned from `LightningModule` ([10991](https://github.com/PyTorchLightning/pytorch-lightning/pull/10991))
- Fixed running sanity check with `RichProgressBar` ([10913](https://github.com/PyTorchLightning/pytorch-lightning/pull/10913))
- Fixed support for `CombinedLoader` while checking for warning raised with eval dataloaders ([10994](https://github.com/PyTorchLightning/pytorch-lightning/pull/10994))
- The TQDM progress bar now correctly shows the `on_epoch` logged values on train epoch end ([11069](https://github.com/PyTorchLightning/pytorch-lightning/pull/11069))
- Fixed a bug where TQDM updated the training progress bar during `trainer.validate` ([11069](https://github.com/PyTorchLightning/pytorch-lightning/pull/11069))
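
Below is a rough sketch of the `DeepSpeedPlugin` arguments named in the first entry of this list. The surrounding `Trainer` settings (devices, precision) are illustrative assumptions, and availability of the exact parameters depends on the installed Lightning and DeepSpeed versions.

```python
# Sketch of the DeepSpeedPlugin activation-checkpointing arguments from 10874;
# after the fix they are forwarded into DeepSpeed's config.
import pytorch_lightning as pl
from pytorch_lightning.plugins import DeepSpeedPlugin

trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    precision=16,
    strategy=DeepSpeedPlugin(
        stage=3,
        partition_activations=True,
        cpu_checkpointing=True,               # offload checkpointed activations to CPU
        contiguous_memory_optimization=True,  # reduce activation memory fragmentation
    ),
)
# trainer.fit(model)  # `model` is a placeholder LightningModule
```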


Contributors

carmocca jona-0 kaushikb11 Raalsky rohitgr7

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.5

Fixed

- Disabled `batch_size` extraction for `torchmetrics` instances because they accumulate the metrics internally ([10815](https://github.com/PyTorchLightning/pytorch-lightning/pull/10815)) (see the sketch after this list)
- Fixed an issue with `SignalConnector` not restoring the default signal handlers on teardown when running on SLURM or with fault-tolerant training enabled ([10611](https://github.com/PyTorchLightning/pytorch-lightning/pull/10611))
- Fixed `SignalConnector._has_already_handler` check for callable type ([10483](https://github.com/PyTorchLightning/pytorch-lightning/pull/10483))
- Fixed an issue so that results are returned separately for each dataloader instead of being duplicated for each one ([10810](https://github.com/PyTorchLightning/pytorch-lightning/pull/10810))
- Improved exception message if `rich` version is less than `10.2.2` ([10839](https://github.com/PyTorchLightning/pytorch-lightning/pull/10839))
- Fixed uploading the best model checkpoint in `NeptuneLogger` ([10369](https://github.com/PyTorchLightning/pytorch-lightning/pull/10369))
- Fixed early schedule reset logic in the PyTorch profiler that was causing a data leak ([10837](https://github.com/PyTorchLightning/pytorch-lightning/pull/10837))
- Fixed a bug that caused incorrect batch indices to be passed to the `BasePredictionWriter` hooks when using a dataloader with `num_workers > 0` ([10870](https://github.com/PyTorchLightning/pytorch-lightning/pull/10870))
- Fixed an issue with item assignment on the logger on rank > 0 for loggers that support it ([10917](https://github.com/PyTorchLightning/pytorch-lightning/pull/10917))
- Fixed importing `torch_xla.debug` for `torch-xla<1.8` ([10836](https://github.com/PyTorchLightning/pytorch-lightning/pull/10836))
- Fixed an issue with `DDPSpawnPlugin` and related plugins leaving a temporary checkpoint behind ([10934](https://github.com/PyTorchLightning/pytorch-lightning/pull/10934))
- Fixed a `TypeError` occurring in the `SignalConnector.teardown()` method ([10961](https://github.com/PyTorchLightning/pytorch-lightning/pull/10961))
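
To illustrate the first entry in this list (`batch_size` extraction and torchmetrics), here is a hedged sketch of logging a metric object from a `LightningModule`. `LitClassifier` is a placeholder, and `torchmetrics.Accuracy()` is shown with the argument-free constructor of older torchmetrics releases (newer ones require a `task` argument).

```python
# Sketch for 10815: a torchmetrics metric accumulates its own state, so Lightning
# skips batch_size extraction when the metric object itself is logged.
import torch
import torchmetrics
from pytorch_lightning import LightningModule


class LitClassifier(LightningModule):  # hypothetical placeholder module
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)
        self.train_acc = torchmetrics.Accuracy()

    def training_step(self, batch, batch_idx):
        x, y = batch
        logits = self.layer(x)
        loss = torch.nn.functional.cross_entropy(logits, y)
        self.train_acc(logits, y)
        # logging the metric object: aggregation happens inside torchmetrics,
        # so no batch_size-weighted averaging is applied by Lightning
        self.log("train_acc", self.train_acc, on_step=True, on_epoch=True)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```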


Contributors

awaelchli carmocca four4fish kaushikb11 lucmos mauvilsa Raalsky rohitgr7

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_

1.5.4

Fixed

- Fixed support for `--key.help=class` with the `LightningCLI` ([10767](https://github.com/PyTorchLightning/pytorch-lightning/pull/10767))
- Fixed `_compare_version` for Python packages ([10762](https://github.com/PyTorchLightning/pytorch-lightning/pull/10762))
- Fixed the `TensorBoardLogger`'s `SummaryWriter` not being closed before spawning the processes ([10777](https://github.com/PyTorchLightning/pytorch-lightning/pull/10777))
- Fixed a consolidation error in Lite when attempting to save the state dict of a sharded optimizer ([10746](https://github.com/PyTorchLightning/pytorch-lightning/pull/10746))
- Fixed the default logging flags for batch hooks associated with training from `on_step=False, on_epoch=True` to `on_step=True, on_epoch=False` ([10756](https://github.com/PyTorchLightning/pytorch-lightning/pull/10756)) (see the sketch after this list)
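
A short sketch of the defaults restored by the last entry above, written with the flags spelled out explicitly; `LitModel` is a placeholder.

```python
# Sketch for 10756: inside training batch hooks, self.log defaults to
# on_step=True, on_epoch=False; the explicit flags below mirror that default.
import torch
from pytorch_lightning import LightningModule


class LitModel(LightningModule):  # hypothetical placeholder module
    def __init__(self):
        super().__init__()
        self.scale = torch.nn.Parameter(torch.ones(1))

    def training_step(self, batch, batch_idx):
        loss = (self.scale * batch).sum()
        # logged every step, not aggregated into an epoch-level value
        self.log("train_loss", loss, on_step=True, on_epoch=False)
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```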


Removed

- Removed PyTorch 1.6 support ([10367](https://github.com/PyTorchLightning/pytorch-lightning/pull/10367), [10738](https://github.com/PyTorchLightning/pytorch-lightning/pull/10738))


Contributors

awaelchli carmocca kaushikb11 rohitgr7 tchaton

_If we forgot someone due to not matching commit email with GitHub account, let us know :]_
