Safety vulnerability ID: 43581
The information on this page was manually curated by our Cybersecurity Intelligence Team.
Pytorch-lightning 1.6.0 updates its `pyyaml` dependency to v5.4 and uses `yaml.safe_load()` to fix the code execution vulnerabilities CVE-2020-1747 and CVE-2020-14343 (illustrated below).
Latest version: 2.4.0
PyTorch Lightning is the lightweight PyTorch wrapper for ML researchers. Scale your models. Write less boilerplate.
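For context, a minimal sketch of the vulnerability class behind these CVEs, assuming untrusted YAML reaches the parser; the payload below is illustrative, not taken from the advisories:

```python
import yaml

# Illustrative untrusted input abusing PyYAML's Python-object tags. With an
# unsafe loader such as yaml.UnsafeLoader this executes os.system("echo pwned")
# while parsing; CVE-2020-1747 and CVE-2020-14343 describe bypasses that made
# even FullLoader exploitable before PyYAML 5.4.
malicious = '!!python/object/apply:os.system ["echo pwned"]'

# yaml.safe_load() only constructs plain YAML types (str, int, list, dict, ...)
# and rejects Python-object tags, so untrusted documents cannot run code.
try:
    yaml.safe_load(malicious)
except yaml.YAMLError as err:
    print(f"unsafe tag rejected: {err}")
```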
Added
- Added a flag `SLURMEnvironment(auto_requeue=True|False)` to control whether Lightning handles the requeuing ([10601](https://github.com/PyTorchLightning/pytorch-lightning/issues/10601)); a usage sketch follows this list
- Fault Tolerant Manual
* Add `_SupportsStateDict` protocol to detect if classes are stateful ([10646](https://github.com/PyTorchLightning/pytorch-lightning/issues/10646))
* Add `_FaultTolerantMode` enum used to track different supported fault tolerant modes ([10645](https://github.com/PyTorchLightning/pytorch-lightning/issues/10645))
* Add a `_rotate_worker_indices` utility to reload the state according to the latest worker ([10647](https://github.com/PyTorchLightning/pytorch-lightning/issues/10647))
* Add stateful workers ([10674](https://github.com/PyTorchLightning/pytorch-lightning/issues/10674))
* Add a utility to collect the states across processes ([10639](https://github.com/PyTorchLightning/pytorch-lightning/issues/10639))
* Add logic to reload the states across data loading components ([10699](https://github.com/PyTorchLightning/pytorch-lightning/issues/10699))
* Clean up some fault tolerant utilities ([10703](https://github.com/PyTorchLightning/pytorch-lightning/issues/10703))
* Enable Fault Tolerant Manual Training ([10707](https://github.com/PyTorchLightning/pytorch-lightning/issues/10707))
* Broadcast the `_terminate_gracefully` to all processes and add support for DDP ([10638](https://github.com/PyTorchLightning/pytorch-lightning/issues/10638))
- Added support for re-instantiation of custom (subclasses of) `DataLoaders` returned in the `*_dataloader()` methods, i.e., automatic replacement of samplers now works with custom types of `DataLoader` ([10680](https://github.com/PyTorchLightning/pytorch-lightning/issues/10680))
- Added a function to validate if fault tolerant training is supported ([10465](https://github.com/PyTorchLightning/pytorch-lightning/issues/10465))
- Show a better error message when a custom `DataLoader` implementation is faulty and Lightning needs to reconstruct it ([10719](https://github.com/PyTorchLightning/pytorch-lightning/issues/10719))
- Save the `Loop`'s state by default in the checkpoint ([10784](https://github.com/PyTorchLightning/pytorch-lightning/issues/10784))
- Added `Loop.replace` to easily switch one loop for another ([10324](https://github.com/PyTorchLightning/pytorch-lightning/issues/10324))
- Added support for `--lr_scheduler=ReduceLROnPlateau` to the `LightningCLI` ([10860](https://github.com/PyTorchLightning/pytorch-lightning/issues/10860))
- Added `LightningCLI.configure_optimizers` to override the `configure_optimizers` return value ([10860](https://github.com/PyTorchLightning/pytorch-lightning/issues/10860))
- Added a warning that shows when `max_epochs` in the `Trainer` is not set ([10700](https://github.com/PyTorchLightning/pytorch-lightning/issues/10700))
- Added support for returning a single Callback from `LightningModule.configure_callbacks` without wrapping it into a list ([11060](https://github.com/PyTorchLightning/pytorch-lightning/issues/11060))
- Added `console_kwargs` for `RichProgressBar` to initialize inner Console ([10875](https://github.com/PyTorchLightning/pytorch-lightning/pull/10875))
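A usage sketch for the new `auto_requeue` flag, assuming the 1.6 import path `pytorch_lightning.plugins.environments`:

```python
import pytorch_lightning as pl
from pytorch_lightning.plugins.environments import SLURMEnvironment

# Keep Lightning's SLURM cluster detection, but leave job requeuing to the
# cluster's own scripts rather than Lightning's handler (#10601).
trainer = pl.Trainer(
    max_epochs=10,  # 1.6 also warns when max_epochs is left unset (#10700)
    plugins=[SLURMEnvironment(auto_requeue=False)],
)
```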
Changed
- Raised an exception in `init_dist_connection()` when torch distributed is not available ([10418](https://github.com/PyTorchLightning/pytorch-lightning/issues/10418))
- The `monitor` argument in the `EarlyStopping` callback is no longer optional ([10328](https://github.com/PyTorchLightning/pytorch-lightning/pull/10328))
- Do not fail if batch size could not be inferred for logging when using DeepSpeed ([10438](https://github.com/PyTorchLightning/pytorch-lightning/issues/10438))
- Raised `MisconfigurationException` when `enable_progress_bar=False` and a progress bar instance has been passed in the callback list ([10520](https://github.com/PyTorchLightning/pytorch-lightning/issues/10520))
- Moved `trainer.connectors.env_vars_connector._defaults_from_env_vars` to `utilities.argparse._defaults_from_env_vars` ([10501](https://github.com/PyTorchLightning/pytorch-lightning/pull/10501))
- Changes in `LightningCLI` required for the new major release of jsonargparse v4.0.0 ([10426](https://github.com/PyTorchLightning/pytorch-lightning/pull/10426))
- Renamed `refresh_rate_per_second` parameter to `refresh_rate` for `RichProgressBar` signature ([10497](https://github.com/PyTorchLightning/pytorch-lightning/pull/10497))
- Moved ownership of the `PrecisionPlugin` into `TrainingTypePlugin` and updated all references ([10570](https://github.com/PyTorchLightning/pytorch-lightning/pull/10570))
- Fault Tolerant Training now relies on `signal.SIGTERM` to gracefully exit instead of `signal.SIGUSR1` ([10605](https://github.com/PyTorchLightning/pytorch-lightning/pull/10605))
- Raised an error if the `batch_size` cannot be inferred from the current batch if it contained a string or was a custom batch object ([10541](https://github.com/PyTorchLightning/pytorch-lightning/pull/10541))
- The validation loop is now disabled when `overfit_batches > 0` is set in the Trainer ([9709](https://github.com/PyTorchLightning/pytorch-lightning/pull/9709))
- Moved optimizer-related logic from `Accelerator` to `TrainingTypePlugin` ([10596](https://github.com/PyTorchLightning/pytorch-lightning/pull/10596))
- Moved `batch_to_device` method from `Accelerator` to `TrainingTypePlugin` ([10649](https://github.com/PyTorchLightning/pytorch-lightning/pull/10649))
- The `DDPSpawnPlugin` no longer overrides the `post_dispatch` plugin hook ([10034](https://github.com/PyTorchLightning/pytorch-lightning/pull/10034))
- The `LightningModule.{add_to_queue,get_from_queue}` hooks no longer get a `torch.multiprocessing.SimpleQueue` and instead receive a list-based queue ([10034](https://github.com/PyTorchLightning/pytorch-lightning/pull/10034))
- Changed `training_step`, `validation_step`, `test_step` and `predict_step` method signatures in `Accelerator` and updated input from caller side ([10908](https://github.com/PyTorchLightning/pytorch-lightning/pull/10908))
- Changed the name of the temporary checkpoint that the `DDPSpawnPlugin` and related plugins save ([10934](https://github.com/PyTorchLightning/pytorch-lightning/pull/10934))
- Redesigned process creation for spawn-based plugins (`DDPSpawnPlugin`, `TPUSpawnPlugin`, etc.) ([10896](https://github.com/PyTorchLightning/pytorch-lightning/pull/10896))
* All spawn-based plugins now spawn processes immediately upon calling `Trainer.{fit,validate,test,predict}`
* The hooks/callbacks `prepare_data`, `setup`, `configure_sharded_model` and `teardown` now run under an initialized process group for spawn-based plugins, just like their non-spawn counterparts
* Some configuration errors that were previously raised as `MisconfigurationException`s will now be raised as `ProcessRaisedException` (torch>=1.8) or as `Exception` (torch<1.8)
* Removed the `TrainingTypePlugin.pre_dispatch()` method and merged it with `TrainingTypePlugin.setup()` ([11137](https://github.com/PyTorchLightning/pytorch-lightning/pull/11137))
- Changed the profiler to index and display the names of the hooks with a new pattern `[<base class>]<class>.<hook name>` ([11026](https://github.com/PyTorchLightning/pytorch-lightning/pull/11026))
- Changed `batch_to_device` entry in profiling from stage-specific to generic, to match profiling of other hooks ([11031](https://github.com/PyTorchLightning/pytorch-lightning/pull/11031))
- Changed the info message for finalizing ddp-spawn worker processes to a debug-level message ([10864](https://github.com/PyTorchLightning/pytorch-lightning/pull/10864))
- Removed duplicated file extension when uploading model checkpoints with `NeptuneLogger` ([11015](https://github.com/PyTorchLightning/pytorch-lightning/pull/11015))
- Removed `__getstate__` and `__setstate__` of `RichProgressBar` ([11100](https://github.com/PyTorchLightning/pytorch-lightning/pull/11100))
- The `DDPPlugin` and `DDPSpawnPlugin` and their subclasses now remove the `SyncBatchNorm` wrappers in `teardown()` to enable proper support at inference after fitting ([11078](https://github.com/PyTorchLightning/pytorch-lightning/pull/11078))
- Moved ownership of the `Accelerator` instance to the `TrainingTypePlugin`; all training-type plugins now take an optional parameter `accelerator` ([11022](https://github.com/PyTorchLightning/pytorch-lightning/pull/11022))
- Renamed the `TrainingTypePlugin` to `Strategy` ([11120](https://github.com/PyTorchLightning/pytorch-lightning/pull/11120)); a migration sketch follows this list
* Renamed `SingleTPUPlugin` to `SingleTPUStrategy` ([11182](https://github.com/PyTorchLightning/pytorch-lightning/pull/11182))
* Renamed the `DDPPlugin` to `DDPStrategy` ([11142](https://github.com/PyTorchLightning/pytorch-lightning/pull/11142))
* Renamed the `DeepSpeedPlugin` to `DeepSpeedStrategy` ([11194](https://github.com/PyTorchLightning/pytorch-lightning/pull/11194))
* Renamed the `IPUPlugin` to `IPUStrategy` ([11193](https://github.com/PyTorchLightning/pytorch-lightning/pull/11193))
* Renamed the `TPUSpawnPlugin` to `TPUSpawnStrategy` ([11190](https://github.com/PyTorchLightning/pytorch-lightning/pull/11190))
* Renamed the `DDPShardedPlugin` to `DDPShardedStrategy` ([11186](https://github.com/PyTorchLightning/pytorch-lightning/pull/11186))
* Renamed the `DDP2Plugin` to `DDP2Strategy` ([11184](https://github.com/PyTorchLightning/pytorch-lightning/pull/11184))
* Renamed the `DDPFullyShardedPlugin` to `DDPFullyShardedStrategy` ([11143](https://github.com/PyTorchLightning/pytorch-lightning/pull/11143))
- Marked the `ResultCollection`, `ResultMetric`, and `ResultMetricCollection` classes as protected ([11130](https://github.com/PyTorchLightning/pytorch-lightning/pull/11130))
- DeepSpeed no longer requires the `LightningModule` to be partitioned for ZeRO Stage 3 ([10655](https://github.com/PyTorchLightning/pytorch-lightning/pull/10655))
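A migration sketch for the plugin-to-strategy renames above, assuming the 1.6 import path `pytorch_lightning.strategies`:

```python
import pytorch_lightning as pl
from pytorch_lightning.strategies import DDPStrategy

# The former DDPPlugin is now DDPStrategy (#11142); strategies are passed
# through the Trainer's `strategy` argument. Extra keyword arguments such as
# find_unused_parameters are forwarded to torch's DistributedDataParallel.
trainer = pl.Trainer(
    accelerator="gpu",
    devices=2,
    strategy=DDPStrategy(find_unused_parameters=False),
)
```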
Deprecated
- Deprecated `ClusterEnvironment.master_{address,port}` in favor of `ClusterEnvironment.main_{address,port}` ([10103](https://github.com/PyTorchLightning/pytorch-lightning/issues/10103))
- Deprecated `DistributedType` in favor of `_StrategyType` ([10505](https://github.com/PyTorchLightning/pytorch-lightning/pull/10505))
- Deprecated the `precision_plugin` constructor argument from `Accelerator` ([10570](https://github.com/PyTorchLightning/pytorch-lightning/pull/10570))
- Deprecated `DeviceType` in favor of `_AcceleratorType` ([10503](https://github.com/PyTorchLightning/pytorch-lightning/pull/10503))
- Deprecated the property `Trainer.slurm_job_id` in favor of the new `SLURMEnvironment.job_id()` method ([10622](https://github.com/PyTorchLightning/pytorch-lightning/pull/10622)); a migration sketch follows this list
- Deprecated the access to the attribute `IndexBatchSamplerWrapper.batch_indices` in favor of `IndexBatchSamplerWrapper.seen_batch_indices` ([10870](https://github.com/PyTorchLightning/pytorch-lightning/pull/10870))
- Deprecated `on_init_start` and `on_init_end` callback hooks ([10940](https://github.com/PyTorchLightning/pytorch-lightning/pull/10940))
- Deprecated `Trainer.call_hook` in favor of `Trainer._call_callback_hooks`, `Trainer._call_lightning_module_hook`, `Trainer._call_ttp_hook`, and `Trainer._call_accelerator_hook` ([10979](https://github.com/PyTorchLightning/pytorch-lightning/pull/10979))
- Deprecated `TrainingTypePlugin.post_dispatch` in favor of `TrainingTypePlugin.teardown` ([10939](https://github.com/PyTorchLightning/pytorch-lightning/pull/10939))
- Deprecated `ModelIO.on_hpc_{save/load}` in favor of `CheckpointHooks.on_{save/load}_checkpoint` ([10911](https://github.com/PyTorchLightning/pytorch-lightning/pull/10911))
- Deprecated `Trainer.run_stage` in favor of `Trainer.{fit,validate,test,predict}` ([11000](https://github.com/PyTorchLightning/pytorch-lightning/pull/11000))
- Deprecated `Trainer.verbose_evaluate` in favor of `EvaluationLoop(verbose=...)` ([10931](https://github.com/PyTorchLightning/pytorch-lightning/pull/10931))
- Deprecated the `Trainer.should_rank_save_checkpoint` property ([11068](https://github.com/PyTorchLightning/pytorch-lightning/pull/11068))
- Deprecated `TrainerCallbackHookMixin` ([11148](https://github.com/PyTorchLightning/pytorch-lightning/pull/11148))
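A migration sketch for one of these deprecations, replacing the `Trainer.slurm_job_id` property; it assumes `SLURMEnvironment.job_id()` can be called without instantiating the environment, as the deprecation entry suggests:

```python
from pytorch_lightning.plugins.environments import SLURMEnvironment

# Deprecated in 1.6 (#10622):
#     job_id = trainer.slurm_job_id

# Replacement: ask the cluster environment directly. Assumed to return the
# SLURM job id, or None outside of a SLURM allocation.
job_id = SLURMEnvironment.job_id()
print(f"running under SLURM job: {job_id}")
```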
Removed
- Removed deprecated parameter `method` in `pytorch_lightning.utilities.model_helpers.is_overridden` ([10507](https://github.com/PyTorchLightning/pytorch-lightning/pull/10507))
- Removed deprecated method `ClusterEnvironment.creates_children` ([10339](https://github.com/PyTorchLightning/pytorch-lightning/issues/10339))
- Removed deprecated `TrainerModelHooksMixin.is_function_implemented` and `TrainerModelHooksMixin.has_arg` ([10322](https://github.com/PyTorchLightning/pytorch-lightning/pull/10322))
- Removed deprecated `pytorch_lightning.utilities.device_dtype_mixin.DeviceDtypeModuleMixin` in favor of `pytorch_lightning.core.mixins.device_dtype_mixin.DeviceDtypeModuleMixin` ([10442](https://github.com/PyTorchLightning/pytorch-lightning/pull/10442))
- Removed deprecated `LightningModule.loaded_optimizer_states_dict` property ([10346](https://github.com/PyTorchLightning/pytorch-lightning/pull/10346))
- Removed deprecated `Trainer.fit(train_dataloader=)`, `Trainer.validate(val_dataloaders=)`, and `Trainer.test(test_dataloader=)` ([10325](https://github.com/PyTorchLightning/pytorch-lightning/pull/10325))
- Removed deprecated `has_prepared_data`, `has_setup_fit`, `has_setup_validate`, `has_setup_test`, `has_setup_predict`, `has_teardown_fit`, `has_teardown_validate`, `has_teardown_test` and `has_teardown_predict` datamodule lifecycle properties ([10350](https://github.com/PyTorchLightning/pytorch-lightning/pull/10350))
- Removed deprecated `every_n_val_epochs` parameter of ModelCheckpoint ([10366](https://github.com/PyTorchLightning/pytorch-lightning/pull/10366))
- Removed deprecated `import pytorch_lightning.profiler.profilers` in favor of `import pytorch_lightning.profiler` ([10443](https://github.com/PyTorchLightning/pytorch-lightning/pull/10443))
- Removed deprecated property `configure_slurm_ddp` from the accelerator connector ([10370](https://github.com/PyTorchLightning/pytorch-lightning/pull/10370))
- Removed deprecated arguments `num_nodes` and `sync_batchnorm` from `DDPPlugin`, `DDPSpawnPlugin`, `DeepSpeedPlugin` ([10357](https://github.com/PyTorchLightning/pytorch-lightning/pull/10357))
- Removed deprecated property `is_slurm_managing_tasks` from `AcceleratorConnector` ([10353](https://github.com/PyTorchLightning/pytorch-lightning/pull/10353))
- Removed deprecated `LightningModule.log(tbptt_reduce_fx, tbptt_reduce_token, sync_dist_op)` ([10423](https://github.com/PyTorchLightning/pytorch-lightning/pull/10423))
- Removed deprecated `Plugin.task_idx` ([10441](https://github.com/PyTorchLightning/pytorch-lightning/pull/10441))
- Removed deprecated method `master_params` from `PrecisionPlugin` ([10372](https://github.com/PyTorchLightning/pytorch-lightning/pull/10372))
- Removed the automatic detachment of "extras" returned from `training_step`. For example, `return {'loss': ..., 'foo': foo.detach()}` will now be necessary if `foo` has gradients which you do not want to store ([10424](https://github.com/PyTorchLightning/pytorch-lightning/pull/10424)); see the sketch after this list
- Removed deprecated passthrough methods and properties from `Accelerator` base class:
* ([10403](https://github.com/PyTorchLightning/pytorch-lightning/pull/10403))
* ([10448](https://github.com/PyTorchLightning/pytorch-lightning/pull/10448))
- Removed deprecated signature for `transfer_batch_to_device` hook. The new argument `dataloader_idx` is now required ([10480](https://github.com/PyTorchLightning/pytorch-lightning/pull/10480))
- Removed deprecated `utilities.distributed.rank_zero_{warn/deprecation}` ([10451](https://github.com/PyTorchLightning/pytorch-lightning/pull/10451))
- Removed deprecated `mode` argument from `ModelSummary` class ([10449](https://github.com/PyTorchLightning/pytorch-lightning/pull/10449))
- Removed deprecated `Trainer.train_loop` property in favor of `Trainer.fit_loop` ([10482](https://github.com/PyTorchLightning/pytorch-lightning/pull/10482))
- Removed deprecated `disable_validation` property from Trainer ([10450](https://github.com/PyTorchLightning/pytorch-lightning/pull/10450))
- Removed deprecated `CheckpointConnector.hpc_load` property in favor of `CheckpointConnector.restore` ([10525](https://github.com/PyTorchLightning/pytorch-lightning/pull/10525))
- Removed deprecated `reload_dataloaders_every_epoch` from `Trainer` in favor of `reload_dataloaders_every_n_epochs` ([10481](https://github.com/PyTorchLightning/pytorch-lightning/pull/10481))
- Removed the `precision_plugin` attribute from `Accelerator` in favor of its equivalent attribute `precision_plugin` in the `TrainingTypePlugin` ([10570](https://github.com/PyTorchLightning/pytorch-lightning/pull/10570))
- Removed `DeepSpeedPlugin.{precision,amp_type,amp_level}` properties ([10657](https://github.com/PyTorchLightning/pytorch-lightning/pull/10657))
- Removed argument `return_result` from the `DDPSpawnPlugin.spawn()` method ([10867](https://github.com/PyTorchLightning/pytorch-lightning/pull/10867))
- Removed the property `TrainingTypePlugin.results` and corresponding properties in subclasses ([10034](https://github.com/PyTorchLightning/pytorch-lightning/pull/10034))
- Removed the `mp_queue` attribute from `DDPSpawnPlugin` and `TPUSpawnPlugin` ([10034](https://github.com/PyTorchLightning/pytorch-lightning/pull/10034))
- Removed unnecessary `_move_optimizer_state` method overrides from `TPUSpawnPlugin` and `SingleTPUPlugin` ([10849](https://github.com/PyTorchLightning/pytorch-lightning/pull/10849))
- Removed `should_rank_save_checkpoint` property from `TrainingTypePlugin` ([11070](https://github.com/PyTorchLightning/pytorch-lightning/pull/11070))
- Removed `model_sharded_context` method from `Accelerator` ([10886](https://github.com/PyTorchLightning/pytorch-lightning/pull/10886))
- Removed method `pre_dispatch` from the `PrecisionPlugin` ([10887](https://github.com/PyTorchLightning/pytorch-lightning/pull/10887))
- Removed the `setup_optimizers_in_pre_dispatch` method from the strategies; the same logic is now achieved in the `setup` and `pre_dispatch` methods ([10906](https://github.com/PyTorchLightning/pytorch-lightning/pull/10906))
- Removed methods `pre_dispatch`, `dispatch` and `post_dispatch` from the `Accelerator` ([10885](https://github.com/PyTorchLightning/pytorch-lightning/pull/10885))
- Removed the methods `training_step`, `test_step`, `validation_step` and `predict_step` from the `Accelerator` ([10890](https://github.com/PyTorchLightning/pytorch-lightning/pull/10890))
- Removed `TrainingTypePlugin.start_{training,evaluating,predicting}` hooks and the same in all subclasses ([10989](https://github.com/PyTorchLightning/pytorch-lightning/pull/10989), [10896](https://github.com/PyTorchLightning/pytorch-lightning/pull/10896))
- Removed `Accelerator.on_train_start` ([10999](https://github.com/PyTorchLightning/pytorch-lightning/pull/10999))
- Removed support for Python 3.6 ([11117](https://github.com/PyTorchLightning/pytorch-lightning/pull/11117))
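Of these removals, the dropped automatic detachment of `training_step` extras is the one most likely to require a code change; a minimal sketch (the `compute_loss_and_metric` helper is hypothetical):

```python
import pytorch_lightning as pl


class MyModule(pl.LightningModule):
    def training_step(self, batch, batch_idx):
        # compute_loss_and_metric is a hypothetical helper returning the loss
        # and an auxiliary tensor that still carries gradients.
        loss, foo = self.compute_loss_and_metric(batch)
        # Since #10424, extras returned from training_step are no longer
        # detached automatically; detach anything you do not want to keep
        # attached to the autograd graph.
        return {"loss": loss, "foo": foo.detach()}
```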
Fixed
- Fixed `NeptuneLogger` when using DDP ([11030](https://github.com/PyTorchLightning/pytorch-lightning/pull/11030))
- Fixed security vulnerabilities CVE-2020-1747 and CVE-2020-14343 caused by the `PyYAML` dependency ([11099](https://github.com/PyTorchLightning/pytorch-lightning/pull/11099))
- Fixed a bug so that hyperparameters are not logged to the logger when there are no hparams ([11105](https://github.com/PyTorchLightning/pytorch-lightning/issues/11105))
- Avoid the deprecated `onnx.export(example_outputs=...)` in torch 1.10 ([11116](https://github.com/PyTorchLightning/pytorch-lightning/pull/11116))
- Fixed an issue when torch-scripting a `LightningModule` after training with `Trainer(sync_batchnorm=True)` ([11078](https://github.com/PyTorchLightning/pytorch-lightning/pull/11078))
- Fixed an `AttributeError` occurring when using a `CombinedLoader` (multiple dataloaders) for prediction ([11111](https://github.com/PyTorchLightning/pytorch-lightning/pull/11111))
- Fixed a bug where `Trainer(track_grad_norm=..., logger=False)` would fail ([11114](https://github.com/PyTorchLightning/pytorch-lightning/pull/11114))
- Fixed logging on `{test,validation}_epoch_end` with multiple dataloaders ([11132](https://github.com/PyTorchLightning/pytorch-lightning/pull/11132))
- Fixed double evaluation bug with fault-tolerance enabled where the second call was completely skipped ([11119](https://github.com/PyTorchLightning/pytorch-lightning/pull/11119))
- Fixed an incorrect warning being produced by the model summary when using `bf16` precision on CPU ([11161](https://github.com/PyTorchLightning/pytorch-lightning/pull/11161))