Finetuning-scheduler

2.0.0

Added

- Support for PyTorch and PyTorch Lightning 2.0.0!
- New ``enforce_phase0_params`` feature. FTS now ensures that the optimizer configured in ``configure_optimizers`` optimizes the parameters (and only those parameters) scheduled to be optimized in phase ``0`` of the current fine-tuning schedule (a usage sketch follows this list). ([9](https://github.com/speediedan/finetuning-scheduler/pull/9))
- Support for ``torch.compile``
- Support for numerous new FSDP options including preview support for some FSDP options coming soon to Lightning (e.g. ``use_orig_params``)
- When using FTS with FSDP, support the use of ``_FSDPPolicy`` ``auto_wrap_policy`` wrappers (new in PyTorch 2.0.0)
- Extensive testing for FSDP in many newly supported 2.x contexts (including 1.x FSDP compatibility multi-GPU tests)
- Support for strategies that do not have a canonical `strategy_name` but use `_strategy_flag`
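
To illustrate the ``enforce_phase0_params`` item above, here is a minimal, hedged sketch. It assumes ``enforce_phase0_params`` is exposed as a ``FinetuningScheduler`` constructor flag and that the schedule can be supplied as an in-memory dict; the module and parameter names are placeholders.

```python
# Hedged sketch of the 2.0.0 ``enforce_phase0_params`` feature (names are placeholders).
from lightning.pytorch import Trainer
from finetuning_scheduler import FinetuningScheduler

# Illustrative two-phase schedule: phase 0 trains only the classifier head,
# phase 1 subsequently thaws the backbone.
ft_schedule = {
    0: {"params": ["model.classifier.*"]},
    1: {"params": ["model.backbone.*"]},
}

# Even if ``configure_optimizers`` naively passes all parameters to the optimizer,
# with ``enforce_phase0_params=True`` FTS aligns the optimizer with the schedule so
# that it optimizes the phase-0 parameters, and only those, at the start of training.
fts = FinetuningScheduler(ft_schedule=ft_schedule, enforce_phase0_params=True)
trainer = Trainer(callbacks=[fts])
# trainer.fit(MyLitModel())  # ``MyLitModel`` is a placeholder LightningModule
```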

Changed

- Now that the core Lightning package is `lightning` rather than `pytorch-lightning`, Fine-Tuning Scheduler (FTS) by default depends upon the `lightning` package rather than the standalone `pytorch-lightning`. If you would like to continue to use FTS with the standalone `pytorch-lightning` package instead, you can still do so (see [README](https://github.com/speediedan/finetuning-scheduler/blob/main/README.md)). Resolves ([#8](https://github.com/speediedan/finetuning-scheduler/issues/8)).
- Fine-Tuning Scheduler (FTS) major version numbers will align with the rest of the PyTorch ecosystem (e.g. FTS 2.x supports PyTorch and Lightning >= 2.0)
- Switched from ``flake8`` to ``ruff`` for linting
- Replaced `fsdp_optim_view` with either `fsdp_optim_transform` or `fsdp_optim_inspect`, depending on the usage context, because the transformation is no longer always read-only
- Moved Lightning 1.x examples to `legacy` subfolder and created new FTS/Lightning 2.x examples in `stable` subfolder


Removed

- Removed ``training_epoch_end`` and ``validation_epoch_end`` in accord with Lightning
- Removed `DP` strategy support in accord with Lightning
- Removed support for Python `3.7` and PyTorch `1.10` in accord with Lightning

Fixed

- Adapted loop synchronization during training resume to upstream Lightning changes

0.4.1

Added

- Support for ``pytorch-lightning`` 1.9.4 (which may be the final Lightning 1.x release, as PyTorch 2.0 is scheduled for release the following day)

0.4.0

Added

- **FSDP Scheduled Fine-Tuning** is now supported! [See the tutorial here.](https://finetuning-scheduler.readthedocs.io/en/stable/advanced/fsdp_scheduled_fine_tuning.html) (A minimal usage sketch follows this list.)
- Introduced [``StrategyAdapter``](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.strategy_adapters.html#finetuning_scheduler.strategy_adapters.StrategyAdapter)s. If you want to extend Fine-Tuning Scheduler (FTS) to support a custom, currently unsupported strategy, or to override current FTS behavior in the context of a given training strategy, subclassing ``StrategyAdapter`` is now a way to do so. See [``FSDPStrategyAdapter``](https://finetuning-scheduler.readthedocs.io/en/stable/api/finetuning_scheduler.strategy_adapters.html#finetuning_scheduler.strategy_adapters.FSDPStrategyAdapter) for an example implementation.
- Support for `pytorch-lightning` 1.9.0
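
As a companion to the FSDP item above, a minimal, hedged sketch of enabling FSDP scheduled fine-tuning in the pytorch-lightning 1.9.x timeframe; the strategy flag, device count, and schedule file name are illustrative assumptions, and the FSDP wrapping details are covered in the linked tutorial.

```python
# Hedged sketch: pairing FTS with native FSDP (pytorch-lightning 1.9.x era).
from pytorch_lightning import Trainer
from finetuning_scheduler import FinetuningScheduler

trainer = Trainer(
    strategy="fsdp_native",  # PL 1.9 native FSDP strategy flag (illustrative choice)
    accelerator="gpu",
    devices=2,
    # ``my_fsdp_schedule.yaml`` is a placeholder fine-tuning schedule file
    callbacks=[FinetuningScheduler(ft_schedule="my_fsdp_schedule.yaml")],
)
# trainer.fit(MyFSDPLitModule())  # placeholder LightningModule configured per the tutorial
```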

Changed

- Decomposed ``add_optimizer_groups`` to accommodate the corner case where FTS is used without an LR scheduler configuration; also cleaned up warning exceptions in example testing that were no longer required
- Updated the FTS repo issue template


Fixed

- Removed PATH adjustments that are no longer necessary due to https://github.com/Lightning-AI/lightning/pull/15485

Removed

- Removed references to the ``finetuning-scheduler`` conda-forge package (at least temporarily) due to the current unavailability of upstream dependencies (i.e. the [pytorch-lightning conda-forge package](https://anaconda.org/conda-forge/pytorch-lightning/files)). Installation of FTS via pip within a conda env is the recommended installation approach (both in the interim and in general).

0.3.4

Added

- Support for `pytorch-lightning` 1.8.6
- Notify the user when ``max_depth`` is reached and provide the current training session stopping conditions. Resolves [7](https://github.com/speediedan/finetuning-scheduler/issues/7).
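
A brief, hedged sketch of the ``max_depth`` behavior referenced above; the schedule path is a placeholder.

```python
# Hedged sketch: cap the schedule at phase 1 (i.e. execute phases 0 and 1 only).
# As of 0.3.4, FTS notifies the user when ``max_depth`` is reached and reports the
# stopping conditions in effect for the current training session.
from finetuning_scheduler import FinetuningScheduler

fts = FinetuningScheduler(ft_schedule="my_schedule.yaml", max_depth=1)  # placeholder schedule path
```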


Changed

- Set package version ceilings for the examples' requirements, along with a note explaining that they were introduced for stability
- Promoted PL CLI references to the top-level package

Fixed

- Replaced deprecated ``Batch`` object reference with ``LazyDict``

0.3.3

Added

- Support for `pytorch-lightning` 1.8.4

Changed

- Pinned `jsonargparse` dependency to <4.18.0 until [205](https://github.com/omni-us/jsonargparse/issues/205) is fixed

0.3.2

Added

- Support for `pytorch-lightning` 1.8.2
