Added
- Support for Lightning and PyTorch ``2.5.0``
- FTS support for PyTorch's composable distributed APIs (e.g. ``fully_shard``, ``checkpoint``) and Tensor Parallelism (TP)
- Support for Lightning's ``ModelParallelStrategy``
- Experimental 'Auto' FSDP2 Plan Configuration feature, allowing application of the ``fully_shard`` API via module
name/pattern-based configuration instead of manually inspecting modules and applying the API in ``LightningModule.configure_model`` (see the sketch after this list)
- FSDP2 'Auto' Plan Convenience Aliases, simplifying use of both composable and non-composable activation checkpointing APIs
- Flexible orchestration of advanced profiling, combining multiple complementary PyTorch profilers with FTS's ``MemProfiler`` (see the profiling sketch below)
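
For context, below is a minimal sketch of the manual workflow the experimental 'Auto' plan configuration is intended to replace: inspecting modules and applying PyTorch's composable ``fully_shard`` and ``checkpoint`` APIs directly in ``LightningModule.configure_model`` under Lightning's ``ModelParallelStrategy``. The module (``ToyTransformer``) and its dimensions are illustrative only; consult the FTS documentation for the actual name/pattern-based plan options.

```python
import lightning as L
import torch.nn as nn
from lightning.pytorch.strategies import ModelParallelStrategy
from torch.distributed._composable import checkpoint  # composable activation checkpointing
from torch.distributed._composable.fsdp import fully_shard  # FSDP2 (PyTorch 2.5)


class ToyTransformer(L.LightningModule):  # illustrative module, not part of FTS
    def __init__(self, n_layers: int = 4, d_model: int = 128):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True) for _ in range(n_layers)
        )
        self.head = nn.Linear(d_model, 10)

    def configure_model(self) -> None:
        # the manual approach: inspect and wrap each block, then the root module
        dp_mesh = self.device_mesh["data_parallel"]
        for block in self.blocks:
            checkpoint(block)  # composable (non-reentrant) activation checkpointing
            fully_shard(block, mesh=dp_mesh)
        fully_shard(self, mesh=dp_mesh)


trainer = L.Trainer(strategy=ModelParallelStrategy(), accelerator="gpu", devices=2)
```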
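
Likewise, a generic sketch of the kind of complementary PyTorch profiling facilities such orchestration draws on (``torch.profiler`` traces alongside CUDA allocator memory-history snapshots). The function name ``profile_step`` and the snapshot path are illustrative assumptions; this is not the ``MemProfiler`` interface itself.

```python
import torch
from torch.profiler import ProfilerActivity, profile


def profile_step(model: torch.nn.Module, batch: torch.Tensor, snapshot_path: str = "mem_snapshot.pickle"):
    """Run one forward/backward pass under complementary profilers (names/paths illustrative)."""
    if torch.cuda.is_available():
        # begin recording allocator events for a detailed memory-history snapshot
        torch.cuda.memory._record_memory_history(max_entries=100_000)
    with profile(
        activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
        profile_memory=True,  # collect per-op memory stats
        record_shapes=True,
    ) as prof:
        model(batch).sum().backward()
    print(prof.key_averages().table(sort_by="self_cuda_memory_usage", row_limit=10))
    if torch.cuda.is_available():
        torch.cuda.memory._dump_snapshot(snapshot_path)  # viewable at pytorch.org/memory_viz
        torch.cuda.memory._record_memory_history(enabled=None)  # stop recording
```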
Fixed
- Added logic to more robustly condition depth-aligned checkpoint metadata updates, addressing edge cases where ``current_score`` precisely equaled ``best_model_score`` at multiple different depths. Resolved [15](https://github.com/speediedan/finetuning-scheduler/issues/15).
Deprecated
- As upstream PyTorch [has deprecated](https://github.com/pytorch/pytorch/issues/138506) official Anaconda channel builds, ``finetuning-scheduler`` will no longer release conda builds. Installing FTS via pip (irrespective of the virtual environment used) is now the recommended approach.
- Removed support for PyTorch ``2.1``