**Blog post: https://adapterhub.ml/blog/2024/08/adapters-update-reft-qlora-merging-models**
This version is built for Hugging Face Transformers **v4.43.x**.
**New Adapter Methods & Model Support**
- Add **[Representation Fine-Tuning (ReFT)](https://arxiv.org/pdf/2404.03592)** implementation (LoReFT, NoReFT, DiReFT) (calpt via #705); usage sketch after this list
- Add LoRA weight merging via **Task Arithmetic** (lenglaender via #698); merging sketch after this list
- Add **Whisper** model support + notebook (TimoImhof via #693; julian-fong via #717); loading sketch after this list
- Add **Mistral** model support (KorventennFR via #609)
- Add **PLBart** model support (FahadEbrahim via #709)
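
The new ReFT methods plug into the standard adapter workflow. A minimal sketch, assuming the `LoReftConfig` preset introduced in this release (`NoReftConfig` and `DiReftConfig` follow the same pattern):

```python
from adapters import AutoAdapterModel, LoReftConfig

# Load a base model with adapter support
model = AutoAdapterModel.from_pretrained("roberta-base")

# Add a LoReFT intervention and activate it for training
model.add_adapter("reft_demo", config=LoReftConfig())
model.train_adapter("reft_demo")
```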
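LoRA merging builds on `average_adapter()`. A minimal sketch of a weighted linear combination of two LoRA adapters; the `combine_strategy` parameter reflects this release's additions, and the exact argument names are an assumption about the API rather than a verbatim reference:

```python
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")
# Two LoRA adapters, e.g. trained on different tasks (left untrained here)
model.add_adapter("task_a", config=LoRAConfig())
model.add_adapter("task_b", config=LoRAConfig())

# Merge them into a new adapter as a weighted combination of their weights
model.average_adapter(
    adapter_name="task_merged",
    adapter_list=["task_a", "task_b"],
    weights=[0.7, 0.3],
    combine_strategy="linear",
)
model.set_active_adapters("task_merged")
```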
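The new model integrations follow the usual `AutoAdapterModel` pattern. A sketch for Whisper, assuming the `openai/whisper-small` checkpoint; Mistral and PLBart work analogously with their respective checkpoints:

```python
from adapters import AutoAdapterModel

# Load Whisper with adapter support (new in this release)
model = AutoAdapterModel.from_pretrained("openai/whisper-small")

# Add a sequential bottleneck adapter and activate it for training
model.add_adapter("asr_demo", config="seq_bn")
model.train_adapter("asr_demo")
```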
**Breaking Changes & Deprecations**
- Remove support for loading adapters from the archived Hub repository (calpt via #724)
- Remove deprecated `add_fusion()` & `train_fusion()` methods (calpt via #714)
- Remove deprecated arguments from the `push_adapter_to_hub()` method (calpt via #724)
- Deprecate support for passing plain Python lists for adapter activation (calpt via #714); use explicit composition blocks instead, as sketched after this list
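
A migration sketch for the list deprecation, using the existing `adapters.composition` blocks (here `Stack`) as the replacement:

```python
import adapters.composition as ac
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("a")
model.add_adapter("b")

# Deprecated: implicit composition via a plain Python list
# model.set_active_adapters(["a", "b"])

# Preferred: an explicit composition block
model.set_active_adapters(ac.Stack("a", "b"))
```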
**Minor Fixes & Changes**
- Upgrade supported Transformers version (calpt & lenglaender via #712, #719, #727)
- Fix SDPA and Flash Attention support for Llama (calpt via #722)
- Fix gradient checkpointing for Llama and for Bottleneck adapters (calpt via #730)