Brand new Docs
With task guides, conceptual guides, integration guides, and code references all available at your fingertips, 🤗 PEFT's docs (found at https://huggingface.co/docs/peft) provide an insightful and easy-to-follow resource for anyone looking to learn how to use 🤗 PEFT. Whether you're a seasoned pro or just starting out, the documentation will help you get the most out of it.
* [WIP-docs] Accelerate scripts by stevhliu in https://github.com/huggingface/peft/pull/355
* [docs] Quicktour update by stevhliu in https://github.com/huggingface/peft/pull/346
* [docs] Conceptual overview of prompting methods by stevhliu in https://github.com/huggingface/peft/pull/339
* [docs] LoRA for token classification by stevhliu in https://github.com/huggingface/peft/pull/302
* [docs] int8 training by stevhliu in https://github.com/huggingface/peft/pull/332
* [docs] P-tuning for sequence classification by stevhliu in https://github.com/huggingface/peft/pull/281
* [docs] Prompt tuning for CLM by stevhliu in https://github.com/huggingface/peft/pull/264
* [docs] Prefix tuning for Seq2Seq by stevhliu in https://github.com/huggingface/peft/pull/272
* [docs] Add API references by stevhliu in https://github.com/huggingface/peft/pull/241
* [docs] Build notebooks from Markdown by stevhliu in https://github.com/huggingface/peft/pull/240
* [docs] Supported models tables by MKhalusova in https://github.com/huggingface/peft/pull/364
* [docs] Task guide with Dreambooth LoRA example by MKhalusova in https://github.com/huggingface/peft/pull/330
* [docs] LoRA conceptual guide by MKhalusova in https://github.com/huggingface/peft/pull/331
* [docs] Task guide for semantic segmentation with LoRA by MKhalusova in https://github.com/huggingface/peft/pull/307
* Move image classification example to the docs by MKhalusova in https://github.com/huggingface/peft/pull/239
Comprehensive Testing Suite
The new test suite comprises both unit and integration tests, rigorously exercising core features, examples, and various models on different setups, including single and multiple GPUs. This commitment to testing helps ensure that PEFT maintains the highest levels of correctness, usability, and performance, while continuously improving in all areas.
* [`CI`] Add ci tests by younesbelkada in https://github.com/huggingface/peft/pull/203
* Fix CI tests by younesbelkada in https://github.com/huggingface/peft/pull/210
* [`CI`] Add more ci tests by younesbelkada in https://github.com/huggingface/peft/pull/223
* [`tests`] Adds more tests + fix failing tests by younesbelkada in https://github.com/huggingface/peft/pull/238
* [`tests`] Adds GPU tests by younesbelkada in https://github.com/huggingface/peft/pull/256
* [`tests`] add slow tests to GH workflow by younesbelkada in https://github.com/huggingface/peft/pull/304
* [`core`] Better log messages by younesbelkada in https://github.com/huggingface/peft/pull/366
Multi Adapter Support
PEFT just got even more versatile with its new Multi Adapter Support! You can now train and run inference with multiple adapters, or even combine multiple LoRA adapters in a weighted combination. This is especially handy for RLHF training, where you can save memory by using a single base model with multiple adapters for the actor, critic, reward, and reference models. And the icing on the cake? Check out the LoRA Dreambooth inference example notebook to see this feature in action, or see the sketch after the PR list below.
* Multi Adapter support by pacman100 in https://github.com/huggingface/peft/pull/263
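As a rough illustration of the multi-adapter workflow, the sketch below loads two LoRA adapters onto one base model, switches between them, and combines them with weights. The model name, adapter paths, adapter names, and weights are placeholders, and the `add_weighted_adapter` call assumes the LoRA weighted-combination helper introduced by the PR above.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder base model

# Load a first adapter under an explicit name, then attach a second one
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="adapter_a")
model.load_adapter("path/to/adapter_b", adapter_name="adapter_b")  # placeholder paths

# Switch which adapter is active at inference time
model.set_adapter("adapter_b")

# Combine both LoRA adapters into a new weighted adapter (illustrative weights)
model.add_weighted_adapter(["adapter_a", "adapter_b"], [0.7, 0.3], adapter_name="blended")
model.set_adapter("blended")
```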
New PEFT methods: AdaLoRA and Adaption Prompt
PEFT just got even better, thanks to the contributions of the community! The AdaLoRA method is one of the exciting new additions. It takes the highly regarded LoRA method and improves it by adaptively allocating trainable parameters across the model to maximize performance within a given parameter budget. Another standout is the Adaption Prompt method, which enhances the already popular Prefix Tuning by introducing zero-init attention. A minimal AdaLoRA sketch follows the PR list below.
* The Implementation of AdaLoRA (ICLR 2023) by QingruZhang in https://github.com/huggingface/peft/pull/233
* Implement adaption prompt from Llama-Adapter paper by yeoedward in https://github.com/huggingface/peft/pull/268
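To give a feel for the new API, here is a minimal AdaLoRA sketch; the model name, target module names, and rank values are illustrative assumptions rather than recommended settings.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import AdaLoraConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # placeholder model

# AdaLoRA starts each incremental matrix at init_r and adaptively prunes
# ranks during training to meet the target_r average parameter budget
config = AdaLoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    init_r=12,
    target_r=4,
    lora_alpha=32,
    target_modules=["q", "v"],  # assumption: T5 attention projection names
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```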
New LoRA utilities
Good news for LoRA users! PEFT now allows you to merge LoRA parameters into the base model's parameters, giving you the freedom to remove the PEFT wrapper and apply downstream optimizations related to inference and deployment. Everything compatible with the base model then works out of the box; see the sketch after the PR list below.
* [`utils`] add merge_lora utility function by younesbelkada in https://github.com/huggingface/peft/pull/227
* Add nn.Embedding Support to Lora by Splo2t in https://github.com/huggingface/peft/pull/337
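As a rough sketch of the merge workflow (the model name, adapter path, and output directory are placeholders, and `merge_and_unload` is assumed to be the helper name added by the merge utility PR above):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder model
model = PeftModel.from_pretrained(base, "path/to/lora_adapter")   # placeholder adapter

# Fold the LoRA weights into the base weights and drop the PEFT wrapper;
# the result is a plain transformers model ready for deployment
merged = model.merge_and_unload()
merged.save_pretrained("opt-350m-merged")  # placeholder output directory
```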
What's Changed