PEFT

0.4.0.dev0

* do not use self.device. In FSDP cpu offload mode. self.device is "CPU… by sywangyi in https://github.com/huggingface/peft/pull/352
* add accelerate example for DDP and FSDP in sequence classification fo… by sywangyi in https://github.com/huggingface/peft/pull/358
* [`CI`] Fix CI - pin urlib by younesbelkada in https://github.com/huggingface/peft/pull/402
* [docs] Fix index by stevhliu in https://github.com/huggingface/peft/pull/397
* Fix documentation links on index page by mikeorzel in https://github.com/huggingface/peft/pull/406
* Zero 3 init ReadME update by dumpmemory in https://github.com/huggingface/peft/pull/399
* [`Tests`] Add soundfile to docker images by younesbelkada in https://github.com/huggingface/peft/pull/401
* 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by TimDettmers in https://github.com/huggingface/peft/pull/476 (see the usage sketch after this list)
* [`core`] Protect 4bit import by younesbelkada in https://github.com/huggingface/peft/pull/480
* [`core`] Raise warning on using `prepare_model_for_int8_training` by younesbelkada in https://github.com/huggingface/peft/pull/483
* Remove merge_weights by Atry in https://github.com/huggingface/peft/pull/392
* [`core`] Add gradient checkpointing check by younesbelkada in https://github.com/huggingface/peft/pull/404
* [docs] Fix LoRA image classification docs by stevhliu in https://github.com/huggingface/peft/pull/524
* [docs] Prettify index by stevhliu in https://github.com/huggingface/peft/pull/478
* change comment in tuners.lora, lora_alpha float to int by codingchild2424 in https://github.com/huggingface/peft/pull/448
* [`LoRA`] Allow applying LoRA at different stages by younesbelkada in https://github.com/huggingface/peft/pull/429
* Enable PeftConfig & PeftModel to load from revision by lewtun in https://github.com/huggingface/peft/pull/433
* [`Llama-Adapter`] fix half precision inference + add tests by younesbelkada in https://github.com/huggingface/peft/pull/456
* fix merge_and_unload when LoRA targets embedding layer by 0x000011b in https://github.com/huggingface/peft/pull/438
* return load_result when load_adapter by dkqkxx in https://github.com/huggingface/peft/pull/481
* Fixed problem with duplicate same code. by hotchpotch in https://github.com/huggingface/peft/pull/517
* Add starcoder model to target modules dict by mrm8488 in https://github.com/huggingface/peft/pull/528
* Fix a minor typo where a non-default token_dim would crash prompt tuning by thomas-schillaci in https://github.com/huggingface/peft/pull/459
* Remove device_map when training 4,8-bit model. by SunMarc in https://github.com/huggingface/peft/pull/534
* add library name to model card by pacman100 in https://github.com/huggingface/peft/pull/549
* Add thousands separator in print_trainable_parameters by BramVanroy in https://github.com/huggingface/peft/pull/443
* [doc build] Use secrets by mishig25 in https://github.com/huggingface/peft/pull/556
* improve readability of LoRA code by martin-liu in https://github.com/huggingface/peft/pull/409
* [`core`] Add safetensors integration by younesbelkada in https://github.com/huggingface/peft/pull/553
* [`core`] Fix config kwargs by younesbelkada in https://github.com/huggingface/peft/pull/561
* Fix typo and url to `openai/whisper-large-v2` by alvarobartt in https://github.com/huggingface/peft/pull/563
* feat: add type hint to `get_peft_model` by samsja in https://github.com/huggingface/peft/pull/566
* Add issues template by younesbelkada in https://github.com/huggingface/peft/pull/562
* [BugFix] Set alpha and dropout defaults in LoraConfig by apbard in https://github.com/huggingface/peft/pull/390
* enable lora for mpt by sywangyi in https://github.com/huggingface/peft/pull/576
* Fix minor typo in bug-report.yml by younesbelkada in https://github.com/huggingface/peft/pull/582
* [`core`] Correctly passing the kwargs all over the place by younesbelkada in https://github.com/huggingface/peft/pull/575
* Fix adalora device mismatch issue by younesbelkada in https://github.com/huggingface/peft/pull/583
* LoRA for Conv2d layer, script to convert kohya_ss LoRA to PEFT by kovalexal in https://github.com/huggingface/peft/pull/461
* Fix typo at peft_model.py by Beomi in https://github.com/huggingface/peft/pull/588
* [`test`] Adds more CI tests by younesbelkada in https://github.com/huggingface/peft/pull/586
* when from_pretrained is called in finetune case of lora with flag "… by sywangyi in https://github.com/huggingface/peft/pull/591
* feat: Add PeftModelForQuestionAnswering by sjrl in https://github.com/huggingface/peft/pull/473
* Improve the README when using PEFT by pacman100 in https://github.com/huggingface/peft/pull/594
* [`tests`] Fix dockerfile by younesbelkada in https://github.com/huggingface/peft/pull/608
* Fix final failing slow tests by younesbelkada in https://github.com/huggingface/peft/pull/609
* [`core`] Add `adapter_name` in `get_peft_model` by younesbelkada in https://github.com/huggingface/peft/pull/610
* [`core`] Stronger import of bnb by younesbelkada in https://github.com/huggingface/peft/pull/605
* Added Civitai LoRAs conversion to PEFT, PEFT LoRAs conversion to webui by kovalexal in https://github.com/huggingface/peft/pull/596
* update whisper test by pacman100 in https://github.com/huggingface/peft/pull/617
* Update README.md, citation by pminervini in https://github.com/huggingface/peft/pull/616
* Update train_dreambooth.py by nafiturgut in https://github.com/huggingface/peft/pull/624
* [`Adalora`] Add adalora 4bit by younesbelkada in https://github.com/huggingface/peft/pull/598
* [`AdaptionPrompt`] Add 8bit + 4bit support for adaption prompt by younesbelkada in https://github.com/huggingface/peft/pull/604
* Add seq2seq prompt tuning support by thomas-schillaci in https://github.com/huggingface/peft/pull/519
* [Bugfix] Fixed LoRA conv2d merge by kovalexal in https://github.com/huggingface/peft/pull/637
* [Bugfix] Inserted adapter_name to get_peft_model_state_dict function by nafiturgut in https://github.com/huggingface/peft/pull/626
* fix Prefix-tuning error in clm Float16 evaluation by sywangyi in https://github.com/huggingface/peft/pull/520
* fix ptun and prompt tuning generation issue by sywangyi in https://github.com/huggingface/peft/pull/543
* feat(model): Allow from_pretrained to accept PeftConfig class by aarnphm in https://github.com/huggingface/peft/pull/612
* Fix `PeftModel.disable_adapter` by ain-soph in https://github.com/huggingface/peft/pull/644
* bitsandbytes version check by glerzing in https://github.com/huggingface/peft/pull/646
* DOC: Remove loralib requirements from examples, a few small fixes by BenjaminBossan in https://github.com/huggingface/peft/pull/640
* style: tentatively add hints for some public function by aarnphm in https://github.com/huggingface/peft/pull/614
* Add pytest-cov for reporting test coverage by BenjaminBossan in https://github.com/huggingface/peft/pull/641
* Require Python version >= 3.8 by BenjaminBossan in https://github.com/huggingface/peft/pull/649
* Fixed LoraConfig alpha modification on add_weighted_adapter by kovalexal in https://github.com/huggingface/peft/pull/654
* [docs] API example by stevhliu in https://github.com/huggingface/peft/pull/650
* FIX: bug resulting in config copies not working by BenjaminBossan in https://github.com/huggingface/peft/pull/653
* Update clm-prompt-tuning.mdx by richard087 in https://github.com/huggingface/peft/pull/652
* Adding support for RoBERTa layers_transform in COMMON_LAYERS_PATTERN by sunyuhan19981208 in https://github.com/huggingface/peft/pull/669
* TST: Remove skipping certain tests by BenjaminBossan in https://github.com/huggingface/peft/pull/668
* Added wandb support for lora train_dreambooth by nafiturgut in https://github.com/huggingface/peft/pull/639
* FIX: Embedding LoRA weights are initialized randomly by BenjaminBossan in https://github.com/huggingface/peft/pull/681
* Fix broken docker images by younesbelkada in https://github.com/huggingface/peft/pull/684
* Add functionality to support IA3 by SumanthRH in https://github.com/huggingface/peft/pull/578
* Fix base_model_torch_dtype when using model.half() after init by rayrayraykk in https://github.com/huggingface/peft/pull/688
* Init IA³ weights randomly when so configured by BenjaminBossan in https://github.com/huggingface/peft/pull/693
* add support for Feature Extraction using PEFT by pacman100 in https://github.com/huggingface/peft/pull/647
* Fix a small bug in forward method of IA³ by BenjaminBossan in https://github.com/huggingface/peft/pull/696
* Update Readme to include IA3 by SumanthRH in https://github.com/huggingface/peft/pull/702
* Fix code typo in int8-asr.mdx by zsamboki in https://github.com/huggingface/peft/pull/698
* chore(type): annotate that peft does contains type hints by aarnphm in https://github.com/huggingface/peft/pull/678
* Introducing `AutoPeftModelForxxx` by younesbelkada in https://github.com/huggingface/peft/pull/694
* [WIP] FIX for disabling adapter, adding tests by BenjaminBossan in https://github.com/huggingface/peft/pull/683
* [Core] Enhancements and refactoring of LoRA method by pacman100 in https://github.com/huggingface/peft/pull/695
* [`Feature`] Save only selected adapters for LoRA by younesbelkada in https://github.com/huggingface/peft/pull/705
* [`Auto`] Support `AutoPeftModel` for custom HF models by younesbelkada in https://github.com/huggingface/peft/pull/707
* FEAT: Make LoRA work with custom models by BenjaminBossan in https://github.com/huggingface/peft/pull/676
* [`core`] Better hub kwargs management by younesbelkada in https://github.com/huggingface/peft/pull/712
* FIX: Removes warnings about unknown pytest marker by BenjaminBossan in https://github.com/huggingface/peft/pull/715
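
The headline feature of this release is 4-bit QLoRA (#476 above): the base model is loaded in 4-bit precision via bitsandbytes, and LoRA adapters are trained on top of the frozen quantized weights. Here is a minimal sketch of that workflow, assuming transformers, bitsandbytes, and accelerate are installed; the model id and hyperparameters are illustrative, not taken from the release notes:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Quantize the base model to 4-bit (NF4) at load time via bitsandbytes.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",  # illustrative model id
    quantization_config=bnb_config,
    device_map="auto",
)

# Prepare the quantized model for training (freezes base weights,
# casts norms to fp32, enables input gradients).
model = prepare_model_for_kbit_training(model)

# Attach trainable LoRA adapters on top of the frozen 4-bit base.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA parameters are trainable
```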

New Contributors
* sywangyi made their first contribution in https://github.com/huggingface/peft/pull/352
* mikeorzel made their first contribution in https://github.com/huggingface/peft/pull/406
* TimDettmers made their first contribution in https://github.com/huggingface/peft/pull/476
* Atry made their first contribution in https://github.com/huggingface/peft/pull/392
* codingchild2424 made their first contribution in https://github.com/huggingface/peft/pull/448
* lewtun made their first contribution in https://github.com/huggingface/peft/pull/433
* 0x000011b made their first contribution in https://github.com/huggingface/peft/pull/438
* dkqkxx made their first contribution in https://github.com/huggingface/peft/pull/481
* hotchpotch made their first contribution in https://github.com/huggingface/peft/pull/517
* thomas-schillaci made their first contribution in https://github.com/huggingface/peft/pull/459
* SunMarc made their first contribution in https://github.com/huggingface/peft/pull/534
* BramVanroy made their first contribution in https://github.com/huggingface/peft/pull/443
* mishig25 made their first contribution in https://github.com/huggingface/peft/pull/556
* martin-liu made their first contribution in https://github.com/huggingface/peft/pull/409
* alvarobartt made their first contribution in https://github.com/huggingface/peft/pull/563
* samsja made their first contribution in https://github.com/huggingface/peft/pull/566
* apbard made their first contribution in https://github.com/huggingface/peft/pull/390
* kovalexal made their first contribution in https://github.com/huggingface/peft/pull/461
* Beomi made their first contribution in https://github.com/huggingface/peft/pull/588
* sjrl made their first contribution in https://github.com/huggingface/peft/pull/473
* pminervini made their first contribution in https://github.com/huggingface/peft/pull/616
* nafiturgut made their first contribution in https://github.com/huggingface/peft/pull/624
* aarnphm made their first contribution in https://github.com/huggingface/peft/pull/612
* ain-soph made their first contribution in https://github.com/huggingface/peft/pull/644
* glerzing made their first contribution in https://github.com/huggingface/peft/pull/646
* BenjaminBossan made their first contribution in https://github.com/huggingface/peft/pull/640
* richard087 made their first contribution in https://github.com/huggingface/peft/pull/652
* sunyuhan19981208 made their first contribution in https://github.com/huggingface/peft/pull/669
* SumanthRH made their first contribution in https://github.com/huggingface/peft/pull/578
* rayrayraykk made their first contribution in https://github.com/huggingface/peft/pull/688
* zsamboki made their first contribution in https://github.com/huggingface/peft/pull/698

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.3.0...v0.4.0

Significant community contributions
The following contributors have made significant changes to the library over the last release:

TimDettmers
* 4-bit QLoRA via bitsandbytes (4-bit base model + LoRA) by TimDettmers in https://github.com/huggingface/peft/pull/476

SumanthRH
* Add functionality to support IA3 by SumanthRH in https://github.com/huggingface/peft/pull/578

kovalexal
* LoRA for Conv2d layer, script to convert kohya_ss LoRA to PEFT by kovalexal in https://github.com/huggingface/peft/pull/461
* Added Civitai LoRAs conversion to PEFT, PEFT LoRAs conversion to webui by kovalexal in https://github.com/huggingface/peft/pull/596
* [Bugfix] Fixed LoRA conv2d merge by kovalexal in https://github.com/huggingface/peft/pull/637
* Fixed LoraConfig alpha modification on add_weighted_adapter by kovalexal in https://github.com/huggingface/peft/pull/654

sywangyi
* do not use self.device. In FSDP cpu offload mode. self.device is "CPU… by sywangyi in https://github.com/huggingface/peft/pull/352
* add accelerate example for DDP and FSDP in sequence classification fo… by sywangyi in https://github.com/huggingface/peft/pull/358
* enable lora for mpt by sywangyi in https://github.com/huggingface/peft/pull/576
* fix Prefix-tuning error in clm Float16 evaluation by sywangyi in https://github.com/huggingface/peft/pull/520
* fix ptun and prompt tuning generation issue by sywangyi in https://github.com/huggingface/peft/pull/543
* when from_pretrained is called in finetune case of lora with flag "… by sywangyi in https://github.com/huggingface/peft/pull/591

aarnphm
* feat(model): Allow from_pretrained to accept PeftConfig class by aarnphm in https://github.com/huggingface/peft/pull/612
* style: tentatively add hints for some public function by aarnphm in https://github.com/huggingface/peft/pull/614
* chore(type): annotate that peft does contains type hints by aarnphm in https://github.com/huggingface/peft/pull/678

martin-liu
* improve readability of LoRA code by martin-liu in https://github.com/huggingface/peft/pull/409

thomas-schillaci
* Add seq2seq prompt tuning support by thomas-schillaci in https://github.com/huggingface/peft/pull/519

0.3.0

Brand new Docs
With task guides, conceptual guides, integration guides, and code references all available at your fingertips, 🤗 PEFT's docs (found at https://huggingface.co/docs/peft) provide an insightful and easy-to-follow resource for anyone looking to learn how to use 🤗 PEFT. Whether you're a seasoned pro or just starting out, PEFT's documentation will help you get the most out of it.

* [WIP-docs] Accelerate scripts by stevhliu in https://github.com/huggingface/peft/pull/355
* [docs] Quicktour update by stevhliu in https://github.com/huggingface/peft/pull/346
* [docs] Conceptual overview of prompting methods by stevhliu in https://github.com/huggingface/peft/pull/339
* [docs] LoRA for token classification by stevhliu in https://github.com/huggingface/peft/pull/302
* [docs] int8 training by stevhliu in https://github.com/huggingface/peft/pull/332
* [docs] P-tuning for sequence classification by stevhliu in https://github.com/huggingface/peft/pull/281
* [docs] Prompt tuning for CLM by stevhliu in https://github.com/huggingface/peft/pull/264
* [docs] Prefix tuning for Seq2Seq by stevhliu in https://github.com/huggingface/peft/pull/272
* [docs] Add API references by stevhliu in https://github.com/huggingface/peft/pull/241
* [docs] Build notebooks from Markdown by stevhliu in https://github.com/huggingface/peft/pull/240
* [docs] Supported models tables by MKhalusova in https://github.com/huggingface/peft/pull/364
* [docs] Task guide with Dreambooth LoRA example by MKhalusova in https://github.com/huggingface/peft/pull/330
* [docs] LoRA conceptual guide by MKhalusova in https://github.com/huggingface/peft/pull/331
* [docs] Task guide for semantic segmentation with LoRA by MKhalusova in https://github.com/huggingface/peft/pull/307
* Move image classification example to the docs by MKhalusova in https://github.com/huggingface/peft/pull/239

Comprehensive Testing Suite
The new test suite comprises both unit and integration tests, and rigorously exercises core features, examples, and various models on different setups, including single and multiple GPUs. This commitment to testing helps ensure that PEFT maintains the highest levels of correctness, usability, and performance, while continuously improving in all areas.

* [`CI`] Add ci tests by younesbelkada in https://github.com/huggingface/peft/pull/203
* Fix CI tests by younesbelkada in https://github.com/huggingface/peft/pull/210
* [`CI`] Add more ci tests by younesbelkada in https://github.com/huggingface/peft/pull/223
* [`tests`] Adds more tests + fix failing tests by younesbelkada in https://github.com/huggingface/peft/pull/238
* [`tests`] Adds GPU tests by younesbelkada in https://github.com/huggingface/peft/pull/256
* [`tests`] add slow tests to GH workflow by younesbelkada in https://github.com/huggingface/peft/pull/304
* [`core`] Better log messages by younesbelkada in https://github.com/huggingface/peft/pull/366

Multi Adapter Support
PEFT just got even more versatile with its new Multi Adapter Support! Now you can train and infer with multiple adapters, or even combine multiple LoRA adapters in a weighted combination. This is especially handy for RLHF training, where you can save memory by using a single base model with multiple adapters for actor, critic, reward, and reference. And the icing on the cake? Check out the LoRA Dreambooth inference example notebook to see this feature in action.

* Multi Adapter support by pacman100 in https://github.com/huggingface/peft/pull/263
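
A minimal sketch of the multi-adapter workflow; the Hub adapter ids below are hypothetical placeholders, and the `add_weighted_adapter` signature shown is the one introduced here (it has since grown extra options):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Load a first adapter under an explicit name (hypothetical repo ids).
model = PeftModel.from_pretrained(base, "user/opt-lora-sql", adapter_name="sql")
# Load a second adapter into the same wrapper.
model.load_adapter("user/opt-lora-chat", adapter_name="chat")

# Switch which adapter is active at inference time.
model.set_adapter("chat")

# Or combine both LoRAs into a new adapter as a weighted combination.
model.add_weighted_adapter(["sql", "chat"], weights=[0.7, 0.3], adapter_name="blend")
model.set_adapter("blend")
```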

New PEFT methods: AdaLoRA and Adaption Prompt
PEFT just got even better, thanks to the contributions of the community! The AdaLoRA method is one of the exciting new additions. It takes the highly regarded LoRA method and improves it by adaptively allocating trainable parameters across the model to maximize performance within a given parameter budget. Another standout is the Adaption Prompt method, which enhances the already popular Prefix Tuning by introducing zero-init attention. A configuration sketch for AdaLoRA follows the PR links below.

* The Implementation of AdaLoRA (ICLR 2023) by QingruZhang in https://github.com/huggingface/peft/pull/233
* Implement adaption prompt from Llama-Adapter paper by yeoedward in https://github.com/huggingface/peft/pull/268
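
To make the budget-allocation idea concrete, here is a hedged configuration sketch for AdaLoRA; the rank values are illustrative. During training, AdaLoRA also expects `update_and_allocate(global_step)` to be called each optimizer step so the allocator can redistribute rank:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import AdaLoraConfig, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# AdaLoRA starts every module at an initial rank and prunes singular values
# during training until the average rank meets the target budget.
config = AdaLoraConfig(
    init_r=12,                 # starting rank of each incremental matrix
    target_r=4,                # average rank after budget allocation
    lora_alpha=32,
    target_modules=["q", "v"],
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()

# Inside the training loop (illustrative):
#   loss.backward(); optimizer.step()
#   model.base_model.update_and_allocate(global_step)
```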

New LoRA utilities
Good news for LoRA users! PEFT now allows you to merge LoRA parameters into the base model's parameters, giving you the freedom to remove the PEFT wrapper and apply downstream optimizations related to inference and deployment. Plus, you can use all the features that are compatible with the base model without any issues.

* [`utils`] add merge_lora utility function by younesbelkada in https://github.com/huggingface/peft/pull/227
* Add nn.Embedding Support to Lora by Splo2t in https://github.com/huggingface/peft/pull/337
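
A short sketch of the merge workflow; the adapter repo id is a hypothetical placeholder:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "user/opt-350m-lora")  # hypothetical adapter

# Fold the LoRA deltas into the base weights and drop the PEFT wrapper;
# the result is a plain transformers model, ready for standard deployment.
merged = model.merge_and_unload()
merged.save_pretrained("opt-350m-merged")
```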

What's Changed

0.3.0.dev0

* fixing merged_linear lora issues by pacman100 in https://github.com/huggingface/peft/pull/172
* Replace base_model's function temporarily by PanQiWei in https://github.com/huggingface/peft/pull/170
* Support for LLaMA models by zphang in https://github.com/huggingface/peft/pull/160
* [`core`] Fix peft multi-gpu issue by younesbelkada in https://github.com/huggingface/peft/pull/145
* Update README.md by dumpmemory in https://github.com/huggingface/peft/pull/167
* ChatGLM support by mymusise in https://github.com/huggingface/peft/pull/180
* [`CI`] Add ci tests by younesbelkada in https://github.com/huggingface/peft/pull/203
* Fix CI tests by younesbelkada in https://github.com/huggingface/peft/pull/210
* Update train_dreambooth.py by haofanwang in https://github.com/huggingface/peft/pull/204
* Fix failing test on `main` by younesbelkada in https://github.com/huggingface/peft/pull/224
* Causal LM generation fix for prefix tuning: GPT2 model by vineetm in https://github.com/huggingface/peft/pull/222
* [`CI`] Add more ci tests by younesbelkada in https://github.com/huggingface/peft/pull/223
* Show CONFIG_NAME instead of "config.json" by aitor-gamarra in https://github.com/huggingface/peft/pull/231
* add docs by pacman100 in https://github.com/huggingface/peft/pull/214
* [`utils`] add merge_lora utility function by younesbelkada in https://github.com/huggingface/peft/pull/227
* Have fix typo in README by guspan-tanadi in https://github.com/huggingface/peft/pull/243
* Move image classification example to the docs by MKhalusova in https://github.com/huggingface/peft/pull/239
* [docs] Add API references by stevhliu in https://github.com/huggingface/peft/pull/241
* [docs] Build notebooks from Markdown by stevhliu in https://github.com/huggingface/peft/pull/240
* [`core`] Fix offload issue by younesbelkada in https://github.com/huggingface/peft/pull/248
* [`Automation`] Add stale bot by younesbelkada in https://github.com/huggingface/peft/pull/247
* [resources] replace pdf links with abs links by stas00 in https://github.com/huggingface/peft/pull/255
* [`Automation`] Update stale.py by younesbelkada in https://github.com/huggingface/peft/pull/254
* docs: have fix bit typo README by guspan-tanadi in https://github.com/huggingface/peft/pull/252
* Update other.py by tpoisonooo in https://github.com/huggingface/peft/pull/250
* Fixing a bug where a wrong parameter name is used for the offload_folder by toncho11 in https://github.com/huggingface/peft/pull/257
* [`tests`] Adds more tests + fix failing tests by younesbelkada in https://github.com/huggingface/peft/pull/238
* The Implementation of AdaLoRA (ICLR 2023) by QingruZhang in https://github.com/huggingface/peft/pull/233
* Add BLIP2 Example by younesbelkada in https://github.com/huggingface/peft/pull/260
* Multi Adapter support by pacman100 in https://github.com/huggingface/peft/pull/263
* Fix typo in examples/causal_language_modeling/peft_lora_clm_accelerate_ds_zero3_offload.py by rmill040 in https://github.com/huggingface/peft/pull/277
* [`tests`] Adds GPU tests by younesbelkada in https://github.com/huggingface/peft/pull/256
* Fix half precision forward by younesbelkada in https://github.com/huggingface/peft/pull/261
* fix trainable params setting by pacman100 in https://github.com/huggingface/peft/pull/283
* [docs] Prefix tuning for Seq2Seq by stevhliu in https://github.com/huggingface/peft/pull/272
* Fix lora_dropout operator type when dropout=0 by bigeagle in https://github.com/huggingface/peft/pull/288
* [`test`] Add Dockerfile by younesbelkada in https://github.com/huggingface/peft/pull/278
* fix and update examples and readme by pacman100 in https://github.com/huggingface/peft/pull/295
* [docs] Prompt tuning for CLM by stevhliu in https://github.com/huggingface/peft/pull/264
* Change gather for gather_for_metrics in eval. by JulesGM in https://github.com/huggingface/peft/pull/296
* Fix: unexpected keyword argument 'has_fp16_weights' by cyberfox in https://github.com/huggingface/peft/pull/299
* [`tests`] add CI training tests by younesbelkada in https://github.com/huggingface/peft/pull/311
* [docs] Task guide for semantic segmentation with LoRA by MKhalusova in https://github.com/huggingface/peft/pull/307
* [docs] P-tuning for sequence classification by stevhliu in https://github.com/huggingface/peft/pull/281
* Fix `merge_and_unload` when having additional trainable modules by pacman100 in https://github.com/huggingface/peft/pull/322
* feat(ci): add `pip` caching to CI by SauravMaheshkar in https://github.com/huggingface/peft/pull/314
* Fix eval for causal language modeling example by BabyChouSr in https://github.com/huggingface/peft/pull/327
* [docs] LoRA for token classification by stevhliu in https://github.com/huggingface/peft/pull/302
* [docs] int8 training by stevhliu in https://github.com/huggingface/peft/pull/332
* fix lora modules_to_save issue by pacman100 in https://github.com/huggingface/peft/pull/343
* [docs] Task guide with Dreambooth LoRA example by MKhalusova in https://github.com/huggingface/peft/pull/330
* [docs] LoRA conceptual guide by MKhalusova in https://github.com/huggingface/peft/pull/331
* [docs] Conceptual overview of prompting methods by stevhliu in https://github.com/huggingface/peft/pull/339
* Implement adaption prompt from Llama-Adapter paper by yeoedward in https://github.com/huggingface/peft/pull/268
* [`tests`] add slow tests to GH workflow by younesbelkada in https://github.com/huggingface/peft/pull/304
* [`core`] Better log messages by younesbelkada in https://github.com/huggingface/peft/pull/366
* Use `try` and `finally` in `disable_adapter()` to catch exceptions by mukobi in https://github.com/huggingface/peft/pull/368
* [docs] Supported models tables by MKhalusova in https://github.com/huggingface/peft/pull/364
* [WIP-docs] Accelerate scripts by stevhliu in https://github.com/huggingface/peft/pull/355
* [docs] Quicktour update by stevhliu in https://github.com/huggingface/peft/pull/346
* [`CI`] Fix nightly CI issues by younesbelkada in https://github.com/huggingface/peft/pull/375
* Fix a link to the example script by nzw0301 in https://github.com/huggingface/peft/pull/383
* Add nn.Embedding Support to Lora by Splo2t in https://github.com/huggingface/peft/pull/337
* Fix missing arg for transpose in AdaLora by yasyf in https://github.com/huggingface/peft/pull/347
* fix INT8 prepare function by pacman100 in https://github.com/huggingface/peft/pull/389

New Contributors
* PanQiWei made their first contribution in https://github.com/huggingface/peft/pull/170
* mymusise made their first contribution in https://github.com/huggingface/peft/pull/180
* haofanwang made their first contribution in https://github.com/huggingface/peft/pull/204
* vineetm made their first contribution in https://github.com/huggingface/peft/pull/222
* aitor-gamarra made their first contribution in https://github.com/huggingface/peft/pull/231
* guspan-tanadi made their first contribution in https://github.com/huggingface/peft/pull/243
* MKhalusova made their first contribution in https://github.com/huggingface/peft/pull/239
* stevhliu made their first contribution in https://github.com/huggingface/peft/pull/241
* stas00 made their first contribution in https://github.com/huggingface/peft/pull/255
* tpoisonooo made their first contribution in https://github.com/huggingface/peft/pull/250
* toncho11 made their first contribution in https://github.com/huggingface/peft/pull/257
* QingruZhang made their first contribution in https://github.com/huggingface/peft/pull/233
* rmill040 made their first contribution in https://github.com/huggingface/peft/pull/277
* bigeagle made their first contribution in https://github.com/huggingface/peft/pull/288
* JulesGM made their first contribution in https://github.com/huggingface/peft/pull/296
* cyberfox made their first contribution in https://github.com/huggingface/peft/pull/299
* BabyChouSr made their first contribution in https://github.com/huggingface/peft/pull/327
* yeoedward made their first contribution in https://github.com/huggingface/peft/pull/268
* mukobi made their first contribution in https://github.com/huggingface/peft/pull/368
* nzw0301 made their first contribution in https://github.com/huggingface/peft/pull/383
* Splo2t made their first contribution in https://github.com/huggingface/peft/pull/337
* yasyf made their first contribution in https://github.com/huggingface/peft/pull/347

Significant community contributions
The following contributors have made significant changes to the library over the last release:

QingruZhang
* The Implementation of AdaLoRA (ICLR 2023) in https://github.com/huggingface/peft/pull/233

yeoedward
* Implement adaption prompt from Llama-Adapter paper in https://github.com/huggingface/peft/pull/268

Splo2t
* Add nn.Embedding Support to Lora in https://github.com/huggingface/peft/pull/337

0.2.0

Whisper large tuning using PEFT LoRA+INT-8 on T4 GPU in Colab notebooks
We tested PEFT on [OpenAI](https://twitter.com/OpenAI)'s Whisper Large model and got:
i) 5x larger batch sizes
ii) Less than 8GB GPU VRAM
iii) Best part? Almost no degradation in WER 🤯

Without PEFT:
- OOM on a T4 GPU ❌
- 6GB checkpoint ❌
- 13.64 WER ✅

With PEFT:
- Train on a T4 GPU ✅
- 60MB checkpoint ✅
- 14.01 WER ✅

* adding whisper large peft+int8 training example by pacman100 in https://github.com/huggingface/peft/pull/95

`prepare_for_int8_training` utility
This utility preprocesses the base model so that it is ready for INT8 training; a usage sketch follows the PR links below.

* [`core`] add `prepare_model_for_training` by younesbelkada in https://github.com/huggingface/peft/pull/85
* [`core`] Some changes with `prepare_model_for_training` & few fixes by younesbelkada in https://github.com/huggingface/peft/pull/105
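
A minimal sketch combining this utility with the Whisper recipe above. The helper's exact name has shifted across releases (the `prepare_model_for_int8_training` spelling is assumed here, later superseded by `prepare_model_for_kbit_training`), and the LoRA hyperparameters are illustrative:

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# Load the base model in 8-bit via bitsandbytes (requires accelerate).
model = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", load_in_8bit=True, device_map="auto"
)

# Freezes the base weights, casts layer norms to fp32 for stability,
# and enables gradient checkpointing / input gradients.
model = prepare_model_for_int8_training(model)

lora_config = LoraConfig(
    r=32, lora_alpha=64, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # a tiny fraction of the ~1.5B parameters
```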

`disable_adapter()` context manager
Temporarily disables the adapter layers so that outputs come from the frozen base model.
An exciting application: in RLHF, a single model copy can serve for both policy model and reference model generations.

* add disable adapter context manager by pacman100 in https://github.com/huggingface/peft/pull/106
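
A short sketch of the context manager; gpt2 is just an illustrative base model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = get_peft_model(
    AutoModelForCausalLM.from_pretrained("gpt2"),
    LoraConfig(task_type="CAUSAL_LM"),
)
inputs = tokenizer("Hello", return_tensors="pt")

with torch.no_grad():
    # Adapter active: e.g. the RLHF policy model's output.
    policy_logits = model(**inputs).logits
    # Adapter bypassed: the frozen base model's output (e.g. the RLHF
    # reference model), served from the same copy of the weights.
    with model.disable_adapter():
        ref_logits = model(**inputs).logits
```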

What's Changed

0.2.0.dev0

* Update README.md by sayakpaul in https://github.com/huggingface/peft/pull/72
* Fixed typo in Readme by Muhtasham in https://github.com/huggingface/peft/pull/73
* Update README.md by pacman100 in https://github.com/huggingface/peft/pull/77
* convert prompt tuning vocab to fp32 by mayank31398 in https://github.com/huggingface/peft/pull/68
* [`core`] add `prepare_model_for_training` by younesbelkada in https://github.com/huggingface/peft/pull/85
* [`bnb`] add flan-t5 example by younesbelkada in https://github.com/huggingface/peft/pull/86
* making `prepare_model_for_training` flexible by pacman100 in https://github.com/huggingface/peft/pull/90
* adding whisper large peft+int8 training example by pacman100 in https://github.com/huggingface/peft/pull/95
* making `bnb` optional by pacman100 in https://github.com/huggingface/peft/pull/97
* add support for regex target modules in lora by pacman100 in https://github.com/huggingface/peft/pull/104
* [`core`] Some changes with `prepare_model_for_training` & few fixes by younesbelkada in https://github.com/huggingface/peft/pull/105
* Fix typo by mrm8488 in https://github.com/huggingface/peft/pull/107
* add disable adapter context manager by pacman100 in https://github.com/huggingface/peft/pull/106
* add `EleutherAI/gpt-neox-20b` to support matrix by pacman100 in https://github.com/huggingface/peft/pull/109
* fix merging lora weights for inference by pacman100 in https://github.com/huggingface/peft/pull/117
* [`core`] Fix autocast issue by younesbelkada in https://github.com/huggingface/peft/pull/121
* fixes `prepare_for_int8_training` by pacman100 in https://github.com/huggingface/peft/pull/127
* issue126: torch.load device issue. by gabinguo in https://github.com/huggingface/peft/pull/134
* fix: count params when zero init'd by zanussbaum in https://github.com/huggingface/peft/pull/140
* chore: update `pyproject.toml` by SauravMaheshkar in https://github.com/huggingface/peft/pull/125
* support option for encoder only prompts by mayank31398 in https://github.com/huggingface/peft/pull/150
* minor fixes to the examples by pacman100 in https://github.com/huggingface/peft/pull/149
* Add local saving for whisper largev2 example notebook by alvanli in https://github.com/huggingface/peft/pull/163
* fix count by dumpmemory in https://github.com/huggingface/peft/pull/162
* Add Prefix Tuning citation by zphang in https://github.com/huggingface/peft/pull/159
* lora fixes and adding 8bitMegredLinear lora by pacman100 in https://github.com/huggingface/peft/pull/157
* Update README.md by pacman100 in https://github.com/huggingface/peft/pull/164
* minor changes by pacman100 in https://github.com/huggingface/peft/pull/165

New Contributors
* Muhtasham made their first contribution in https://github.com/huggingface/peft/pull/73
* mayank31398 made their first contribution in https://github.com/huggingface/peft/pull/68
* mrm8488 made their first contribution in https://github.com/huggingface/peft/pull/107
* gabinguo made their first contribution in https://github.com/huggingface/peft/pull/134
* zanussbaum made their first contribution in https://github.com/huggingface/peft/pull/140
* SauravMaheshkar made their first contribution in https://github.com/huggingface/peft/pull/125
* alvanli made their first contribution in https://github.com/huggingface/peft/pull/163
* dumpmemory made their first contribution in https://github.com/huggingface/peft/pull/162
* zphang made their first contribution in https://github.com/huggingface/peft/pull/159

Significant community contributions

The following contributors have made significant changes to the library over the last release:

mayank31398
* Prompt Tuning method enhancements and fixes (68, 150)

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.1.0...v0.2.0

0.1.0

Initial release of 🤗 PEFT. Check out the main [README](https://github.com/huggingface/peft) to learn more about it!
