PEFT

Latest version: v0.15.1


0.6.3.dev0

* FIX: Adding 2 adapters when target_modules is a str fails by BenjaminBossan in https://github.com/huggingface/peft/pull/1111
* Prompt tuning: Allow to pass additional args to AutoTokenizer.from_pretrained by BenjaminBossan in https://github.com/huggingface/peft/pull/1053
* Fix: TorchTracemalloc ruins Windows performance by lukaskuhn-lku in https://github.com/huggingface/peft/pull/1126
* TST: Improve requires grad testing by BenjaminBossan in https://github.com/huggingface/peft/pull/1131
* FEAT: Make safe serialization the default one by younesbelkada in https://github.com/huggingface/peft/pull/1088
* FEAT: Merging only specified `adapter_names` when calling `merge` by younesbelkada in https://github.com/huggingface/peft/pull/1132
* Refactor base layer pattern by BenjaminBossan in https://github.com/huggingface/peft/pull/1106
* [`Tests`] Fix daily CI by younesbelkada in https://github.com/huggingface/peft/pull/1136
* [`core` / `LoRA`] Add `adapter_names` in bnb layers by younesbelkada in https://github.com/huggingface/peft/pull/1139
* [`Tests`] Do not stop tests if a job failed by younesbelkada in https://github.com/huggingface/peft/pull/1141
* CI Add Python 3.11 to test matrix by BenjaminBossan in https://github.com/huggingface/peft/pull/1143
* FIX: A few issues with AdaLora, extending GPU tests by BenjaminBossan in https://github.com/huggingface/peft/pull/1146
* Use `huggingface_hub.file_exists` instead of custom helper by Wauplin in https://github.com/huggingface/peft/pull/1145
* Delete IA3 adapter by alexrs in https://github.com/huggingface/peft/pull/1153
* [Docs fix] Relative path issue by mishig25 in https://github.com/huggingface/peft/pull/1157
* Dataset was loaded twice in 4-bit finetuning script by lukaskuhn-lku in https://github.com/huggingface/peft/pull/1164
* fix `add_weighted_adapter` method by pacman100 in https://github.com/huggingface/peft/pull/1169
* (minor) correct type annotation by vwxyzjn in https://github.com/huggingface/peft/pull/1166
* Update release checklist about release notes by BenjaminBossan in https://github.com/huggingface/peft/pull/1170
* [docs] Migrate doc files to Markdown by stevhliu in https://github.com/huggingface/peft/pull/1171
* Fix dockerfile build by younesbelkada in https://github.com/huggingface/peft/pull/1177
* FIX: Wrong use of base layer by BenjaminBossan in https://github.com/huggingface/peft/pull/1183
* [`Tests`] Migrate to AWS runners by younesbelkada in https://github.com/huggingface/peft/pull/1185
* Fix code example in quicktour.md by merveenoyan in https://github.com/huggingface/peft/pull/1181
* DOC Update a few places in the README by BenjaminBossan in https://github.com/huggingface/peft/pull/1152
* Fix issue where you cannot call PeftModel.from_pretrained with a private adapter by elyxlz in https://github.com/huggingface/peft/pull/1076
* Added lora support for phi by umarbutler in https://github.com/huggingface/peft/pull/1186
* add options to save or push model by callanwu in https://github.com/huggingface/peft/pull/1159
* ENH: Different initialization methods for LoRA by BenjaminBossan in https://github.com/huggingface/peft/pull/1189
* Training PEFT models with new tokens being added to the embedding layers and tokenizer by pacman100 in https://github.com/huggingface/peft/pull/1147
* LoftQ: Add LoftQ method integrated into LoRA. Add example code for LoftQ usage. by yxli2123 in https://github.com/huggingface/peft/pull/1150
* Parallel linear Lora by zhangsheng377 in https://github.com/huggingface/peft/pull/1092
* [Feature] Support OFT by okotaku in https://github.com/huggingface/peft/pull/1160
* Mixed adapter models by BenjaminBossan in https://github.com/huggingface/peft/pull/1163
* [DOCS] README.md by Akash190104 in https://github.com/huggingface/peft/pull/1054
* Fix parallel linear lora by zhangsheng377 in https://github.com/huggingface/peft/pull/1202
* ENH: Enable OFT adapter for mixed adapter models by BenjaminBossan in https://github.com/huggingface/peft/pull/1204
* DOC: Update & improve docstrings and type annotations for common methods and classes by BenjaminBossan in https://github.com/huggingface/peft/pull/1201
* remove HF tokens by yxli2123 in https://github.com/huggingface/peft/pull/1207
* [docs] Update index and quicktour by stevhliu in https://github.com/huggingface/peft/pull/1191
* [docs] API docs by stevhliu in https://github.com/huggingface/peft/pull/1196
* MNT: Delete the delete doc workflows by BenjaminBossan in https://github.com/huggingface/peft/pull/1213
* DOC: Initialization options for LoRA by BenjaminBossan in https://github.com/huggingface/peft/pull/1218
* Fix an issue with layer merging for LoHa and OFT by lukaskuhn-lku in https://github.com/huggingface/peft/pull/1210
* DOC: How to configure new transformers models by BenjaminBossan in https://github.com/huggingface/peft/pull/1195
* Raise error when `modules_to_save` is specified and multiple adapters are being unloaded by pacman100 in https://github.com/huggingface/peft/pull/1137
* TST: Add regression tests 2 by BenjaminBossan in https://github.com/huggingface/peft/pull/1115

0.6.2

This patch release refactors the adapter deletion API and fixes `ModulesToSaveWrapper` when using the low-level API.

Refactor adapter deletion
* Refactor adapter deletion by BenjaminBossan in https://github.com/huggingface/peft/pull/1105

Fix `ModulesToSaveWrapper` when using Low-level API
* Correctly deal with `ModulesToSaveWrapper` when using Low-level API by younesbelkada in https://github.com/huggingface/peft/pull/1112


0.6.1

This patch release fixes the compatibility issues with Adaptation Prompt that users faced with transformers 4.35.0. Moreover, it fixes an issue with token classification PEFT models when saving them with safetensors.

Adaptation prompt fixes

* FIX: Skip adaption prompt tests with new transformers versions by BenjaminBossan in https://github.com/huggingface/peft/pull/1077
* FIX: fix adaptation prompt CI and compatibility with latest transformers (4.35.0) by younesbelkada in https://github.com/huggingface/peft/pull/1084

Safetensors fixes

* [`core`] Fix safetensors serialization for shared tensors by younesbelkada in https://github.com/huggingface/peft/pull/1101

What's Changed
* After release: Bump version to 0.7.0.dev0 by BenjaminBossan in https://github.com/huggingface/peft/pull/1074
* Improve documentation for IA³ by SumanthRH in https://github.com/huggingface/peft/pull/984
* [`Docker`] Update Dockerfile to force-use transformers main by younesbelkada in https://github.com/huggingface/peft/pull/1085
* Update the release checklist by BenjaminBossan in https://github.com/huggingface/peft/pull/1075
* fix-gptq-training by SunMarc in https://github.com/huggingface/peft/pull/1086
* fix the failing CI tests by pacman100 in https://github.com/huggingface/peft/pull/1094
* Fix f-string in import_utils by KCFindstr in https://github.com/huggingface/peft/pull/1091
* Fix IA3 config for Falcon models by SumanthRH in https://github.com/huggingface/peft/pull/1007
* FIX: Failing nightly CI tests due to IA3 config by BenjaminBossan in https://github.com/huggingface/peft/pull/1100
* Change to 0.6.1.dev0 by younesbelkada in https://github.com/huggingface/peft/pull/1102

New Contributors
* KCFindstr made their first contribution in https://github.com/huggingface/peft/pull/1091

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.6.0...v0.6.1

0.6.0

New Contributors
* Psancs05 made their first contribution in https://github.com/huggingface/peft/pull/847
* metaprotium made their first contribution in https://github.com/huggingface/peft/pull/844
* jiqing-feng made their first contribution in https://github.com/huggingface/peft/pull/851
* houx15 made their first contribution in https://github.com/huggingface/peft/pull/888
* tmm1 made their first contribution in https://github.com/huggingface/peft/pull/874
* raghavanone made their first contribution in https://github.com/huggingface/peft/pull/891
* zspo made their first contribution in https://github.com/huggingface/peft/pull/898
* rohithkrn made their first contribution in https://github.com/huggingface/peft/pull/892
* Datta0 made their first contribution in https://github.com/huggingface/peft/pull/946
* kbulutozler made their first contribution in https://github.com/huggingface/peft/pull/982
* Pairshoe made their first contribution in https://github.com/huggingface/peft/pull/964
* ehcalabres made their first contribution in https://github.com/huggingface/peft/pull/1049

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.5.0...v0.6.0

0.6.0.dev0

* DOC: Add a contribution guide by BenjaminBossan in https://github.com/huggingface/peft/pull/848
* clarify the new model size by stas00 in https://github.com/huggingface/peft/pull/839
* DOC: Remove backlog section from README.md by BenjaminBossan in https://github.com/huggingface/peft/pull/853
* MNT: Refactor tuner forward methods for simplicity by BenjaminBossan in https://github.com/huggingface/peft/pull/833
* 🎉 Add Multitask Prompt Tuning by mayank31398 in https://github.com/huggingface/peft/pull/400
* Fix typos in ia3.py by metaprotium in https://github.com/huggingface/peft/pull/844
* Support merge lora module for 4bit and 8bit linear by jiqing-feng in https://github.com/huggingface/peft/pull/851
* Fix seq2seq prompt tuning (439) by glerzing in https://github.com/huggingface/peft/pull/809
* MNT: Move tuners to subpackages by BenjaminBossan in https://github.com/huggingface/peft/pull/807
* FIX: Error in forward of 4bit linear lora layer by BenjaminBossan in https://github.com/huggingface/peft/pull/878
* MNT: Run tests that were skipped previously by BenjaminBossan in https://github.com/huggingface/peft/pull/884
* FIX: PeftModel save_pretrained Doc (881) by houx15 in https://github.com/huggingface/peft/pull/888
* Upgrade docker actions to higher versions by younesbelkada in https://github.com/huggingface/peft/pull/889
* Fix error using deepspeed zero2 + load_in_8bit + lora by tmm1 in https://github.com/huggingface/peft/pull/874
* Fix doc for semantic_segmentation_lora by raghavanone in https://github.com/huggingface/peft/pull/891
* fix_gradient_accumulation_steps_in_examples by zspo in https://github.com/huggingface/peft/pull/898
* FIX: linting issue in example by BenjaminBossan in https://github.com/huggingface/peft/pull/908
* ENH Remove redundant initialization layer calls by BenjaminBossan in https://github.com/huggingface/peft/pull/887
* [docs] Remove duplicate section by stevhliu in https://github.com/huggingface/peft/pull/911
* support prefix tuning for starcoder models by pacman100 in https://github.com/huggingface/peft/pull/913
* Merge lora module to 8bit model by jiqing-feng in https://github.com/huggingface/peft/pull/875
* DOC: Section on common issues encountered with PEFT by BenjaminBossan in https://github.com/huggingface/peft/pull/909
* Enh speed up init emb conv2d by BenjaminBossan in https://github.com/huggingface/peft/pull/915
* Make base_model.peft_config single source of truth by BenjaminBossan in https://github.com/huggingface/peft/pull/921
* Update accelerate dependency version by rohithkrn in https://github.com/huggingface/peft/pull/892
* fix lora layer init by SunMarc in https://github.com/huggingface/peft/pull/928
* Fixed LoRA conversion for kohya_ss by kovalexal in https://github.com/huggingface/peft/pull/916
* [`CI`] Pin diffusers by younesbelkada in https://github.com/huggingface/peft/pull/936
* [`LoRA`] Add scale_layer / unscale_layer by younesbelkada in https://github.com/huggingface/peft/pull/935
* TST: Add GH action to run unit tests with torch.compile by BenjaminBossan in https://github.com/huggingface/peft/pull/943
* FIX: torch compile gh action installs pytest by BenjaminBossan in https://github.com/huggingface/peft/pull/944
* Fix NotImplementedError for no bias. by Datta0 in https://github.com/huggingface/peft/pull/946
* TST: Fix some tests that would fail with torch.compile by BenjaminBossan in https://github.com/huggingface/peft/pull/949
* ENH Allow compile GH action to run on torch nightly by BenjaminBossan in https://github.com/huggingface/peft/pull/952
* Install correct PyTorch nightly in GH action by BenjaminBossan in https://github.com/huggingface/peft/pull/954
* support multiple ranks and alphas for LoRA by pacman100 in https://github.com/huggingface/peft/pull/873
* feat: add type hints by SauravMaheshkar in https://github.com/huggingface/peft/pull/858
* FIX: setting requires_grad on adapter layers by BenjaminBossan in https://github.com/huggingface/peft/pull/905
* [`tests`] add transformers & diffusers integration tests by younesbelkada in https://github.com/huggingface/peft/pull/962
* Fix integrations_tests.yml by younesbelkada in https://github.com/huggingface/peft/pull/965
* Add 4-bit support to IA3 - Outperforms QLoRA in both speed and memory consumption by His-Wardship in https://github.com/huggingface/peft/pull/864
* Update integrations_tests.yml by younesbelkada in https://github.com/huggingface/peft/pull/966
* add the lora target modules for Mistral Models by pacman100 in https://github.com/huggingface/peft/pull/974
* TST: Fix broken save_pretrained tests by BenjaminBossan in https://github.com/huggingface/peft/pull/969
* [tests] add multiple active adapters tests by pacman100 in https://github.com/huggingface/peft/pull/961
* Fix missing tokenizer attribute in test by BenjaminBossan in https://github.com/huggingface/peft/pull/977
* Add implementation of LyCORIS LoHa (FedPara-like adapter) for SD&SDXL models by kovalexal in https://github.com/huggingface/peft/pull/956
* update BibTeX by pacman100 in https://github.com/huggingface/peft/pull/989
* FIX: issues with (un)merging multiple LoRA and IA³ adapters by BenjaminBossan in https://github.com/huggingface/peft/pull/976
* add lora target modules for stablelm models by kbulutozler in https://github.com/huggingface/peft/pull/982
* Correct minor errors in example notebooks for causal language modelling by SumanthRH in https://github.com/huggingface/peft/pull/926
* Fix typo in custom_models.mdx by Pairshoe in https://github.com/huggingface/peft/pull/964
* Add base model metadata to model card by BenjaminBossan in https://github.com/huggingface/peft/pull/975
* MNT Make .merged a property by BenjaminBossan in https://github.com/huggingface/peft/pull/979
* Fix lora creation by pacman100 in https://github.com/huggingface/peft/pull/993
* TST: Comment out flaky LoHA test by BenjaminBossan in https://github.com/huggingface/peft/pull/1002
* ENH Support Conv2d layers for IA³ by BenjaminBossan in https://github.com/huggingface/peft/pull/972
* Fix word_embeddings match for deepspeed wrapped model by mayank31398 in https://github.com/huggingface/peft/pull/1000
* FEAT: Add `safe_merge` option in `merge` by younesbelkada in https://github.com/huggingface/peft/pull/1001
* [`core` / `LoRA`] Add `safe_merge` to bnb layers by younesbelkada in https://github.com/huggingface/peft/pull/1009
* ENH: Refactor LoRA bnb layers for faster initialization by BenjaminBossan in https://github.com/huggingface/peft/pull/994
* FIX Don't assume model_config contains the key model_type by BenjaminBossan in https://github.com/huggingface/peft/pull/1012
* FIX stale.py uses timezone-aware datetime by BenjaminBossan in https://github.com/huggingface/peft/pull/1016
* FEAT: Add fp16 + cpu merge support by younesbelkada in https://github.com/huggingface/peft/pull/1017
* fix lora scaling and unscaling by pacman100 in https://github.com/huggingface/peft/pull/1027
* [`LoRA`] Revert original behavior for scale / unscale by younesbelkada in https://github.com/huggingface/peft/pull/1029
* [`LoRA`] Raise error when adapter name not found in `set_scale` by younesbelkada in https://github.com/huggingface/peft/pull/1034
* Fix target_modules type in config.from_pretrained by BenjaminBossan in https://github.com/huggingface/peft/pull/1046
* docs(README): bit misspell current path link StackLLaMa by guspan-tanadi in https://github.com/huggingface/peft/pull/1047
* Fixed wrong construction of LoHa weights, updated adapters conversion script by kovalexal in https://github.com/huggingface/peft/pull/1021
* Fix P-tuning for sequence classification docs by ehcalabres in https://github.com/huggingface/peft/pull/1049
* FIX: Setting active adapter correctly by BenjaminBossan in https://github.com/huggingface/peft/pull/1051
* Fix Conv1D merge error for IA3 by SumanthRH in https://github.com/huggingface/peft/pull/1014
* Add implementation of LyCORIS LoKr (KronA-like adapter) for SD&SDXL models by kovalexal in https://github.com/huggingface/peft/pull/978
* [`core`] Fix `use_reentrant` issues by younesbelkada in https://github.com/huggingface/peft/pull/1036
* [`tests`] Update Dockerfile to use cuda 12.2 by younesbelkada in https://github.com/huggingface/peft/pull/1050
* Add testing for regex matching and other custom kwargs by SumanthRH in https://github.com/huggingface/peft/pull/1031
* Fix Slack bot not displaying error messages by younesbelkada in https://github.com/huggingface/peft/pull/1068
* Fix slow tests not running by younesbelkada in https://github.com/huggingface/peft/pull/1071

0.5.0

GPTQ Integration
You can now fine-tune GPTQ-quantized models using PEFT. Here are some examples of how to use PEFT with a GPTQ model: a [Colab notebook](https://colab.research.google.com/drive/1_TIrmuKOFhuRRiTWN94iLKUFu6ZX4ceb?usp=sharing) and a [finetuning script](https://gist.github.com/SunMarc/dcdb499ac16d355a8f265aa497645996).

* GPTQ Integration by SunMarc in https://github.com/huggingface/peft/pull/771

Low-level API
This enables users and developers to use PEFT as a utility library, at least for injectable adapters (LoRA, IA3, AdaLoRA). It exposes an API to inject adapter layers into a model in place.

* [`core`] PEFT refactor + introducing inject_adapter_in_model public method by younesbelkada in https://github.com/huggingface/peft/pull/749
* [`Low-level-API`] Add docs about LLAPI by younesbelkada in https://github.com/huggingface/peft/pull/836

Support for XPU and NPU devices

PEFT adapters can now be loaded and fine-tuned on additional device types.

* Support XPU adapter loading by abhilash1910 in https://github.com/huggingface/peft/pull/737
* Support Ascend NPU adapter loading by statelesshz in https://github.com/huggingface/peft/pull/772

Mix-and-match LoRAs

Stable support and new ways of merging multiple LoRAs. Three combination methods are currently supported: `linear`, `svd`, and `cat`.

* Added additional parameters to mixing multiple LoRAs through SVD, added ability to mix LoRAs through concatenation by kovalexal in https://github.com/huggingface/peft/pull/817

What's Changed
