PEFT

Latest version: v0.13.2

0.13.2

This patch release contains a small bug fix for an issue that prevented some LoRA checkpoints from being loaded correctly (mostly affecting stable diffusion checkpoints not trained with PEFT when loaded in diffusers, 2144).

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.13.1...v0.13.2

0.13.1

This patch release contains a small bug fix for the `low_cpu_mem_usage=True` option (2113).

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.13.0...v0.13.1

0.13.0

![peft-v0 13 0](https://github.com/user-attachments/assets/0423db36-73ca-4eb4-af12-c21610a1b35c)

Highlights

New methods

LoRA+

kallewoof added [LoRA+](https://arxiv.org/abs/2402.12354) to PEFT (#1915). This function lets you [initialize an optimizer](https://huggingface.co/docs/peft/main/en/developer_guides/lora#lora-optimized-lora) with settings that are better suited for training a LoRA adapter.
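
As an illustration, here is a minimal sketch of the optimizer helper described above; the function name and keyword arguments follow the linked docs, and `peft_model` and the values are placeholders.

```python
import torch
from peft.optimizers import create_loraplus_optimizer

# assumes `peft_model` is an existing PEFT model with a LoRA adapter
optimizer = create_loraplus_optimizer(
    model=peft_model,
    optimizer_cls=torch.optim.AdamW,  # base optimizer class
    lr=5e-5,                          # learning rate for the LoRA A matrices
    loraplus_lr_ratio=16,             # B matrices are trained with lr * this ratio
)
```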

VB-LoRA

leo-yangli added a new method to PEFT called [VB-LoRA](https://arxiv.org/abs/2405.15179) (#2039). The idea is to compose the LoRA layers from a single vector bank (hence "VB") that is shared among all layers. This makes VB-LoRA extremely parameter efficient and the checkpoints especially small (comparable to the VeRA method), while still promising good fine-tuning performance. Check the [VB-LoRA docs](https://huggingface.co/docs/peft/main/en/package_reference/vblora) and [example](https://github.com/huggingface/peft/blob/main/examples/sequence_classification/VBLoRA.ipynb).
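
A hedged sketch of how VB-LoRA can be configured for a small causal LM; the parameter names follow the VB-LoRA docs, and the model name and values are placeholders.

```python
from transformers import AutoModelForCausalLM
from peft import VBLoRAConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = VBLoRAConfig(
    r=4,                  # rank of the per-layer composition
    num_vectors=256,      # size of the shared vector bank
    vector_length=256,    # length of each vector in the bank
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```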

Enhancements

New Hugging Face team member ariG23498 added the helper function [`rescale_adapter_scale`](https://huggingface.co/docs/peft/main/en/package_reference/helpers#peft.helpers.rescale_adapter_scale) to PEFT (1951). Use this context manager to temporarily increase or decrease the scaling of the LoRA adapter of a model. It also works for PEFT adapters loaded directly into a transformers or diffusers model.
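
A minimal sketch of the new helper; it assumes `model` already has a LoRA adapter loaded and `inputs` is a tokenized batch.

```python
from peft.helpers import rescale_adapter_scale

with rescale_adapter_scale(model, multiplier=0.5):
    # inside the context, the LoRA scaling is temporarily halved
    outputs = model.generate(**inputs)
# on exit, the original scaling is restored
```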

ariG23498 also added [DoRA](https://arxiv.org/abs/2402.09353) support for embedding layers (#2006). So if you're using the `use_dora=True` option in the `LoraConfig`, you can now also target embedding layers.
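
A hedged sketch of DoRA with an embedding layer as an additional target; the module name `embed_tokens` is model-specific and only used for illustration.

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,
    use_dora=True,  # enable DoRA, now also supported for embedding layers
    target_modules=["q_proj", "v_proj", "embed_tokens"],
)
```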

For some time now, we have supported [inference with batches that use different adapters](https://huggingface.co/docs/peft/v0.12.0/en/developer_guides/lora#inference-with-different-lora-adapters-in-the-same-batch) for different samples, e.g. samples 1-5 use "adapter1" and samples 6-10 use "adapter2". However, this only worked for LoRA layers so far. saeid93 extended it to also work with layers targeted by `modules_to_save` (1990).
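
A hedged sketch of mixed-adapter batch inference; it assumes `peft_model` has adapters "adapter1" and "adapter2" loaded and `inputs` is a tokenized batch of ten samples.

```python
adapter_names = ["adapter1"] * 5 + ["adapter2"] * 5   # one entry per sample
outputs = peft_model.generate(**inputs, adapter_names=adapter_names)
```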

When loading a PEFT adapter, you now have the option to pass `low_cpu_mem_usage=True` (1961). This initializes the adapter with empty weights (on the "meta" device) before loading the actual weights, instead of initializing on CPU or GPU, which can speed up loading PEFT adapters. This option is especially useful if you have many adapters to load at the same time or if the adapters are very big. Please let us know if you encounter issues with this option, as we may make it the default in the future.
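
A hedged sketch of the new option; it assumes `base_model` is the matching transformers base model and the paths are placeholders.

```python
from peft import PeftModel

peft_model = PeftModel.from_pretrained(
    base_model,
    "path/to/adapter",
    low_cpu_mem_usage=True,   # create adapter weights on the meta device first
)
# the same flag can be passed when loading additional adapters
peft_model.load_adapter("path/to/other-adapter", adapter_name="other",
                        low_cpu_mem_usage=True)
```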

Changes

Safe loading of PyTorch weights

Unless indicated otherwise, PEFT adapters are saved and loaded using the secure `safetensors` format. However, we also support the [PyTorch format](https://pytorch.org/docs/stable/generated/torch.load.html) for checkpoints, which relies on Python's inherently insecure pickle protocol. In the future, PyTorch will be stricter when loading these files to improve security by making `weights_only=True` the default. This is generally recommended and should not cause any trouble with PEFT checkpoints, which is why PEFT already enables it by default with this release. Please open an issue if this causes trouble.
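
For context, this is the underlying PyTorch option the paragraph refers to; the checkpoint path is a placeholder.

```python
import torch

# weights_only=True restricts unpickling to tensors and other safe types;
# PEFT now passes this when loading PyTorch-format checkpoints
state_dict = torch.load("adapter_model.bin", weights_only=True)
```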

What's Changed
* Bump version to 0.12.1.dev0 by BenjaminBossan in https://github.com/huggingface/peft/pull/1950
* CI Fix Windows permission error on merge test by BenjaminBossan in https://github.com/huggingface/peft/pull/1952
* Check if past_key_values is provided when using prefix_tuning in peft_model by Nidhogg-lyz in https://github.com/huggingface/peft/pull/1942
* Add lora+ implementation by kallewoof in https://github.com/huggingface/peft/pull/1915
* FIX: New bloom changes breaking prompt learning by BenjaminBossan in https://github.com/huggingface/peft/pull/1969
* ENH Update VeRA preconfigured models by BenjaminBossan in https://github.com/huggingface/peft/pull/1941
* fix: lora+: include lr in optimizer kwargs by kallewoof in https://github.com/huggingface/peft/pull/1973
* FIX active_adapters for transformers models by BenjaminBossan in https://github.com/huggingface/peft/pull/1975
* FIX Loading adapter honors offline mode by BenjaminBossan in https://github.com/huggingface/peft/pull/1976
* chore: Update CI configuration for workflows by XciD in https://github.com/huggingface/peft/pull/1985
* Cast to fp32 if using bf16 weights on cpu during `merge_and_unload` by snarayan21 in https://github.com/huggingface/peft/pull/1978
* AdaLora: Trigger warning when user uses 'r' inplace of 'init_r' by bhargavyagnik in https://github.com/huggingface/peft/pull/1981
* [Add] scaling LoRA adapter weights with a context manager by ariG23498 in https://github.com/huggingface/peft/pull/1951
* DOC Small fixes for HQQ and section title by BenjaminBossan in https://github.com/huggingface/peft/pull/1986
* Add docs and examples for X-LoRA by EricLBuehler in https://github.com/huggingface/peft/pull/1970
* fix: fix docker build gpus by XciD in https://github.com/huggingface/peft/pull/1987
* FIX: Adjust transformers version check for bloom by BenjaminBossan in https://github.com/huggingface/peft/pull/1992
* [Hotfix] Fix BOFT mixed precision by Edenzzzz in https://github.com/huggingface/peft/pull/1925
* [Suggestions] Updates suggested for `helper.rescale_adapter_scale` by ariG23498 in https://github.com/huggingface/peft/pull/1989
* MAINT: Default to loading weights only for torch.load by BenjaminBossan in https://github.com/huggingface/peft/pull/1993
* BOFT bug fix when saving by Zeju1997 in https://github.com/huggingface/peft/pull/1994
* FIX Import error in BOFT half precision test by BenjaminBossan in https://github.com/huggingface/peft/pull/1995
* Update lora.md (typos) by nir-sh-automat-it in https://github.com/huggingface/peft/pull/2003
* TST Add LNTuningConfig and LoKrConfig to tests by BenjaminBossan in https://github.com/huggingface/peft/pull/2005
* ENH: Warn when a user provided model name in the config renamed by BenjaminBossan in https://github.com/huggingface/peft/pull/2004
* FIX CI Correctly report outcome of bnb import test by BenjaminBossan in https://github.com/huggingface/peft/pull/2007
* Update docs for X-LoRA and some bugfixes by EricLBuehler in https://github.com/huggingface/peft/pull/2002
* TST: Potentially Skip 8bit bnb regression test if compute capability is too low by BenjaminBossan in https://github.com/huggingface/peft/pull/1998
* CI Activate single core multi backend bnb tests by BenjaminBossan in https://github.com/huggingface/peft/pull/2008
* Fix usage of deprecated parameters/functions in X-LoRA by EricLBuehler in https://github.com/huggingface/peft/pull/2010
* [tests] enable `test_vera_dtypes` on XPU by faaany in https://github.com/huggingface/peft/pull/2017
* CI Remove regression tests from BNB CI by BenjaminBossan in https://github.com/huggingface/peft/pull/2024
* [tests] enable regression tests on XPU by faaany in https://github.com/huggingface/peft/pull/2019
* ENH: Better error msg for replace_lora_weights_loftq when using a local model. by BenjaminBossan in https://github.com/huggingface/peft/pull/2022
* [tests] make cuda-only cases in `TestModelAndLayerStatus` device-agnostic by faaany in https://github.com/huggingface/peft/pull/2026
* [tests] enable `test_mixed_adapter_batches_lora_opt_timing` on XPU by faaany in https://github.com/huggingface/peft/pull/2021
* MAINT: Update ruff version to ~0.6.1 by BenjaminBossan in https://github.com/huggingface/peft/pull/1965
* ENH Raise error when applying modules_to_save on tuner layer by BenjaminBossan in https://github.com/huggingface/peft/pull/2028
* FIX: Don't target the classification head when using target_modules="all-linear" by BenjaminBossan in https://github.com/huggingface/peft/pull/2033
* [tests] enable cuda-only tests in `test_common_gpu.py` to work on XPU by faaany in https://github.com/huggingface/peft/pull/2031
* [Add] DoRA Embedding by ariG23498 in https://github.com/huggingface/peft/pull/2006
* [tests] enable `test_gpu_examples.py` on XPU by faaany in https://github.com/huggingface/peft/pull/2036
* Bug: set correct pre-commit-hooks version by ltoniazzi in https://github.com/huggingface/peft/pull/2034
* Warn if using tied target module with `tie_word_embeddings` by ltoniazzi in https://github.com/huggingface/peft/pull/2025
* ENH: Faster adapter loading if there are a lot of target modules by BenjaminBossan in https://github.com/huggingface/peft/pull/2045
* FIX: Error with OLoRA init when using bnb by BenjaminBossan in https://github.com/huggingface/peft/pull/2011
* FIX: Small numerical discrepancy for p-tuning after loading the model by BenjaminBossan in https://github.com/huggingface/peft/pull/2047
* Add VB-LoRA by leo-yangli in https://github.com/huggingface/peft/pull/2039
* Fixing scalings logging test by EricLBuehler in https://github.com/huggingface/peft/pull/2042
* TST: Fewer inference steps for stable diffusion tests by BenjaminBossan in https://github.com/huggingface/peft/pull/2051
* TST Speed up vision model tests by BenjaminBossan in https://github.com/huggingface/peft/pull/2058
* TST: Make X-LoRA tests faster by BenjaminBossan in https://github.com/huggingface/peft/pull/2059
* Update permissions for githubtoken stale.yml by glegendre01 in https://github.com/huggingface/peft/pull/2061
* MAINT: Give stale bot permissions for PRs too by BenjaminBossan in https://github.com/huggingface/peft/pull/2064
* avoid saving boft_P in adapter model by sywangyi in https://github.com/huggingface/peft/pull/2050
* fix arguments for PiSSA preprocess by keakon in https://github.com/huggingface/peft/pull/2053
* Apply deprecated `evaluation_strategy` by muellerzr in https://github.com/huggingface/peft/pull/1664
* fixing multiple LoRA in the same batch or vit by saeid93 in https://github.com/huggingface/peft/pull/1990
* FIX: Bug that prevents BOFT from loading multiple adapters by BenjaminBossan in https://github.com/huggingface/peft/pull/2068
* [tests] skip some tests for XPU devices by faaany in https://github.com/huggingface/peft/pull/2074
* ENH: PiSSA/OLoRA: Preserve original config on save by BenjaminBossan in https://github.com/huggingface/peft/pull/2077
* Expose bias to to ModulesToSaveWrapper by dengdifan in https://github.com/huggingface/peft/pull/2081
* Update setup.py to update contact info by sayakpaul in https://github.com/huggingface/peft/pull/2086
* ENH: Allow empty initialization of adapter weight by BenjaminBossan in https://github.com/huggingface/peft/pull/1961
* ENH: Add default target layers for gemma2 architecture by BenjaminBossan in https://github.com/huggingface/peft/pull/2078
* FIX: Bug in find_minimal_target_modules by BenjaminBossan in https://github.com/huggingface/peft/pull/2083
* Fix func docstring by kwonmha in https://github.com/huggingface/peft/pull/2087
* ENH: Better DoRA check in mixed adapter batch inference by BenjaminBossan in https://github.com/huggingface/peft/pull/2089

New Contributors
* Nidhogg-lyz made their first contribution in https://github.com/huggingface/peft/pull/1942
* XciD made their first contribution in https://github.com/huggingface/peft/pull/1985
* bhargavyagnik made their first contribution in https://github.com/huggingface/peft/pull/1981
* ariG23498 made their first contribution in https://github.com/huggingface/peft/pull/1951
* Edenzzzz made their first contribution in https://github.com/huggingface/peft/pull/1925
* Zeju1997 made their first contribution in https://github.com/huggingface/peft/pull/1994
* nir-sh-automat-it made their first contribution in https://github.com/huggingface/peft/pull/2003
* faaany made their first contribution in https://github.com/huggingface/peft/pull/2017
* ltoniazzi made their first contribution in https://github.com/huggingface/peft/pull/2034
* leo-yangli made their first contribution in https://github.com/huggingface/peft/pull/2039
* glegendre01 made their first contribution in https://github.com/huggingface/peft/pull/2061
* keakon made their first contribution in https://github.com/huggingface/peft/pull/2053
* muellerzr made their first contribution in https://github.com/huggingface/peft/pull/1664
* saeid93 made their first contribution in https://github.com/huggingface/peft/pull/1990
* dengdifan made their first contribution in https://github.com/huggingface/peft/pull/2081
* kwonmha made their first contribution in https://github.com/huggingface/peft/pull/2087

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.12.0...v0.13.0

0.12.0

New Contributors

* mnoukhov made their first contribution in https://github.com/huggingface/peft/pull/1658
* elementary-particle made their first contribution in https://github.com/huggingface/peft/pull/1668
* sparsh2 made their first contribution in https://github.com/huggingface/peft/pull/1833
* McPatate made their first contribution in https://github.com/huggingface/peft/pull/1841
* dkopi made their first contribution in https://github.com/huggingface/peft/pull/1817
* namanvats made their first contribution in https://github.com/huggingface/peft/pull/1850
* tokenizer-decode made their first contribution in https://github.com/huggingface/peft/pull/1828
* jtatman made their first contribution in https://github.com/huggingface/peft/pull/1861
* cep-ter made their first contribution in https://github.com/huggingface/peft/pull/1862
* delock made their first contribution in https://github.com/huggingface/peft/pull/1888
* PhyscalX made their first contribution in https://github.com/huggingface/peft/pull/1879
* shirinyamani made their first contribution in https://github.com/huggingface/peft/pull/1885
* kallewoof made their first contribution in https://github.com/huggingface/peft/pull/1891
* ret-1 made their first contribution in https://github.com/huggingface/peft/pull/1892
* stillmatic made their first contribution in https://github.com/huggingface/peft/pull/1899
* rahulbshrestha made their first contribution in https://github.com/huggingface/peft/pull/1873
* Phoveran made their first contribution in https://github.com/huggingface/peft/pull/1838
* sujeek made their first contribution in https://github.com/huggingface/peft/pull/1926
* anch0vy made their first contribution in https://github.com/huggingface/peft/pull/1928
* DaShenZi721 made their first contribution in https://github.com/huggingface/peft/pull/1864
* ttw1018 made their first contribution in https://github.com/huggingface/peft/pull/1901
* snarayan21 made their first contribution in https://github.com/huggingface/peft/pull/1944

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.11.1...v0.12.0

0.11.1

Fix a bug that could lead to C++ compilation errors after importing PEFT (1738, 1739).

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.11.0...v0.11.1

0.11.0

Highlights

![peft-v0 11 0](https://github.com/huggingface/peft/assets/6229650/ca652d10-c389-4163-ab62-1e0c821c9c5a)

New methods

BOFT

Thanks to yfeng95, Zeju1997, and YuliangXiu, PEFT was extended with BOFT: Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization (1326, [BOFT paper link](https://huggingface.co/papers/2311.06243)). In PEFT v0.7.0, we already added [OFT](https://huggingface.co/papers/2306.07280), but BOFT is even more parameter efficient. Check out the included [BOFT controlnet](https://github.com/huggingface/peft/tree/main/examples/boft_controlnet) and [BOFT dreambooth](https://github.com/huggingface/peft/tree/main/examples/boft_dreambooth) examples.
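A hedged sketch of applying BOFT to a small causal LM; the parameter names follow the BOFT config added in this release, and the model name and values are placeholders.

```python
from transformers import AutoModelForCausalLM
from peft import BOFTConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = BOFTConfig(
    boft_block_size=4,           # size of the orthogonal butterfly blocks
    boft_n_butterfly_factor=2,   # number of butterfly factors
    target_modules=["q_proj", "v_proj"],
    boft_dropout=0.1,
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```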


VeRA

If the parameter reduction of LoRA is not enough for your use case, you should take a close look at VeRA: Vector-based Random Matrix Adaptation (1564, [VeRA paper link](https://huggingface.co/papers/2310.11454)). This method resembles LoRA but adds two learnable scaling vectors to the two LoRA weight matrices, while the LoRA weight matrices themselves are shared across all layers, considerably reducing the number of trainable parameters.

The bulk of this PR was implemented by contributor vvvm23 with the help of dkopi.
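
A hedged sketch of a VeRA configuration; the model name and values are placeholders.

```python
from transformers import AutoModelForCausalLM
from peft import VeraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = VeraConfig(r=256, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # only the scaling vectors are trainable
```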

PiSSA

PiSSA, Principal Singular values and Singular vectors Adaptation, is a new initialization method for LoRA, which was added by fxmeng (1626, [PiSSA paper link](https://huggingface.co/papers/2404.02948)). The improved initialization promises to speed up convergence and improve the final performance of LoRA models. When using models quantized with bitsandbytes, PiSSA initialization should reduce the quantization error, similar to LoftQ.
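
A hedged sketch of selecting PiSSA initialization through `LoraConfig`; the rank and target modules are placeholders.

```python
from peft import LoraConfig

config = LoraConfig(
    r=16,
    init_lora_weights="pissa",   # or e.g. "pissa_niter_4" for a faster, approximate SVD
    target_modules=["q_proj", "v_proj"],
)
```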

Quantization

HQQ

Thanks to fahadh4ilyas, PEFT LoRA linear layers now support Half-Quadratic Quantization, HQQ (1618, [HQQ repo](https://github.com/mobiusml/hqq/)). HQQ is fast and efficient (down to 2 bits), while not requiring calibration data.
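
A hedged sketch of loading an HQQ-quantized model through transformers and adding a LoRA adapter on top; `HqqConfig` and its arguments belong to transformers, not PEFT, and the model name and values are placeholders.

```python
from transformers import AutoModelForCausalLM, HqqConfig
from peft import LoraConfig, get_peft_model

quant_config = HqqConfig(nbits=4, group_size=64)
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=quant_config,
    device_map="auto",
)
model = get_peft_model(base_model, LoraConfig(target_modules=["q_proj", "v_proj"]))
```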

EETQ

Another new quantization method supported in PEFT is Easy & Efficient Quantization for Transformers, EETQ (1675, [EETQ repo](https://github.com/NetEase-FuXi/EETQ)). This 8-bit quantization method works for LoRA linear layers and should be faster than bitsandbytes.
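
A hedged sketch: the same pattern as the HQQ example above works with EETQ, with `EetqConfig` coming from transformers.

```python
from transformers import EetqConfig

quant_config = EetqConfig("int8")   # 8-bit weight quantization
# load the base model with quantization_config=quant_config, then apply
# get_peft_model(...) as in the HQQ example above
```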

Show adapter layer and model status

We added a feature to show the adapter layer and model status of PEFT models in 1663. With the newly added methods, you can easily check which adapters exist on your model, whether gradients are active, whether the adapters are enabled, and which ones are active or merged. You will also be informed if irregularities have been detected.

To use this new feature, call `model.get_layer_status()` for layer-level information, and `model.get_model_status()` for model-level information. For more details, check out our [docs on layer and model status](https://huggingface.co/docs/peft/main/en/developer_guides/troubleshooting#check-layer-and-model-status).
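
A hedged sketch of the two status helpers; it assumes `model` is a PEFT model.

```python
layer_status = model.get_layer_status()   # one entry per adapter layer
model_status = model.get_model_status()   # aggregated, model-level summary

print(layer_status[0])   # e.g. which adapters exist on this layer and whether they are active
print(model_status)      # also reports irregularities, e.g. partially merged adapters
```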

Changes

Edge case of how we deal with `modules_to_save`

We had the issue that when using classes such as PeftModelForSequenceClassification, we implicitly added the classifier layers to `model.modules_to_save`. However, this would only add a new `ModulesToSaveWrapper` instance for the first adapter being initialized. When initializing a second adapter via `model.add_adapter`, this information was ignored. Now, `peft_config.modules_to_save` is updated explicitly to add the classifier layers (1615). This is a departure from how this worked previously, but it reflects the intended behavior better.

Furthermore, when merging together multiple LoRA adapters using `model.add_weighted_adapter`, if these adapters had `modules_to_save`, the original parameters of these modules would be used. This is unexpected and will most likely result in bad outputs. As there is no clear way to merge these modules, we decided to raise an error in this case (1615).

What's Changed
* Bump version to 0.10.1.dev0 by BenjaminBossan in https://github.com/huggingface/peft/pull/1578
* FIX Minor issues in docs, re-raising exception by BenjaminBossan in https://github.com/huggingface/peft/pull/1581
* FIX / Docs: Fix doc link for layer replication by younesbelkada in https://github.com/huggingface/peft/pull/1582
* DOC: Short section on using transformers pipeline by BenjaminBossan in https://github.com/huggingface/peft/pull/1587
* Extend PeftModel.from_pretrained() to models with disk-offloaded modules by blbadger in https://github.com/huggingface/peft/pull/1431
* [feat] Add `lru_cache` to `import_utils` calls that did not previously have it by tisles in https://github.com/huggingface/peft/pull/1584
* fix deepspeed zero3+prompt tuning bug. word_embeddings.weight shape i… by sywangyi in https://github.com/huggingface/peft/pull/1591
* MNT: Update GH bug report template by BenjaminBossan in https://github.com/huggingface/peft/pull/1600
* fix the torch_dtype and quant_storage_dtype by pacman100 in https://github.com/huggingface/peft/pull/1614
* FIX In the image classification example, Change the model to the LoRA… by changhwa in https://github.com/huggingface/peft/pull/1624
* Remove duplicated import by nzw0301 in https://github.com/huggingface/peft/pull/1622
* FIX: bnb config wrong argument names by BenjaminBossan in https://github.com/huggingface/peft/pull/1603
* FIX Make DoRA work with Conv1D layers by BenjaminBossan in https://github.com/huggingface/peft/pull/1588
* FIX: Send results to correct channel by younesbelkada in https://github.com/huggingface/peft/pull/1628
* FEAT: Allow ignoring mismatched sizes when loading by BenjaminBossan in https://github.com/huggingface/peft/pull/1620
* itemsize is torch>=2.1, use element_size() by winglian in https://github.com/huggingface/peft/pull/1630
* FIX Multiple adapters and modules_to_save by BenjaminBossan in https://github.com/huggingface/peft/pull/1615
* FIX Correctly call element_size by BenjaminBossan in https://github.com/huggingface/peft/pull/1635
* fix: allow load_adapter to use different device by yhZhai in https://github.com/huggingface/peft/pull/1631
* Adalora deepspeed by sywangyi in https://github.com/huggingface/peft/pull/1625
* Adding BOFT: Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization by yfeng95 in https://github.com/huggingface/peft/pull/1326
* Don't use deprecated `Repository` anymore by Wauplin in https://github.com/huggingface/peft/pull/1641
* FIX Errors in the transformers integration docs by BenjaminBossan in https://github.com/huggingface/peft/pull/1629
* update figure assets of BOFT by YuliangXiu in https://github.com/huggingface/peft/pull/1642
* print_trainable_parameters - format `%` to be sensible by stas00 in https://github.com/huggingface/peft/pull/1648
* FIX: Bug with handling of active adapters by BenjaminBossan in https://github.com/huggingface/peft/pull/1659
* Remove `dreambooth` Git link by charliermarsh in https://github.com/huggingface/peft/pull/1660
* add safetensor load in multitask_prompt_tuning by sywangyi in https://github.com/huggingface/peft/pull/1662
* Adds Vera (Vector Based Random Matrix Adaption) 2 by BenjaminBossan in https://github.com/huggingface/peft/pull/1564
* Update deepspeed.md by sanghyuk-choi in https://github.com/huggingface/peft/pull/1679
* ENH: Add multi-backend tests for bnb by younesbelkada in https://github.com/huggingface/peft/pull/1667
* FIX / Workflow: Fix Mac-OS CI issues by younesbelkada in https://github.com/huggingface/peft/pull/1680
* FIX Use trl version of tiny random llama by BenjaminBossan in https://github.com/huggingface/peft/pull/1681
* FIX: Don't eagerly import bnb for LoftQ by BenjaminBossan in https://github.com/huggingface/peft/pull/1683
* FEAT: Add EETQ support in PEFT by younesbelkada in https://github.com/huggingface/peft/pull/1675
* FIX / Workflow: Always notify on slack for docker image workflows by younesbelkada in https://github.com/huggingface/peft/pull/1682
* FIX: upgrade autoawq to latest version by younesbelkada in https://github.com/huggingface/peft/pull/1684
* FIX: Initialize DoRA weights in float32 if float16 is being used by BenjaminBossan in https://github.com/huggingface/peft/pull/1653
* fix bf16 model type issue for ia3 by sywangyi in https://github.com/huggingface/peft/pull/1634
* FIX Issues with AdaLora initialization by BenjaminBossan in https://github.com/huggingface/peft/pull/1652
* FEAT Show adapter layer and model status by BenjaminBossan in https://github.com/huggingface/peft/pull/1663
* Fixing the example by providing correct tokenized seq length by jpodivin in https://github.com/huggingface/peft/pull/1686
* TST: Skiping AWQ tests for now .. by younesbelkada in https://github.com/huggingface/peft/pull/1690
* Add LayerNorm tuning model by DTennant in https://github.com/huggingface/peft/pull/1301
* FIX Use different doc builder docker image by BenjaminBossan in https://github.com/huggingface/peft/pull/1697
* Set experimental dynamo config for compile tests by BenjaminBossan in https://github.com/huggingface/peft/pull/1698
* fix the fsdp peft autowrap policy by pacman100 in https://github.com/huggingface/peft/pull/1694
* Add LoRA support to HQQ Quantization by fahadh4ilyas in https://github.com/huggingface/peft/pull/1618
* FEAT Helper to check if a model is a PEFT model by BenjaminBossan in https://github.com/huggingface/peft/pull/1713
* support Cambricon MLUs device by huismiling in https://github.com/huggingface/peft/pull/1687
* Some small cleanups in docstrings, copyright note by BenjaminBossan in https://github.com/huggingface/peft/pull/1714
* Fix docs typo by NielsRogge in https://github.com/huggingface/peft/pull/1719
* revise run_peft_multigpu.sh by abzb1 in https://github.com/huggingface/peft/pull/1722
* Workflow: Add slack messages workflow by younesbelkada in https://github.com/huggingface/peft/pull/1723
* DOC Document the PEFT checkpoint format by BenjaminBossan in https://github.com/huggingface/peft/pull/1717
* FIX Allow DoRA init on CPU when using BNB by BenjaminBossan in https://github.com/huggingface/peft/pull/1724
* Adding PiSSA as an optional initialization method of LoRA by fxmeng in https://github.com/huggingface/peft/pull/1626

New Contributors
* tisles made their first contribution in https://github.com/huggingface/peft/pull/1584
* changhwa made their first contribution in https://github.com/huggingface/peft/pull/1624
* yhZhai made their first contribution in https://github.com/huggingface/peft/pull/1631
* yfeng95 made their first contribution in https://github.com/huggingface/peft/pull/1326
* YuliangXiu made their first contribution in https://github.com/huggingface/peft/pull/1642
* charliermarsh made their first contribution in https://github.com/huggingface/peft/pull/1660
* sanghyuk-choi made their first contribution in https://github.com/huggingface/peft/pull/1679
* jpodivin made their first contribution in https://github.com/huggingface/peft/pull/1686
* DTennant made their first contribution in https://github.com/huggingface/peft/pull/1301
* fahadh4ilyas made their first contribution in https://github.com/huggingface/peft/pull/1618
* huismiling made their first contribution in https://github.com/huggingface/peft/pull/1687
* NielsRogge made their first contribution in https://github.com/huggingface/peft/pull/1719
* abzb1 made their first contribution in https://github.com/huggingface/peft/pull/1722
* fxmeng made their first contribution in https://github.com/huggingface/peft/pull/1626

**Full Changelog**: https://github.com/huggingface/peft/compare/v0.10.0...v0.11.0
