* FIX: Adding 2 adapters when target_modules is a str fails by BenjaminBossan in https://github.com/huggingface/peft/pull/1111
* Prompt tuning: Allow passing additional args to AutoTokenizer.from_pretrained by BenjaminBossan in https://github.com/huggingface/peft/pull/1053
* Fix: TorchTracemalloc ruins Windows performance by lukaskuhn-lku in https://github.com/huggingface/peft/pull/1126
* TST: Improve requires_grad testing by BenjaminBossan in https://github.com/huggingface/peft/pull/1131
* FEAT: Make safe serialization the default by younesbelkada in https://github.com/huggingface/peft/pull/1088
* FEAT: Merging only specified `adapter_names` when calling `merge` by younesbelkada in https://github.com/huggingface/peft/pull/1132 (usage sketch after this list)
* Refactor base layer pattern by BenjaminBossan in https://github.com/huggingface/peft/pull/1106
* [`Tests`] Fix daily CI by younesbelkada in https://github.com/huggingface/peft/pull/1136
* [`core` / `LoRA`] Add `adapter_names` in bnb layers by younesbelkada in https://github.com/huggingface/peft/pull/1139
* [`Tests`] Do not stop tests if a job failed by younesbelkada in https://github.com/huggingface/peft/pull/1141
* CI: Add Python 3.11 to test matrix by BenjaminBossan in https://github.com/huggingface/peft/pull/1143
* FIX: A few issues with AdaLora, extending GPU tests by BenjaminBossan in https://github.com/huggingface/peft/pull/1146
* Use `huggingface_hub.file_exists` instead of custom helper by Wauplin in https://github.com/huggingface/peft/pull/1145
* Delete IA3 adapter by alexrs in https://github.com/huggingface/peft/pull/1153
* [Docs fix] Relative path issue by mishig25 in https://github.com/huggingface/peft/pull/1157
* Dataset was loaded twice in 4-bit finetuning script by lukaskuhn-lku in https://github.com/huggingface/peft/pull/1164
* Fix `add_weighted_adapter` method by pacman100 in https://github.com/huggingface/peft/pull/1169
* (minor) Correct type annotation by vwxyzjn in https://github.com/huggingface/peft/pull/1166
* Update release checklist about release notes by BenjaminBossan in https://github.com/huggingface/peft/pull/1170
* [docs] Migrate doc files to Markdown by stevhliu in https://github.com/huggingface/peft/pull/1171
* Fix dockerfile build by younesbelkada in https://github.com/huggingface/peft/pull/1177
* FIX: Wrong use of base layer by BenjaminBossan in https://github.com/huggingface/peft/pull/1183
* [`Tests`] Migrate to AWS runners by younesbelkada in https://github.com/huggingface/peft/pull/1185
* Fix code example in quicktour.md by merveenoyan in https://github.com/huggingface/peft/pull/1181
* DOC: Update a few places in the README by BenjaminBossan in https://github.com/huggingface/peft/pull/1152
* Fix issue where you cannot call PeftModel.from_pretrained with a private adapter by elyxlz in https://github.com/huggingface/peft/pull/1076
* Add LoRA support for Phi by umarbutler in https://github.com/huggingface/peft/pull/1186
* Add options to save or push the model by callanwu in https://github.com/huggingface/peft/pull/1159
* ENH: Different initialization methods for LoRA by BenjaminBossan in https://github.com/huggingface/peft/pull/1189 (usage sketch after this list)
* Training PEFT models when new tokens are added to the embedding layers and tokenizer by pacman100 in https://github.com/huggingface/peft/pull/1147
* LoftQ: Add the LoftQ method integrated into LoRA, with example code for LoftQ usage, by yxli2123 in https://github.com/huggingface/peft/pull/1150
* Parallel linear LoRA by zhangsheng377 in https://github.com/huggingface/peft/pull/1092
* [Feature] Support OFT by okotaku in https://github.com/huggingface/peft/pull/1160 (usage sketch after this list)
* Mixed adapter models by BenjaminBossan in https://github.com/huggingface/peft/pull/1163
* [DOCS] README.md by Akash190104 in https://github.com/huggingface/peft/pull/1054
* Fix parallel linear LoRA by zhangsheng377 in https://github.com/huggingface/peft/pull/1202
* ENH: Enable OFT adapter for mixed adapter models by BenjaminBossan in https://github.com/huggingface/peft/pull/1204
* DOC: Update & improve docstrings and type annotations for common methods and classes by BenjaminBossan in https://github.com/huggingface/peft/pull/1201
* Remove HF tokens by yxli2123 in https://github.com/huggingface/peft/pull/1207
* [docs] Update index and quicktour by stevhliu in https://github.com/huggingface/peft/pull/1191
* [docs] API docs by stevhliu in https://github.com/huggingface/peft/pull/1196
* MNT: Delete the delete doc workflows by BenjaminBossan in https://github.com/huggingface/peft/pull/1213
* DOC: Initialization options for LoRA by BenjaminBossan in https://github.com/huggingface/peft/pull/1218
* Fix an issue with layer merging for LoHa and OFT by lukaskuhn-lku in https://github.com/huggingface/peft/pull/1210
* DOC: How to configure new transformers models by BenjaminBossan in https://github.com/huggingface/peft/pull/1195
* Raise error when `modules_to_save` is specified and multiple adapters are being unloaded by pacman100 in https://github.com/huggingface/peft/pull/1137
* TST: Add regression tests 2 by BenjaminBossan in https://github.com/huggingface/peft/pull/1115
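For the `adapter_names` argument added to merging in https://github.com/huggingface/peft/pull/1132 (and wired into the bnb LoRA layers in https://github.com/huggingface/peft/pull/1139), here is a minimal sketch. The base model ID, adapter paths, and adapter names are placeholders, and the exact entry point (`merge_adapter` vs. `merge_and_unload`) may vary with your PEFT version:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder base model and adapter checkpoints.
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="adapter_a")
model.load_adapter("path/to/adapter_b", adapter_name="adapter_b")

# Merge only adapter_a into the base weights; adapter_b stays unmerged.
model.merge_adapter(adapter_names=["adapter_a"])
# merge_and_unload(adapter_names=["adapter_a"]) similarly restricts which
# adapters get merged before the PEFT wrappers are removed.
```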
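The new LoRA initialization options from https://github.com/huggingface/peft/pull/1189 and the LoftQ integration from https://github.com/huggingface/peft/pull/1150 are both configured through `LoraConfig`. A short sketch, assuming the documented `init_lora_weights` values and placeholder target modules:

```python
from peft import LoraConfig, LoftQConfig

# Gaussian initialization of the LoRA A matrix instead of the default Kaiming-uniform.
gaussian_config = LoraConfig(
    r=8,
    init_lora_weights="gaussian",
    target_modules=["q_proj", "v_proj"],  # placeholder target modules
)

# LoftQ: initialize the LoRA weights so that they compensate for the
# quantization error of the quantized base weights.
loftq_config = LoraConfig(
    r=8,
    init_lora_weights="loftq",
    loftq_config=LoftQConfig(loftq_bits=4),
    target_modules=["q_proj", "v_proj"],
)
```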
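The OFT adapter (https://github.com/huggingface/peft/pull/1160) and mixed adapter models (https://github.com/huggingface/peft/pull/1163, https://github.com/huggingface/peft/pull/1204) can be combined on one base model. A minimal sketch, with a placeholder base model and target modules:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, OFTConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# mixed=True returns a PeftMixedModel, which allows adapters of different types.
peft_model = get_peft_model(
    base,
    LoraConfig(r=8, target_modules=["q_proj", "v_proj"]),
    adapter_name="lora",
    mixed=True,
)
peft_model.add_adapter("oft", OFTConfig(r=8, target_modules=["q_proj", "v_proj"]))

# Activate both adapters at once; their outputs are applied together.
peft_model.set_adapter(["lora", "oft"])
```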