Adapters

0.2.2

This version is built for Hugging Face Transformers **v4.40.x**.

New
- Add example notebook for embeddings training & update docs (hSterz via 706)

Changed
- Upgrade supported Transformers version (calpt via 697)
- Add download redirect for AH adapters to HF (calpt via 704)

Fixed
- Fix saving adapter model with custom heads (hSterz via 700)
- Fix moving adapter head to device with `adapter_to()` (calpt via 708)
- Fix importing encoder-decoder adapter classes (calpt via 711)

0.2.1

This version is built for Hugging Face Transformers **v4.39.x**.

New
- Support saving & loading adapter weights via Safetensors with the new `use_safetensors` parameter (calpt via 692)
- Add `adapter_to()` method for moving & converting adapter weights (calpt via 699); see the usage sketch below
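
A minimal usage sketch of both additions, assuming a standard `AutoAdapterModel` setup; the model name, adapter name, and save path are illustrative:

```python
import torch
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("bert-base-uncased")
model.add_adapter("my_adapter", config="seq_bn")

# Save and reload the adapter weights in the Safetensors format.
model.save_adapter("./saved_adapter", "my_adapter", use_safetensors=True)
model.load_adapter("./saved_adapter", use_safetensors=True)

# Move/convert only the adapter weights, leaving the base model untouched.
model.adapter_to("my_adapter", device="cuda", dtype=torch.bfloat16)
```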

Fixed
- Fix reading model info in `get_adapter_info()` for HF (calpt via 695)
- Fix `load_best_model_at_end=True` for AdapterTrainer with quantized training (calpt via 699)

0.2.0

This version is built for Hugging Face Transformers **v4.39.x**.

New
- Add support for QLoRA/QAdapter training via bitsandbytes (calpt via 663): **[Notebook Tutorial](https://github.com/Adapter-Hub/adapters/blob/main/notebooks/QLoRA_Llama_Finetuning.ipynb)**; see also the sketch below
- Add dropout to bottleneck adapters (calpt via 667)
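
The linked notebook is the authoritative reference; the following is a minimal sketch of the pattern it demonstrates, with an illustrative model checkpoint and LoRA hyperparameters:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

import adapters
from adapters import LoRAConfig

# Load the base model quantized to 4-bit via bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # illustrative checkpoint
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    ),
    torch_dtype=torch.bfloat16,
)

# Attach a trainable LoRA adapter on top of the frozen 4-bit weights.
adapters.init(model)
model.add_adapter("qlora", config=LoRAConfig(r=8, alpha=16))
model.train_adapter("qlora")
```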

Changed
- Upgrade supported Transformers version (lenglaender via 654; calpt via 686)
- Deprecate Hub repo in docs (calpt via 668)
- Switch resolving order when no source is specified in `load_adapter()` (calpt via 681); see the sketch below
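
Because the default resolving order changed, pinning the source explicitly keeps loading behavior stable across versions; a sketch with an illustrative adapter identifier:

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")

# "hf" resolves from the Hugging Face Model Hub, "ah" from the legacy
# AdapterHub repository; leaving source unset uses the default resolving order.
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf")
model.set_active_adapters(adapter_name)
```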

Fixed
- Fix DataParallel training with adapters (calpt via 658)
- Fix embedding training bug (hSterz via 655)
- Fix fp16/bf16 for Prefix Tuning (calpt via 659)
- Fix training error with AdapterDrop and Prefix Tuning (TimoImhof via 673)
- Fix default cache path for adapters loaded from AH repo (calpt via 676)
- Fix skipping composition blocks in not applicable layers (calpt via 665)
- Fix UniPELT LoRA default config (calpt via 682)
- Fix compatibility of adapters with HF Accelerate auto device-mapping (calpt via 678)
- Use default head dropout prob if not provided by model (calpt via 685)

0.1.2

This version is built for Hugging Face Transformers **v4.36.x**.

New
- Add MT5 support (sotwi via 629)

Changed
- Upgrade supported Transformers version (calpt via 617)
- Simplify XAdapterModel implementations (calpt via 641)

Fixed
- Fix prediction head loading for T5 (calpt via 640)

0.1.1

This version is built for Hugging Face Transformers **v4.35.x**.

New
- Add `leave_out` support to LoRA and (IA)³ (calpt via 608); see the sketch below
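
A sketch of `leave_out` with LoRA, previously available for bottleneck adapters only; the layer indices and hyperparameters are illustrative:

```python
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# leave_out skips the listed layer indices, so LoRA weights are only
# injected into the remaining (here: upper) transformer layers.
config = LoRAConfig(r=8, alpha=16, leave_out=[0, 1, 2])
model.add_adapter("lora_upper", config=config)
model.train_adapter("lora_upper")
```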

Fixed
- Fix error in `push_adapter_to_hub()` due to deprecated args (calpt via 613)
- Fix Prefix Tuning for T5 models where `d_kv != d_model / num_heads` (calpt via 621)
- [Bart] Move CLS rep extraction from EOS tokens to head classes (calpt via 624)
- Fix adapter activation with `skip_layers`/AdapterDrop training (calpt via 634)

Docs & Notebooks
- Update notebooks & add new complex configuration demo notebook (hSterz & calpt via 614)

0.1.0

**Blog post: https://adapterhub.ml/blog/2023/11/introducing-adapters/**

With the new _Adapters_ library, we fundamentally refactored the adapter-transformers library and added support for new models and adapter methods.

This version is compatible with Hugging Face Transformers version 4.35.2.

For a guide on how to migrate from `adapter-transformers` to _Adapters_, have a look at https://docs.adapterhub.ml/transitioning.md.
Changes are given relative to the latest [adapter-transformers v3.2.1](https://github.com/adapter-hub/adapters/releases/tag/adapters3.2.1).

New Models & Adapter Methods
- Add LLaMA model integration (hSterz)
- Add X-MOD model integration (calpt via 581)
- Add Electra model integration (hSterz via 583, based on work of amitkumarj441 and pauli31 in 400)
- Add adapter output & parameter averaging (calpt); see the sketch after this list
- Add Prompt Tuning (lenglaender and calpt via 595)
- Add composition support to LoRA and (IA)³ (calpt via 598)
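
A sketch of the two averaging variants, assuming `average_adapter` takes a new adapter name, the list of adapters to merge, and optional weights; all names are illustrative:

```python
from adapters import AutoAdapterModel
from adapters.composition import Average

model = AutoAdapterModel.from_pretrained("roberta-base")
model.add_adapter("task_a", config="seq_bn")
model.add_adapter("task_b", config="seq_bn")

# Parameter averaging: merge the weights of both adapters into a new adapter.
model.average_adapter("task_avg", ["task_a", "task_b"], weights=[0.5, 0.5])
model.set_active_adapters("task_avg")

# Output averaging: keep both adapters and average their outputs at runtime.
model.active_adapters = Average("task_a", "task_b")
```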

Breaking Changes
- Renamed bottleneck adapter configs and config strings. The new names can be found here: https://docs.adapterhub.ml/overview.html (calpt)
- Removed the XModelWithHeads classes (lenglaender) _(XModelWithHeads have been deprecated since adapter-transformers version 3.0.0)_

Changes Due to the Refactoring
- Refactored the implementation of all already supported models (calpt, lenglaender, hSterz, TimoImhof)
- Separated the model config (`PretrainedConfig`) from the adapters config (`ModelAdaptersConfig`) (calpt)
- Updated the whole documentation, Jupyter notebooks and example scripts (hSterz, lenglaender, TimoImhof, calpt)
- Introduced the `load_model` function to load models containing adapters. This replaces the Hugging Face `from_pretrained` function used in the `adapter-transformers` library (lenglaender); see the sketch after this list
- Shared more logic for adapter composition between different composition blocks (calpt via 591)
- Added backwards compatibility tests which allow checking whether changes to the codebase, such as refactoring, impair the functionality of the library (TimoImhof via 596)
- Refactored the `EncoderDecoderModel` by introducing a new mixin (`ModelUsingSubmodelsAdaptersMixin`) for models that contain other models (lenglaender)
- Renamed the class `AdapterConfigBase` to `AdapterConfig` (hSterz via 603)
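
A sketch of the new loading entry point, assuming `load_model` takes the checkpoint path and the Transformers model class, per the item above; the path is illustrative:

```python
from transformers import BertModel

from adapters import load_model

# Loads a checkpoint saved with adapters, replacing the plain
# BertModel.from_pretrained(...) used with adapter-transformers;
# the returned model comes with adapter support initialized.
model = load_model("./checkpoint-with-adapters", BertModel)
```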

Fixes and Minor Improvements
- Fixed the `EncoderDecoderModel` generate function (lenglaender)
- Fixed deletion of invertible adapters (TimoImhof)
- Automatically convert heads when loading with XAdapterModel (calpt via 594)
- Fix training T5 adapter models with Trainer (calpt via 599)
- Ensure output embeddings are frozen during adapter training (calpt via 537)
