</details>
* API Reference: [`NanoBEIREvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#nanobeirevaluator)
## PEFT compatibility (https://github.com/UKPLab/sentence-transformers/pull/3000, https://github.com/UKPLab/sentence-transformers/pull/2980, https://github.com/UKPLab/sentence-transformers/pull/3046)
Sentence Transformers has been integrated much more closely with PEFT. Notably, we introduce new methods:
* [active_adapters](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.active_adapters)
* [add_adapter](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.add_adapter)
* [disable_adapters](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.disable_adapters)
* [enable_adapters](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.enable_adapters)
* [get_adapter_state_dict](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.get_adapter_state_dict)
* [load_adapter](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.load_adapter)
* [set_adapter](https://sbert.net/docs/package_reference/sentence_transformer/SentenceTransformer.html#sentence_transformers.SentenceTransformer.set_adapter)
These methods allow you to add new PEFT adapters or load pretrained ones, for example:
### Adding an adapter
```python
from peft import LoraConfig, TaskType

from sentence_transformers import SentenceTransformer, SentenceTransformerModelCardData

# 1. Load a model to finetune with 2. (Optional) model card data
model = SentenceTransformer(
    "all-MiniLM-L6-v2",
    model_card_data=SentenceTransformerModelCardData(
        language="en",
        license="apache-2.0",
        model_name="all-MiniLM-L6-v2 adapter finetuned on GooAQ pairs",
    ),
)

# 2. Create a LoRA adapter for the model & add it
peft_config = LoraConfig(
    task_type=TaskType.FEATURE_EXTRACTION,
    inference_mode=False,
    r=8,
    lora_alpha=32,
    lora_dropout=0.1,
)
model.add_adapter(peft_config)

# Proceed with training as usual; see https://sbert.net/docs/sentence_transformer/training_overview.html
```
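Once an adapter is attached, the other new methods let you inspect and toggle it. A minimal sketch, continuing from the snippet above:

```python
# Inspect the LoRA weights that were added on top of the base model
state_dict = model.get_adapter_state_dict()
print(len(state_dict))  # number of LoRA weight tensors

# Temporarily fall back to the base weights, then re-enable the adapter
model.disable_adapters()
base_embeddings = model.encode(["This is an example sentence"])
model.enable_adapters()
```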
### Loading a pretrained adapter
Given [sentence-transformers-testing/stsb-bert-tiny-lora](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-lora) as a small adapter model (the `adapter_model.safetensors` file is only 33.8kB!) on top of [sentence-transformers-testing/stsb-bert-tiny-safetensors](https://huggingface.co/sentence-transformers-testing/stsb-bert-tiny-safetensors), you can either load this adapter directly:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers-testing/stsb-bert-tiny-lora")
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
print(embeddings.shape)
# (2, 128)
```
Or you can load the original model and load the adapter into it:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers-testing/stsb-bert-tiny-safetensors")
model.load_adapter("sentence-transformers-testing/stsb-bert-tiny-lora")
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])
print(embeddings.shape)
# (2, 128)
```
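Multiple adapters on one Transformers-based model are also supported now (https://github.com/UKPLab/sentence-transformers/pull/3046). A minimal sketch, assuming the illustrative adapter names `lora_a` and `lora_b` (here we simply load the same pretrained adapter twice):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers-testing/stsb-bert-tiny-safetensors")
model.load_adapter("sentence-transformers-testing/stsb-bert-tiny-lora", adapter_name="lora_a")
model.load_adapter("sentence-transformers-testing/stsb-bert-tiny-lora", adapter_name="lora_b")

model.set_adapter("lora_b")     # switch the active adapter
print(model.active_adapters())  # ['lora_b']

model.disable_adapters()        # encode with the base weights only
model.enable_adapters()         # re-enable the active adapter
```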
## Transformers v4.46.0 compatibility (https://github.com/UKPLab/sentence-transformers/pull/3026, https://github.com/UKPLab/sentence-transformers/pull/3035, https://github.com/UKPLab/sentence-transformers/pull/3037, https://github.com/UKPLab/sentence-transformers/pull/3038)
The recent `transformers` v4.46.0 update introduced a few changes that were incompatible with Sentence Transformers. For example:
* Use "processing_class" argument instead of "tokenizers"
* Add a `num_items_in_batch` argument to the `compute_loss` method in the Trainer
* Adding a `ValueError` if `eval_dataset` is None while `eval_strategy` is not `"no"` (this should be possible in Sentence Transformers, as we accept evaluating with just an `evaluator` as well)
These issues and deprecation warnings have been resolved.
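For example, evaluating with only an `evaluator` and no `eval_dataset` continues to work. A minimal sketch (the tiny two-pair dataset, the loss, and the chosen NanoBEIR dataset name are illustrative):

```python
from datasets import Dataset

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.evaluation import NanoBEIREvaluator
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("all-MiniLM-L6-v2")
train_dataset = Dataset.from_dict({
    "anchor": ["What is the capital of France?", "Who wrote Hamlet?"],
    "positive": ["Paris is the capital of France.", "Hamlet was written by Shakespeare."],
})
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="models/evaluator-only-run",
    eval_strategy="steps",
    eval_steps=10,
    max_steps=10,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=NanoBEIREvaluator(dataset_names=["msmarco"]),  # no eval_dataset needed
)
trainer.train()
```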
## Drop Python 3.8 support (https://github.com/UKPLab/sentence-transformers/pull/3033)
Given that Python 3.8 has now reached its end of life, Sentence Transformers will no longer support it.
## All Changes
* [`peft`] If AutoModel is wrapped with PEFT for prompt learning, then extend the attention mask by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3000
* [`integration`] Add support for Transformers v4.46.0 by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3026
* add an ImportError to tell the user that `datasets` must be installed to fit a model by h4c5 in https://github.com/UKPLab/sentence-transformers/pull/3020
* [`feat`] Integrate NanoBeIR datasets; use `model.similarity` by default in evaluators by ArthurCamara in https://github.com/UKPLab/sentence-transformers/pull/2966
* Fix model name typo in example by programmer-ke in https://github.com/UKPLab/sentence-transformers/pull/3028
* Support OpenVINO int8 static quantization by l-bat in https://github.com/UKPLab/sentence-transformers/pull/3025
* [`fix`] Avoid passing eval_dataset=None to transformers due to >=v4.46.0 crash by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3035
* [`docs`] Update the dated example in the NanoBEIREvaluator by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3034
* [`deprecate`] Drop Python 3.8 support due to EOL by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3033
* [`tests`] Remove evaluation_steps from model.fit test without evaluator by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3037
* [`fix`] Fix loading pre-exported OV/ONNX model if export=False by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3036
* [`chore`] If Transformers 4.46.0, use processing_class instead of tokenizer when saving by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3038
* [`docs`] Add some missing docs for include_prompt in Pooling by tomaarsen in https://github.com/UKPLab/sentence-transformers/pull/3042
* [`feat`] Trainer with prompts and prompt masking by ArthurCamara in https://github.com/UKPLab/sentence-transformers/pull/2964
* [fix] Fix model loading inconsistency after Peft training by using PeftModel by pesuchin in https://github.com/UKPLab/sentence-transformers/pull/2980
* [`enh`] Add Support for multiple adapters on Transformers-based models by carlesonielfa in https://github.com/UKPLab/sentence-transformers/pull/3046 & https://github.com/UKPLab/sentence-transformers/pull/2993
* Moved Model Card Callback init in Trainer to a separate function by tRosenflanz in https://github.com/UKPLab/sentence-transformers/pull/3047
## New Contributors
* h4c5 made their first contribution in https://github.com/UKPLab/sentence-transformers/pull/3020
* programmer-ke made their first contribution in https://github.com/UKPLab/sentence-transformers/pull/3028
* l-bat made their first contribution in https://github.com/UKPLab/sentence-transformers/pull/3025
* carlesonielfa made their first contribution in https://github.com/UKPLab/sentence-transformers/pull/3046
* tRosenflanz made their first contribution in https://github.com/UKPLab/sentence-transformers/pull/3047
## Special Thanks
Big thanks to ArthurCamara for leading the work on both 1) training with prompts and 2) NanoBEIR.
**Full Changelog**: https://github.com/UKPLab/sentence-transformers/compare/v3.2.1...v3.3.0