mteb

Latest version: v1.34.14


1.12.45

Documentation

* docs: add missing points (959)

* add missing points

* one review; ([`3ebd148`](https://github.com/embeddings-benchmark/mteb/commit/3ebd148ad1956615a43c77e5a1b4c3ea8bcbe59b))

Fix

* fix: Update annotations for PawsX, Opusparcus and SummEval (963)

* Update PawsX.py

* Update OpusparcusPC.py

* Update SummEvalFrSummarization.py

* Update SummEvalSummarization.py

* Add files via upload

---------

Co-authored-by: Tikhonova Maria <m_tikhonova94@mail.ru> ([`211d5ae`](https://github.com/embeddings-benchmark/mteb/commit/211d5ae402a7b3a9c86d62db0c4892106e03ad8e))

* fix: Add baseline models for Russian (962)

* add sbert/rubert models with results

* move models to a separate file

* update models meta

* add points ([`664f6da`](https://github.com/embeddings-benchmark/mteb/commit/664f6da9c14940eeddf1c9e794bcdb69563c5d5e))

Unknown

* Update tasks table ([`666fafe`](https://github.com/embeddings-benchmark/mteb/commit/666fafecb4ba6c1e1c8061fc8868a48596285b84))

* Update points table ([`f52d12b`](https://github.com/embeddings-benchmark/mteb/commit/f52d12b1cdbaecc1296bb21baed9c8d122df4c3a))

* Update points.md (961) ([`b467004`](https://github.com/embeddings-benchmark/mteb/commit/b46700463ae82df7f577e17971f05872a25336c0))

* Update points table ([`b134527`](https://github.com/embeddings-benchmark/mteb/commit/b13452723029e9c1235f7e3ce7f8a9728d1ab2a5))

1.12.44

Fix

* fix: Add test case for results folder structure (956)

* add test case for results folder structure

* rerun results

* add points ([`b0f597c`](https://github.com/embeddings-benchmark/mteb/commit/b0f597c19643feee029573a1b46bd34c84161e1d))
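The PR above adds a test enforcing a consistent results folder layout. As a hedged, standalone sketch (the exact layout mteb enforces is an assumption here, taken as `results/<model>/<revision>/<task>.json`), such a check could look like:

```python
# Sketch of a results-tree validator. Assumes a three-level
# "model/revision/task.json" layout, which is an illustration rather than
# mteb's actual test; the real suite may check more (e.g. file contents).
from pathlib import Path


def validate_results_tree(root: Path) -> list[str]:
    """Return relative paths of files that violate the assumed layout."""
    bad = []
    for path in root.rglob("*"):
        if path.is_file():
            rel = path.relative_to(root)
            # expect exactly model/revision/task.json, i.e. three parts
            if len(rel.parts) != 3 or path.suffix != ".json":
                bad.append(rel.as_posix())
    return bad
```

A test like this catches stray files or results written one directory level too high before they reach the shared results repository.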

Unknown

* Update points table ([`a8a47a0`](https://github.com/embeddings-benchmark/mteb/commit/a8a47a05fc18a9906033873445e5310790225c6a))

* Fix spacing ([`279c5e4`](https://github.com/embeddings-benchmark/mteb/commit/279c5e43daf8dbded23ec9b446670f59231338ea))

1.12.43

Fix

* fix: Merge CrosslingualTask into MultilingualTask (952)

* merge cross lingual task into multilingual task

* Apply suggestions from code review

Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com>

---------

Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com> ([`07f80c4`](https://github.com/embeddings-benchmark/mteb/commit/07f80c479a7223b188341e3be6ac5e0424f297b7))
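The merge above reflects that a crosslingual task (subsets keyed by language pairs such as "en-de") is structurally just a multilingual task whose subset keys happen to span two languages, so one base class can serve both. A minimal, mteb-free sketch of that idea (class and attribute names here are illustrative, not mteb's actual API):

```python
# Conceptual sketch only: shows why CrosslingualTask can fold into
# MultilingualTask. Names are hypothetical, not mteb's real classes.
class MultilingualTask:
    """Task whose dataset is keyed by language subset (e.g. "en", "en-de")."""

    def __init__(self, eval_langs: dict[str, list[str]]):
        # subset name -> ISO codes of the languages involved in that subset
        self.eval_langs = eval_langs

    @property
    def is_crosslingual(self) -> bool:
        # "crosslingual" is just the case where some subset spans >1 language
        return any(len(langs) > 1 for langs in self.eval_langs.values())
```

Under this view no separate class hierarchy is needed; crosslinguality becomes a property of the subset metadata rather than a type distinction.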

1.12.42

Fix

* fix: Backward compatibility fixes for clustering (954)

* Added max_document_to_embed to all existing clustering tasks

* format ([`623d833`](https://github.com/embeddings-benchmark/mteb/commit/623d83300157921fe71bc78aa6700c85a5f45486))
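The backward-compatibility fix above threads a `max_document_to_embed` cap through existing clustering tasks. A hedged sketch of what such a cap implies (the attribute name comes from the changelog; the sampling details below are an illustration, not mteb's implementation):

```python
# Illustrative downsampling helper: cap the number of documents embedded for
# a clustering task, with a fixed seed for reproducibility. A None cap keeps
# the old behavior of embedding everything (the backward-compatible path).
import random


def downsample_documents(docs, max_document_to_embed=None, seed=42):
    if max_document_to_embed is None or len(docs) <= max_document_to_embed:
        return docs  # no cap, or already small enough: embed everything
    rng = random.Random(seed)
    return rng.sample(docs, max_document_to_embed)
```

Capping the embedded set bounds runtime on very large clustering datasets while leaving small tasks untouched.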

1.12.41

Fix

* fix: Add MINERS Bitext retrieval benchmark (951)

* add new task
* add miners bitext mining benchmark
* Update TaskMetadata.py
* Add NollySenti
* rename metadata
* Update mteb/benchmarks.py
Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>
* Update benchmarks.py
* Update benchmarks.py
---------
Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`f95b9e0`](https://github.com/embeddings-benchmark/mteb/commit/f95b9e0e17ec36272e249fbb754b7f7020727303))

Unknown

* Update points table ([`efbce71`](https://github.com/embeddings-benchmark/mteb/commit/efbce71e314fb5d97c4d08f7b546b16c7c9a2790))

* Update points table ([`5f39d55`](https://github.com/embeddings-benchmark/mteb/commit/5f39d55c1b7d92428c39e089e7f081049a1c08b5))

1.12.40

Documentation

* docs: Add point for PR 948 (950)

* add point

* add point ([`34286f2`](https://github.com/embeddings-benchmark/mteb/commit/34286f2a36d8bf11c0bab1160d38c5cae3b95461))

Fix

* fix: Compare Cluster and ClusterFast scores and speedup (892)

* first go at getting spearman corr for e5-base
* add back large
* small and large results
* v3 means downsampling by stratified subsampling + bootstrap to k=max_documents_per_cluster
* v3-1 means swapping values of max_documents_per_cluster and max_documents_to_embed
* v3-2 means increasing max_documents_per_cluster to 65536
* task-wise comparison
* use recommended syntax
* add back no-op changes
* add back no-op changes
* option c is now v2; remove all v3 variants; add back level 0 in results; add test significance
* paraphrase-multilingual-MiniLM-L12-v2 results
* lint script
* cluster without fast should not have levels
* spearman on significant rank
* add more small model results
* 2x max_documents_to_embed to 4096
* max_documents_to_embed=8192
* t
* Added plots
* format
* use 32k samples for bigger cluster datasets
* use 4% n_samples and update task metadata
* make lint
* tests passing
* make lint
* add paraphrase-multilingual-mpnet-base-v2 and e5-large-v2 results
* add e5_eng_base_v2,labse,mxbai_embed_large_v1,bge_base_en_v1.5
* move plot scripts to mmteb scripts repo
* replace use_dataset_as_is with max_document_to_embed and add description in docstrings
* lint
---------
Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`2bb7623`](https://github.com/embeddings-benchmark/mteb/commit/2bb76239368c497efb92d5ae09a914eedd44a66d))
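The comparison in this PR scores Clustering against ClusteringFast by how well the two variants agree on model rankings, via Spearman rank correlation of per-task scores. As a self-contained illustration of the statistic (stdlib only, with average ranks for ties; not mteb's analysis code):

```python
# Spearman rank correlation sketch: rank both score lists (averaging tied
# ranks), then take the Pearson correlation of the ranks.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r


def spearman(a, b):
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    ma, mb = sum(ra) / n, sum(rb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra) ** 0.5
    vb = sum((y - mb) ** 2 for y in rb) ** 0.5
    return cov / (va * vb)
```

A correlation near 1.0 between the slow and fast variants is what justifies downsampling: the cheaper task preserves the model ordering.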

Unknown

* Update tasks table ([`54c5745`](https://github.com/embeddings-benchmark/mteb/commit/54c5745c3b2eb285a4cd1a10e06a516040eec6f1))

* Update points table ([`6aeaff4`](https://github.com/embeddings-benchmark/mteb/commit/6aeaff45a0d657f708677752cae0b96f0fb875a6))
