mteb changelog

1.29.14

Fix

* fix: Fix zeta alpha mistral (#1736)

* fix zeta alpha mistral

* update use_instructions

* update training datasets

* Update mteb/models/e5_instruct.py

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>

* update float

* Update mteb/models/e5_instruct.py

---------

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`4985da9`](https://github.com/embeddings-benchmark/mteb/commit/4985da94cbc4c1368debab737fa8195f6bb91ce2))

* fix: Hotfixed public_training_data type annotation (#1857)

Fixed public_training_data flag type to include boolean, as this is how all models are annotated ([`4bd7328`](https://github.com/embeddings-benchmark/mteb/commit/4bd7328f1d43ff36564eb5941e7b32daf826f456))
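
A minimal sketch of what the widened annotation could look like, assuming a pydantic-style `ModelMeta`; everything except the `public_training_data` field name is illustrative:

```py
from pydantic import BaseModel


class ModelMetaSketch(BaseModel):
    """Illustrative stand-in for mteb's ModelMeta; the real class has many more fields."""

    name: str
    # Models annotate this either with a URL/description string or a plain
    # boolean, so the type is widened to accept both (or None if unknown).
    public_training_data: str | bool | None = None
```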

Unknown

* Add more annotations (#1833)

* apply additions from #1794

* add annotations for rumodels

* add nomic training data

* fix metadata

* update rest of model meta

* fix bge reranker ([`12ed9c5`](https://github.com/embeddings-benchmark/mteb/commit/12ed9c50debd83b7fd6f589373d1fd4539f2aa17))
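
For context, a hedged sketch of the shape such training-data annotations typically take, keyed by MTEB task name rather than Hugging Face dataset id; the task names and splits below are placeholders, not taken from the release notes:

```py
# Hypothetical training-data annotation: MTEB task name -> splits the model
# was trained on. In mteb this kind of mapping is attached to a model's
# ModelMeta entry in the model registry.
example_training_datasets: dict[str, list[str]] = {
    "MSMARCO": ["train"],  # placeholder task name
    "NQ": ["train"],       # placeholder task name
}
```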

1.29.13

Fix

* fix: Fixed leaderboard search bar (#1852)

Fixed leaderboard search bar ([`fe33061`](https://github.com/embeddings-benchmark/mteb/commit/fe330611b6e433096501d0d9814b2c644c33e984))

1.29.12

Fix

* fix: Leaderboard Refinements (#1849)

* Added better descriptions to benchmarks and removed beta tags

* Fixed zero-shot filtering on app loading

* Added zero-shot definition in an accordion

* NaN values are now filled with blank

* Added type hints to filter_models ([`a8cc887`](https://github.com/embeddings-benchmark/mteb/commit/a8cc88778623ee4e46c7c27ea5b5bc98e534165e))
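
A hedged sketch of the last two touches, replacing NaN scores with blanks and type-hinting a `filter_models`-style helper; the DataFrame columns and the helper's signature are assumptions for illustration, not the leaderboard's actual code:

```py
import pandas as pd


def filter_models(df: pd.DataFrame, zero_shot_only: bool = False) -> pd.DataFrame:
    """Illustrative, type-hinted stand-in for the leaderboard's filter_models."""
    if zero_shot_only and "zero_shot" in df.columns:
        df = df[df["zero_shot"]]
    # Display missing scores as blanks rather than NaN.
    return df.fillna("")


scores = pd.DataFrame(
    {"model": ["a", "b"], "zero_shot": [True, False], "STS22": [0.71, None]}
)
print(filter_models(scores, zero_shot_only=True))
```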

1.29.11

Fix

* fix: Add reported annotation and re-added public_training_data (#1846)

* fix: Add additional dataset annotations

* fix: readded public training data

* update voyage annotations ([`a7a8144`](https://github.com/embeddings-benchmark/mteb/commit/a7a8144a6964641614c7d407e43c75ab5b7c40ca))

1.29.10

Fix

* fix: Remove default params, `public_training_data` and `memory usage` in `ModelMeta` (#1794)

* fix: Leaderboard: `K` instead of `M`
Fixes #1752
* format
* fixed existing annotations to refer to task name instead of hf dataset
* added annotation to nvidia
* added voyage
* added uae annotations
* Added stella annotations
* sentence trf models
* added salesforce and e5
* jina
* bge + model2vec
* added llm2vec annotations
* add jasper
* format
* format
* Updated annotations and moved jina models
* make models parameters needed to be filled
* fix tests
* remove comments
* remove model meta from test
* fix model meta from split
* fix: add even more training dataset annotations (#1793)
* fix: update max tokens for OpenAI (#1772)
update max tokens
* ci: skip AfriSentiLID for now (#1785)
* skip AfriSentiLID for now
* skip relevant test case instead
---------
Co-authored-by: Isaac Chung <isaac.chung@team.wrike.com>
* 1.28.7
Automatically generated by python-semantic-release
* ci: fix model loading test (#1775)
* pass base branch into the make command as an arg
* test a file that has custom wrapper
* what about overview
* just dont check overview
* revert instance check
* explicitly omit overview and init
* remove test change
* try on a lot of models
* revert test model file
---------
Co-authored-by: Isaac Chung <isaac.chung@team.wrike.com>
* feat: Update task filtering, fixing bug which included cross-lingual tasks in overly many benchmarks (#1787)
* feat: Update task filtering, fixing bug on MTEB
- Updated task filtering adding exclusive_language_filter and hf_subset
- fix bug in MTEB where cross-lingual splits were included
- added missing language filtering to MTEB(europe, beta) and MTEB(indic, beta)
The following code outlines the problems:

```py
import mteb
from mteb.benchmarks import MTEB_ENG_CLASSIC

task = [t for t in MTEB_ENG_CLASSIC.tasks if t.metadata.name == "STS22"][0]
# was eq. to:
task = mteb.get_task("STS22", languages=["eng"])

task.hf_subsets
# correct filtering to English datasets:
# ['en', 'de-en', 'es-en', 'pl-en', 'zh-en']
# However, it should be:
# ['en']

# with the changes it is:
task = [t for t in MTEB_ENG_CLASSIC.tasks if t.metadata.name == "STS22"][0]
task.hf_subsets
# ['en']
# eq. to
task = mteb.get_task("STS22", hf_subsets=["en"])
# which you can also obtain using the exclusive_language_filter
# (though not if there were multiple English splits):
task = mteb.get_task("STS22", languages=["eng"], exclusive_language_filter=True)
```

* format
* remove "en-ext" from AmazonCounterfactualClassification
* fixed mteb(deu)
* fix: simplify in a few areas
* fix: Add gritlm
* 1.29.0
Automatically generated by python-semantic-release
* fix: Added more annotations!
* fix: Added C-MTEB (#1786)
Added C-MTEB
* 1.29.1
Automatically generated by python-semantic-release
* docs: Add contact to MMTEB benchmarks (#1796)
* Add myself to MMTEB benchmarks
* lint
* fix: loading pre 11 (#1798)
* fix loading pre 11
* add similarity
* lint
* run all task types
* 1.29.2
Automatically generated by python-semantic-release
* fix: allow to load no revision available (#1801)
* fix allow to load no revision available
* lint
* add require_model_meta to leaderboard
* lint
* 1.29.3
Automatically generated by python-semantic-release
---------
Co-authored-by: Roman Solomatin <samoed.roman@gmail.com>
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>
Co-authored-by: Isaac Chung <isaac.chung@team.wrike.com>
Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Márton Kardos <power.up1163@gmail.com>
* fix merges
* update models info
* change public_training_code to str
* change `public_training_code=False` to None
* remove annotations
* remove annotations
* remove changed annotations
* remove changed annotations
* remove `public_training_data` and `memory usage`
* make framework not optional
* make framework non-optional
* empty frameworks
* add framework
* fix tests
* Update mteb/models/overview.py
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>
---------
Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>
Co-authored-by: Isaac Chung <isaac.chung@team.wrike.com>
Co-authored-by: github-actions <github-actions@github.com>
Co-authored-by: Márton Kardos <power.up1163@gmail.com> ([`0a83e38`](https://github.com/embeddings-benchmark/mteb/commit/0a83e383efe41e86e51c0d4cdca18d9ed5d42821))
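
A minimal sketch of how the `ModelMeta` changes listed in this batch fit together, assuming a pydantic-style model; apart from `framework`, `public_training_code`, and the removed `public_training_data`, the field names and values are illustrative:

```py
from pydantic import BaseModel


class ModelMetaSketch(BaseModel):
    """Illustrative stand-in for mteb's ModelMeta after this release."""

    name: str
    # `framework` is no longer optional; an explicitly empty list is still allowed.
    framework: list[str]
    # `public_training_code` is now an optional URL/string instead of a boolean flag.
    public_training_code: str | None
    # `public_training_data` and memory usage were removed in this entry
    # (public_training_data was later re-added in 1.29.11).


meta = ModelMetaSketch(
    name="example-org/example-model",   # placeholder name
    framework=["Sentence Transformers"],
    public_training_code=None,          # None replaces the former `False`
)
```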

* fix: subsets to run (#1830)

* fix split evals
* add test
* lint
* fix moka
* add assert ([`8be6b2e`](https://github.com/embeddings-benchmark/mteb/commit/8be6b2e36abb005822e07c034484c245345f6eb2))
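
A hedged sketch of evaluating only selected splits, the area this fix touches; the exact keyword arguments may differ from the released API, and the model name is only an example:

```py
import mteb

# Pick a task and restrict evaluation to a specific split.
tasks = mteb.get_tasks(tasks=["STS22"], languages=["eng"])
model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")  # example model

evaluation = mteb.MTEB(tasks=tasks)
# Limiting the evaluated splits/subsets is the behaviour exercised by this fix.
results = evaluation.run(model, eval_splits=["test"])
```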

1.29.9

Fix

* fix: Fixed eval split for MultilingualSentiment in C-MTEB (#1804)

* Fixed eval split for MultilingualSentiment in C-MTEB

* Fixed splits for atec, bq and stsb in C-MTEB ([`96f639b`](https://github.com/embeddings-benchmark/mteb/commit/96f639bc34153caaac422a3a13e0d9f3626d65b9))
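
One way to check which split a task will be scored on, relevant to the split fixes above; the attribute path follows mteb's task metadata, and the task name is taken from the entry (substitute the registered name if it differs):

```py
import mteb

task = mteb.get_task("MultilingualSentiment")
# After the fix this should report the intended evaluation split(s).
print(task.metadata.eval_splits)
```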
