mteb

Latest version: v1.36.22

1.18.6

Fix

* fix: Integrate prompts to task metadata (#1300)

* init

* add DatasetDict

* add classification

* add clustering

* add pair classification

* add retrieval

* add all prompts

* start integrating prompts

* refactor instruct models

* lint

* fix test

* fix

* fix no prompt in prompt dict

* add more logging

* add more logging

* Apply suggestions from code review

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* fix code review

* fix use_instructions

* add log if instruction template not set

* fix metadata

* lint

* fix brazilian

* remove MetadataDatasetDict

* rollback test metadata

---------

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com> ([`029d378`](https://github.com/embeddings-benchmark/mteb/commit/029d378b2c17d7e05a4c9c30a17966917cb83a33))
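
The practical upshot of wiring prompts into task metadata is that instruction-following models can pick up the right prompt per task and prompt type automatically. A minimal sketch of running such a model through mteb, assuming a sentence-transformers model with named query/passage prompts (the model name, prompt strings, and task choice are illustrative, not taken from this release):

```python
import mteb
from sentence_transformers import SentenceTransformer

# E5-style models expect "query: " / "passage: " prefixes; sentence-transformers
# lets us register them as named prompts on the model itself.
model = SentenceTransformer(
    "intfloat/multilingual-e5-small",
    prompts={"query": "query: ", "passage": "passage: "},
)

tasks = mteb.get_tasks(tasks=["NFCorpus"])  # any retrieval task works here
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results/e5-small")
```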

1.18.5

Fix

* fix: Speed up leaderboard by caching and skipping validation (#1365)

* Made loading and filtering faster by removing unnecessary validation

* Made select_tasks faster by removing validation

* Added caching to leaderboard

* Ran linting

* Added missing future import ([`f1bc375`](https://github.com/embeddings-benchmark/mteb/commit/f1bc3758d6e1cc91fbe22b26dcbd1cfe3b640f06))
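
The caching change follows a standard memoisation pattern: pay for the expensive load once and serve repeated, identical requests from memory. A self-contained sketch of the idea (not the leaderboard's actual code; the function and argument names are invented):

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def load_benchmark_results(benchmark_name: str) -> dict:
    """Pretend-expensive load, cached per benchmark name."""
    # In a real leaderboard this would read and aggregate result files.
    return {"benchmark": benchmark_name, "scores": []}


# The first call pays the cost; identical calls afterwards hit the cache.
load_benchmark_results("MTEB(eng)")
load_benchmark_results("MTEB(eng)")
print(load_benchmark_results.cache_info())  # hits=1, misses=1
```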

1.18.4

Fix

* fix: make sure test is the default split for FEVER (#1361)

The other splits can still be run as long as they are specified. ([`d9626ab`](https://github.com/embeddings-benchmark/mteb/commit/d9626abbc5d438024a21e7f21a29d4741bb94188))
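
A minimal sketch of what that means for users, assuming a sentence-transformers model and the `eval_splits` argument of `MTEB.run` (the model choice and split name are illustrative):

```python
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
evaluation = mteb.MTEB(tasks=mteb.get_tasks(tasks=["FEVER"]))

# Default behaviour after this fix: only the "test" split is evaluated.
evaluation.run(model, output_folder="results/fever-test")

# Other splits still run when named explicitly.
evaluation.run(model, eval_splits=["train"], output_folder="results/fever-train")
```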

1.18.3

Fix

* fix: Update KorSarcasm to avoid trust-remote code (#1364) ([`756ba7e`](https://github.com/embeddings-benchmark/mteb/commit/756ba7e46e6daa8d1bff6b4d3db254296e37e7dc))

Unknown

* Leaderboard updates: Model meta + task and benchmark info (#1345)

* Added benchmark description and citation to leaderboard

* Added model information to main table

* Fixed citation box

* Added table tab with task information

* Added button for benchmark link if specified

* Formatted model column in per_task table properly

* Implemented model filtering based on metadata

* Fixed maximum minimum model sizes

* Ran linting

* Replaced mean rank with borda rank in main table ([`298b0bd`](https://github.com/embeddings-benchmark/mteb/commit/298b0bde0c5dec52dc395f7786ba8397c757430a))
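
For context on the last bullet: a Borda-style rank scores each model by how many competitors it beats on each task and sums those points across tasks. A small pandas sketch of the idea (made-up models and scores, not the leaderboard's exact implementation):

```python
import pandas as pd

scores = pd.DataFrame(
    {
        "task_a": [0.71, 0.69, 0.75],
        "task_b": [0.55, 0.61, 0.58],
    },
    index=["model_x", "model_y", "model_z"],
)

# Per task, a model earns one point for every model it outscores (ties share points).
borda_points = scores.rank(axis=0, ascending=True).sub(1).sum(axis=1)
print(borda_points.sort_values(ascending=False))  # model_z > model_y > model_x
```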

1.18.2

Fix

* fix: upload BrazilianToxicTweetsClassification to hf (#1352)

upload to hf ([`9c7a1c2`](https://github.com/embeddings-benchmark/mteb/commit/9c7a1c2a8f99966c2d98ec9efe7666ea8b5672a5))

1.18.1

Fix

* fix: Add jina, uae, stella models (#1319)

* add models

* fix

* fix

* fix prompt

* Update mteb/models/jina_models.py

Co-authored-by: Wang Bo <bo.wang@jina.ai>

* Update mteb/models/jina_models.py

Co-authored-by: Wang Bo <bo.wang@jina.ai>

* try reeval stella

* change to e5

* change to e5

* add metadata

* update languages

* Update mteb/models/jina_models.py

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>

* remove docstring

* remove trust remote

* update model meta

* Set minimal version

---------

Co-authored-by: Wang Bo <bo.wang@jina.ai>
Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`0b846ff`](https://github.com/embeddings-benchmark/mteb/commit/0b846ff3ad8ec9f16342c913d81da74aa9ca0643))
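
Once registered this way, the models can be pulled from mteb's model registry together with their metadata. A brief sketch, assuming `mteb.get_model` / `mteb.get_model_meta` and using `jinaai/jina-embeddings-v3` purely as an example name (which models are actually registered depends on the installed mteb version):

```python
import mteb

# Inspect the registered metadata without downloading any weights.
meta = mteb.get_model_meta("jinaai/jina-embeddings-v3")
print(meta.name, meta.revision)

# Load the implementation itself and evaluate it on a task.
model = mteb.get_model("jinaai/jina-embeddings-v3")
tasks = mteb.get_tasks(tasks=["STS12"])
mteb.MTEB(tasks=tasks).run(model, output_folder="results/jina-v3")
```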

* fix: remove accidentally committed file ([`16a333e`](https://github.com/embeddings-benchmark/mteb/commit/16a333ee3b9c1b26468a8b13d42e2697e0474a85))
