mteb

Latest version: v1.34.14


1.12.21

Documentation

* docs: Added source for CmedqaRetrieval (#886)

docs: update CmedqaRetrieval description to specify source ([`3e910ff`](https://github.com/embeddings-benchmark/mteb/commit/3e910ff76b5cc4713682d2cef2dd860e38de513a))

Fix

* fix: Add error reporting for Retrieval (#873)

* add error reporting.
* no message
* Update mteb/abstasks/AbsTaskRetrieval.py
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>
* add kwarg args
* change to metadata.name
* fix format
* Update mteb/abstasks/AbsTaskRetrieval.py
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>
* change to explicit args
* remove cmdline changes
---------
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com> ([`5397bd2`](https://github.com/embeddings-benchmark/mteb/commit/5397bd2153700ff026697038d6174fe67b6033d8))
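The error-reporting change for retrieval tasks can be illustrated with a generic pattern. Everything below is a sketch for illustration: the wrapper function, the `TaskMetadata` stand-in, and the logger setup are assumptions, not mteb's actual code; only the idea of reporting failures via the task's `metadata.name` comes from the entry above.

```python
import logging

logger = logging.getLogger(__name__)


class TaskMetadata:
    """Stand-in for a task's metadata; only the name matters here."""

    def __init__(self, name: str):
        self.name = name


def evaluate_with_error_reporting(task_metadata: TaskMetadata, evaluate_fn, **kwargs):
    """Run an evaluation callable and report which task failed on error."""
    try:
        return evaluate_fn(**kwargs)
    except Exception as exc:
        # Include the task name (via metadata.name, as in the PR) so a
        # failure in a long retrieval run is easy to trace to its task.
        logger.error("Evaluation of %s failed: %s", task_metadata.name, exc)
        raise
```

Re-raising after logging keeps the original traceback while still recording which task was running when the error occurred.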

1.12.20

Fix

* fix: Updated CLI for MTEB (#882)

* Updated CLI for MTEB

It now includes three main commands: one for running, one for getting an overview, and one for creating the metadata for HF.

I also
- added a lower bound on dependencies, as it caused a few issues.
- deleted some results with a model revision attached (also created a fix to make sure that doesn't happen as much going forward)
- added a test for the CLI
- added a quite extensive docstring to the CLI
- made relevant changes to the documentation

* ensure tests pass

* minor changes to PR template

* Minor changes to PR template

* remove tmp file

* fixed failing test and updated it to avoid future false positives

* fix deprecation warning for logger.warn

* don't clear path before running tests as it disturbs other tests when run in parallel ([`c6f618b`](https://github.com/embeddings-benchmark/mteb/commit/c6f618b0ab5b265acf8ac736db1ddca8d73222c5))
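The three-command layout described in this entry can be sketched with `argparse`. The subcommand and flag names below are illustrative assumptions; check `mteb --help` for the actual interface.

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Minimal sketch of a three-subcommand CLI in the spirit of the entry."""
    parser = argparse.ArgumentParser(prog="mteb")
    sub = parser.add_subparsers(dest="command", required=True)

    # One command for running a model on tasks.
    run = sub.add_parser("run", help="run a model on one or more tasks")
    run.add_argument("-m", "--model", required=True)
    run.add_argument("-t", "--tasks", nargs="+", required=True)

    # One command for getting an overview of available tasks.
    sub.add_parser("available_tasks", help="get an overview of available tasks")

    # One command for creating model metadata for the HF hub.
    meta = sub.add_parser("create_meta", help="create model metadata for HF")
    meta.add_argument("--results_folder", required=True)

    return parser
```

A subparser-based layout like this also makes per-command docstrings and help text straightforward, which matches the "quite extensive docstring" note above.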

1.12.19

Documentation

* docs: minor fix for point validation to avoid error when people split up points ([`7d0d631`](https://github.com/embeddings-benchmark/mteb/commit/7d0d6319f67251b65b37ec835adea605aa05c893))

Fix

* fix: Add CEDR, SensitiveTopics for multilabel and RuBQ for reranking (#881)

* add russian reranking and multilabel tasks

* fix import order

* add results for baselines

* add points

* add points for review ([`9128df4`](https://github.com/embeddings-benchmark/mteb/commit/9128df46f4d9deb06ad1878b321942ac281cfc7d))

Unknown

* Update tasks table ([`7d3ce53`](https://github.com/embeddings-benchmark/mteb/commit/7d3ce53d017a11b9427ed04a7af7828fee5cc7b7))

* Update points table ([`ef52f95`](https://github.com/embeddings-benchmark/mteb/commit/ef52f95e13ed00caef0b77a5c6d8e39c18293793))

* Merge branch 'main' of https://github.com/embeddings-benchmark/mteb ([`7c7ee2b`](https://github.com/embeddings-benchmark/mteb/commit/7c7ee2bd27c01930d05a63e55891882b5d70a761))

* Updated CLI for MTEB

It now includes three main commands: one for running, one for getting an overview, and one for creating the metadata for HF.

I also
- added a lower bound on dependencies, as it caused a few issues.
- deleted some results with a model revision attached (also created a fix to make sure that doesn't happen as much going forward)
- added a test for the CLI
- added a quite extensive docstring to the CLI
- made relevant changes to the documentation ([`29b1c34`](https://github.com/embeddings-benchmark/mteb/commit/29b1c347270594d4fe4ad6b928b245a92c885077))

1.12.18

Fix

* fix: Ensure results are consistently stored in the same way (#876)

* Ensure results are consistently stored in the same way

- (due to failing test): updated missing dataset references
- (to test with more than one model) Added e5 models base and large
- updated mteb.get_model to now include metadata in the model object
- ensure that model name is always included when saving (with a default when it is not available)
- use the ModelMeta for the model_meta.json

* format

* minor test fixes

* docs: Minor updates to repro. workflow docs

* fixed failing test

* format

* Apply suggestions from code review

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

* docs: update PR template

* fix: Added benchmark object (#878)

* removed duplicate task

* Added benchmark object

* removed import for duplicate task

* fix dataset references

* added seb

* Added test for running benchmarks

* changed tasks to be an iterable

* format

* Apply suggestions from code review

Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com>
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>

---------

Co-authored-by: Isaac Chung <chungisaac1217@gmail.com>
Co-authored-by: Niklas Muennighoff <n.muennighoff@gmail.com> ([`fb843d0`](https://github.com/embeddings-benchmark/mteb/commit/fb843d040e8af63b4d0c2d61b78a69ca55652bcd))

Unknown

* Update tasks table ([`dfbdfdc`](https://github.com/embeddings-benchmark/mteb/commit/dfbdfdce8423dddb9821ffa349a7ee1df8bdb2ca))

1.12.17

Fix

* fix: Add FaithDialRetrieval dataset (#874)

* faithdial dataset

* results

* added metadata

* add points

* Apply suggestions from code review

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>

* Add reviewer points

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>

* only test set

---------

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`582381b`](https://github.com/embeddings-benchmark/mteb/commit/582381bd75ec111ba1ed8c81dc2df21336091546))

Unknown

* Update tasks table ([`35dcec7`](https://github.com/embeddings-benchmark/mteb/commit/35dcec7358ee5038c968545976a2abee007b2713))

* Update points table ([`527c5eb`](https://github.com/embeddings-benchmark/mteb/commit/527c5eb718c7b8b5f82972a9f4eae4f32460d2ae))

1.12.16

Fix

* fix: Add feedbackQA dataset (#856)

* add feedbackQA
* Update mteb/tasks/Retrieval/eng/FeedbackQARetrieval.py
Co-authored-by: Xing Han Lu <21180505+xhluca@users.noreply.github.com>
* Update mteb/tasks/Retrieval/eng/FeedbackQARetrieval.py
Co-authored-by: Xing Han Lu <21180505+xhluca@users.noreply.github.com>
* Update mteb/tasks/Retrieval/eng/FeedbackQARetrieval.py
Co-authored-by: Xing Han Lu <21180505+xhluca@users.noreply.github.com>
* add feedbackQA
* add feedbackQA
* Update mteb/tasks/Retrieval/eng/FeedbackQARetrieval.py
Co-authored-by: Xing Han Lu <21180505+xhluca@users.noreply.github.com>
* add feedbackQA
* points
* make lint
* typo
* missing datasets
---------
Co-authored-by: Xing Han Lu <21180505+xhluca@users.noreply.github.com>
Co-authored-by: Isaac Chung <chungisaac1217@gmail.com> ([`0796efa`](https://github.com/embeddings-benchmark/mteb/commit/0796efa4f987fa65920b3853455c1e592ca6b697))

Unknown

* Update tasks table ([`24e3d92`](https://github.com/embeddings-benchmark/mteb/commit/24e3d92f7a16a56a24fb8efe6feadce15d319aee))

* Update points table ([`8b52f03`](https://github.com/embeddings-benchmark/mteb/commit/8b52f0342ada6a8e81f1867bf70d50f8740fde03))
