mteb


1.14.19

Documentation

* docs: Fix broken links in docs (#1212)

* Added fixes for broken links in adding_a_dataset and adding_a_model docs.

* Updated link name ([`b1bd941`](https://github.com/embeddings-benchmark/mteb/commit/b1bd9410715aeadf26af34d6845ddd0a7ee3ade8))

Fix

* fix: Ensure that results are returned even when hitting cache (#1215)

Fixes #1122 ([`64e01ae`](https://github.com/embeddings-benchmark/mteb/commit/64e01ae9d6fcf125a4ea6516263fa062b2aafeef))

Unknown

* Update tasks table ([`88b4f6e`](https://github.com/embeddings-benchmark/mteb/commit/88b4f6eda695201ee297cf0e6483344cba9a5985))

* Mismatch of the category of AmazonPolarityClassification (#1220)

Fixes #1219 ([`4595b19`](https://github.com/embeddings-benchmark/mteb/commit/4595b198a7aa2f297999e32d25cb116d12ad1e7d))

1.14.18

Fix

* fix: Normalize benchmarks to only include task objects and added getter for benchmarks (#1208)

* Normalize benchmarks to only include tasks

- Force benchmarks to only include tasks. This fixes a few bugs where benchmarks can reference a task which is not implemented
- implements `mteb.get_benchmark`, which makes it easier to fetch benchmarks
- Added tests + updated docs

A few outstanding issues:

I would like `mteb.MTEB(benchmark)` to always reproduce the benchmark. Currently this is not possible, as MTEB(eng) requires the split to be specified. A solution is to allow "eval_splits" to be specified when initializing a task and then pass it on to `load_data()`. This way we can write the following:

`mteb.get_tasks(tasks=[...], eval_splits=["test"], ...)`

I would also love the aggregation to be a part of the benchmark (such that it is clear how it should be aggregated). This is especially relevant for MTEB(eng), as it averages the CQAD datasets before creating the global average. This way we can also create a result object for the benchmark itself. A complementary solution for this is to allow nested benchmarks.

* fix error in tests

* format

* Added corrections based on review

* added example and formatted ([`f93154f`](https://github.com/embeddings-benchmark/mteb/commit/f93154f465b99bd9737b2ecfd54b3beb491a996d))
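As a quick illustration of the getter added in this release, a minimal sketch (the benchmark name is an example; the proposed `eval_splits` argument discussed above was not yet available):

```python
import mteb

# Fetch a registered benchmark by name; after this change it is guaranteed
# to contain only implemented task objects.
benchmark = mteb.get_benchmark("MTEB(eng)")

# A benchmark wraps a list of tasks, which can be passed straight to MTEB.
evaluation = mteb.MTEB(tasks=benchmark.tasks)
```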

1.14.17

Fix

* fix: Normalize licenses including casing, uses of "-" etc. (#1210)

* fix: Normalize licenses including casing, uses of "-" etc.

* fix tests ([`768c031`](https://github.com/embeddings-benchmark/mteb/commit/768c031d3e1e29e39edcf20dd4f9f1ea6092db50))

* fix: Normalize licenses including casing, uses of "-" etc. ([`a8f7d80`](https://github.com/embeddings-benchmark/mteb/commit/a8f7d80e20efd97b0c00ef2c028eba830ce1d308))
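The commit carries the actual mapping; purely as a hedged sketch, normalization of the kind described (casing and "-" usage) could look like this. `normalize_license` and its alias table are illustrative, not mteb's implementation:

```python
def normalize_license(value: str) -> str:
    """Canonicalize a free-form license string (illustrative only)."""
    s = value.strip().lower().replace(" ", "-").replace("_", "-")
    # Collapse common spelling variants; the real table in mteb is larger.
    aliases = {
        "apache-2": "apache-2.0",
        "apache2.0": "apache-2.0",
        "cc-by-sa4.0": "cc-by-sa-4.0",
    }
    return aliases.get(s, s)

assert normalize_license("Apache 2.0") == "apache-2.0"
```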

1.14.16

Ci

* ci: remove positional argument (#1191)

remove positional argument ([`b75cd29`](https://github.com/embeddings-benchmark/mteb/commit/b75cd299f724ef78a2b5951f140b509169f1c784))

Documentation

* docs: Add xhluca to contributor list (#1196)

Add Xing Han Lu to contributor list

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`660bd1c`](https://github.com/embeddings-benchmark/mteb/commit/660bd1cc858707cbb037d801c1c64729c7d17474))

* docs: Add affiliation of mrshu (#1199)

* Add affiliation details of mrshu to `docs/mteb/points.md`

Signed-off-by: mr.Shu <mr@shu.io> ([`75cabc9`](https://github.com/embeddings-benchmark/mteb/commit/75cabc9344c63ccf674689db7970bff17634e3d2))

* docs: Adding contributor details (#1195)

Adding contributor details

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`acd631a`](https://github.com/embeddings-benchmark/mteb/commit/acd631a972495fe0b410644fdac1d3eb84ccdb31))

* docs: Added points for ideation and coordination (#1194)

* fix: Added points for ideation and coordination

We have added 50 points for primary coordinators, 20 points for secondary coordinators and 20 points for ideation.

I additionally checked for users who have more than 10 points but for whom we do not have author information.

Will just ping you guys here to ensure that you get a chance to add the authorship information (akshita-sukhlecha, loicmagne, mrshu, crystina-z, thakur-nandan, xhluca)

If anyone notices any other authors who haven't yet been credited, please let me know!

* added secondary contributor ([`8b0834d`](https://github.com/embeddings-benchmark/mteb/commit/8b0834dc1c2480052faec786c9bcd3067e0e2e0a))

* docs: authorship-info for crystina-z (#1198) ([`08c1efe`](https://github.com/embeddings-benchmark/mteb/commit/08c1efe57387c429ddbec3d36bfa717f99879b8b))

* docs: Adding contributor details (#1184)

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`0fc93dc`](https://github.com/embeddings-benchmark/mteb/commit/0fc93dca1170866d2236bfee664b82e05f230b2d))

* docs: add reranker / cross encoder to README advanced usage (#1186)

* add reranker / cross encoder to README advanced usage

* use preferred way for task selection

* make script runnable ([`aa5479d`](https://github.com/embeddings-benchmark/mteb/commit/aa5479da71a40b545dd339d345101d3a02e688c3))
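The README snippet itself is not reproduced in this log; as a sketch of the two-stage retrieve-then-rerank flow it documents, something like the following (model names, the task, and the predictions path are placeholders, and `top_k`/`previous_results` follow the README's wording rather than a verified signature):

```python
import mteb
from sentence_transformers import CrossEncoder, SentenceTransformer

tasks = mteb.get_tasks(tasks=["NFCorpus"])

# Stage 1: dense retrieval, saving predictions so they can be reranked later.
dense = SentenceTransformer("all-MiniLM-L6-v2")
mteb.MTEB(tasks=tasks).run(dense, save_predictions=True, output_folder="results/stage1")

# Stage 2: rerank the saved candidates with a cross-encoder.
reranker = CrossEncoder("cross-encoder/ms-marco-TinyBERT-L-2-v2")
mteb.MTEB(tasks=tasks).run(
    reranker,
    top_k=5,  # rerank only the top-5 stage-1 candidates per query
    save_predictions=True,
    output_folder="results/stage2",
    previous_results="results/stage1/NFCorpus_default_predictions.json",
)
```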

* docs: Update contributors table (#1189)

Add contributor information ([`929733b`](https://github.com/embeddings-benchmark/mteb/commit/929733b4ea172a5d9deeb85ef59af71e5492b863))

Fix

* fix: Ensure STS Pearson and Spearman do not use the p-value, only the correlation (#1207)

Fixes #1206 ([`5aa401d`](https://github.com/embeddings-benchmark/mteb/commit/5aa401dcc7ec5bdf6ccbc9cfe1207267a08c4523))
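The underlying pitfall is easy to show in isolation: scipy returns a (correlation, p-value) pair, so using the result without indexing scores the wrong quantity. A minimal illustration (not the evaluator code itself):

```python
from scipy.stats import pearsonr, spearmanr

gold = [0.0, 1.0, 2.5, 4.0]
pred = [0.1, 0.9, 2.7, 3.8]

# Both calls return (correlation, p-value); only the first element
# is the similarity score that should be reported.
pearson, _p_value = pearsonr(gold, pred)
spearman, _p_value = spearmanr(gold, pred)
print(pearson, spearman)
```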

* fix: OpenAI BadRequestError by limiting input dimensions to 2048 elem… (#1203)

* fix: OpenAI BadRequestError by limiting input dimensions to 2048 elements (#1201)

Fix OpenAI BadRequestError by limiting input dimensions to 2048 elements (a batching sketch follows this entry)

- Ensure the 'sentences' list passed to the OpenAI API does not exceed 2048 elements
- Reference: OpenAI's Embedding API documentation on input limits

Co-authored-by: Ali Shiraee <ShiraeAbasfad.basf.net>

* fix ruff formatting

* Added minor test fixes to ensure reproducibility across systems

* Ensure that tmp.json is not created within repo when running tests

* format

* fixes path issues

* Rerun CI

---------

Co-authored-by: HSILA <a.shiraee@gmail.com>
Co-authored-by: Ali Shiraee <ShiraeAbasfad.basf.net> ([`ba562ce`](https://github.com/embeddings-benchmark/mteb/commit/ba562cef8a123f1b760d70b66ad6e1d959c7c3bc))
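A minimal sketch of the constraint this commit enforces, assuming the current `openai` client; `embed_all` is a hypothetical helper, and 2048 is the per-request input limit the commit references:

```python
from itertools import islice

from openai import OpenAI

MAX_INPUTS = 2048  # the embeddings endpoint rejects requests with more inputs

def batched(items, n):
    """Yield successive chunks of at most n items."""
    it = iter(items)
    while chunk := list(islice(it, n)):
        yield chunk

def embed_all(sentences, model="text-embedding-3-small"):
    """Embed any number of sentences by splitting them into API-sized batches."""
    client = OpenAI()
    embeddings = []
    for chunk in batched(sentences, MAX_INPUTS):
        response = client.embeddings.create(model=model, input=chunk)
        embeddings.extend(item.embedding for item in response.data)
    return embeddings
```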

Unknown

* Update points table ([`16b2220`](https://github.com/embeddings-benchmark/mteb/commit/16b222019c595aea7cdbbb04b506736851c6316b))

* Update points table ([`9733d06`](https://github.com/embeddings-benchmark/mteb/commit/9733d06764f71cddf3755993b49f6f1006609903))

* Update points table ([`a574c76`](https://github.com/embeddings-benchmark/mteb/commit/a574c76c53abebc65a1e3148afdd83ff6cd95d8b))

* Update points table ([`6b50759`](https://github.com/embeddings-benchmark/mteb/commit/6b50759ae6cb45e4b8a622ef386fbece91698e24))

* Update points table ([`b8fb4d3`](https://github.com/embeddings-benchmark/mteb/commit/b8fb4d31fc9b504bbb8cc489fabcaaddc4558f22))

* Update points table ([`1024b24`](https://github.com/embeddings-benchmark/mteb/commit/1024b24cffb46f1a0d98061aa6899a6cb285018f))

* Update points table ([`4c1b6b5`](https://github.com/embeddings-benchmark/mteb/commit/4c1b6b5ffb5ba149e668e2172df2d55b8355966c))

1.14.15

Fix

* fix: Add save prediction CLI (#1187)

* add save predictions cli option

* add how to save retrieval task predictions to README

* simplify example script

* add cli test

* Update README.md

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com>

---------

Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`826cdf5`](https://github.com/embeddings-benchmark/mteb/commit/826cdf513d233d8a71019bf75fef7f3f76991b5e))
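A hedged sketch of the Python side of what this PR documents (model and task are placeholders); the same behaviour is exposed on the command line through the new save-predictions option:

```python
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
tasks = mteb.get_tasks(tasks=["NFCorpus"])

evaluation = mteb.MTEB(tasks=tasks)
# save_predictions writes the per-query retrieval predictions next to the
# scores, so they can be inspected or fed into a reranking stage.
evaluation.run(model, save_predictions=True, output_folder="results")
```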

1.14.14

Fix

* fix: Remove test set from eval sets as test labels are unknown (#1190) ([`d375ff7`](https://github.com/embeddings-benchmark/mteb/commit/d375ff7b252309492c7f30f0706f4a4d9388d95c))
