mteb

Latest version: v1.19.9


1.19.9

Fix

* fix: swap touche2020 to maintain compatibility (1469)

swap touche2020 for parity ([`9b2aece`](https://github.com/embeddings-benchmark/mteb/commit/9b2aecebe00e17b9db02d4fd3182df92222d680d))

1.19.8

Fix

* fix: loading results saved before 1.11.0 (1460)

* small fix

* fix: fix ([`1b920ac`](https://github.com/embeddings-benchmark/mteb/commit/1b920ac06bb83eba9530c3ddd125e09fb146dc95))

Unknown

* WIP: Polishing up leaderboard UI (1461)

* fix: Removed column wrapping on the table, so that it remains readable

* Added disclaimer to figure

* fix: Added links to task info table, switched out license with metric ([`58c459b`](https://github.com/embeddings-benchmark/mteb/commit/58c459bcd3e1ee772624f723e86efb86e40db6cb))

1.19.7

Fix

* fix: Fix load external results with `None` mteb_version (1453)

* fix

* lint ([`14d7523`](https://github.com/embeddings-benchmark/mteb/commit/14d7523850edae97cda2a7264f357da29e0ac867))
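The fix above concerns results files written without a recorded `mteb_version`. As a minimal illustration (not mteb's actual code, and with the `1.11.0` cutoff assumed from the 1.19.8 entry), a loader can treat a missing version as older than any cutoff instead of crashing on the comparison:

```python
# Illustrative sketch, assuming a results file may carry mteb_version=None
# (as files written by very old releases do): report "incompatible" rather
# than raising when comparing against a minimum version.
def is_at_least(mteb_version, minimum="1.11.0"):
    if mteb_version is None:
        # No recorded version: treat as predating any cutoff.
        return False
    to_tuple = lambda s: tuple(int(part) for part in s.split("."))
    return to_tuple(mteb_version) >= to_tuple(minimum)
```

With this guard, `is_at_least(None)` is simply `False`, so external results with a null version load down the legacy path instead of erroring.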

1.19.6

Fix

* fix: publish (1452) ([`feb1ab7`](https://github.com/embeddings-benchmark/mteb/commit/feb1ab7652102696a4aa20a03dc98a7240274a20))

Unknown

* Fixed task loading (1451)

* Fixed task result loading from disk ([`039d010`](https://github.com/embeddings-benchmark/mteb/commit/039d01088f457297a3a1929ff713cc3d55050453))

* Fix: Made data parsing in the leaderboard figure more robust (1450)

Bugfixes with data parsing in main figure ([`4e86cea`](https://github.com/embeddings-benchmark/mteb/commit/4e86ceab8f11d5cacf38e5f959f846c962105e34))

1.19.5

Fix

* fix: update task metadata to allow for null (1448) ([`04ac3f2`](https://github.com/embeddings-benchmark/mteb/commit/04ac3f21139db2ea50fdef4d91c345f61f229d44))

* fix: count unique texts and detect data leaks when calculating metrics (1438)

* add more stats

* update statistics ([`dd5d226`](https://github.com/embeddings-benchmark/mteb/commit/dd5d226f6a377fbf3f98f714323921539a418d83))
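The kind of statistic PR 1438 describes can be sketched with a hypothetical helper (not mteb's implementation): count unique texts per split and the overlap, i.e. texts that "leak" from train into test:

```python
# Hypothetical helper (not mteb's actual code): count total and unique texts
# per split, plus the number of texts appearing in both train and test.
def split_text_stats(train_texts, test_texts):
    train_unique, test_unique = set(train_texts), set(test_texts)
    return {
        "num_train_texts": len(train_texts),
        "num_train_unique": len(train_unique),
        "num_test_texts": len(test_texts),
        "num_test_unique": len(test_unique),
        "num_leaked": len(train_unique & test_unique),
    }
```

For example, `split_text_stats(["a", "b", "b"], ["b", "c"])["num_leaked"]` is 1, since "b" appears in both splits.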

Unknown

* Update tasks table ([`f6a49fe`](https://github.com/embeddings-benchmark/mteb/commit/f6a49fef74724bed1a7e19d6b895324ed25cff13))

* Leaderboard: Fixed code benchmarks (1441)

* fixed code benchmarks

* fix: Made n_parameters formatting smarter and more robust

* fix: changed jina-embeddings-v3 number of parameters from 572K to 572M

* fix: Fixed use_instuctions typo in model overview

* fix: Fixed sentence-transformer compatibility switch

* Ran linting

* Added all languages, tasks, types and domains to options

* Removed resetting options when a new benchmark is selected

* All results now get displayed, but models that haven't been run on everything get NaN values in the table ([`3a1a470`](https://github.com/embeddings-benchmark/mteb/commit/3a1a470c8e0ad7b8bce61c7f73a501d6716fce5a))
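The `n_parameters` formatting fix in this entry (e.g. correcting 572K to 572M for jina-embeddings-v3) suggests rendering raw parameter counts as compact strings. A minimal sketch of such formatting, under assumed rounding rules rather than the leaderboard's exact ones:

```python
# Minimal sketch of human-readable parameter-count formatting
# (e.g. 572_000_000 -> "572M"); the real leaderboard logic may differ.
def format_n_parameters(n):
    if n is None:
        return "unknown"
    for threshold, suffix in ((10**9, "B"), (10**6, "M"), (10**3, "K")):
        if n >= threshold:
            value = n / threshold
            # One decimal for small leading values, none otherwise.
            return f"{value:.1f}{suffix}" if value < 10 else f"{value:.0f}{suffix}"
    return str(n)
```

Keeping the raw integer in the data and formatting only at display time avoids exactly the K-versus-M mix-up fixed here.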

* Leaderboard 2.0: added performance x n_parameters plot + more benchmark info (1437)

* Added elementary speed/performance plot

* Refactored table formatting code

* Bumped Gradio version

* Added more general info to benchmark description markdown block

* Adjusted margin and range on plot

* Made hover information easier to read on plot

* Made range scaling dynamic in plot

* Moved citation next to benchmark description

* Made titles in benchmark info bold ([`76c2112`](https://github.com/embeddings-benchmark/mteb/commit/76c21120f27b396fa1900fdf203c5079ad34b0d8))

1.19.4

Fix

* fix: Add missing benchmarks in benchmarks.py (1431)

Fixes 1423 ([`a240ea0`](https://github.com/embeddings-benchmark/mteb/commit/a240ea099aac446702a3f7167fd0921f6eb4e259))

* fix: Add Korean AutoRAGRetrieval (1388)

* feat: add AutoRAG Korean embedding retrieval benchmark

* fix: run linters (`ruff format .` left 716 files unchanged; `ruff check . --fix` passed all checks)

* fix: add metadata for AutoRAGRetrieval

* change link for markers_bm

* add AutoRAGRetrieval to `__init__.py` and update metadata

* add precise metadata

* update metadata: description and license

* delete descriptive_stats in AutoRAGRetrieval.py and run calculate_matadata_metrics.py ([`f79d9ba`](https://github.com/embeddings-benchmark/mteb/commit/f79d9ba06c3d7a69c155bc1287c91bba6f41fa62))

* fix: make samples_per_label a task attribute (1419)

make samples_per_label a task attr ([`7f1a1d3`](https://github.com/embeddings-benchmark/mteb/commit/7f1a1d33fdc515f39740d4f15b86b011280f1ee6))

Unknown

* Update tasks table ([`d069aba`](https://github.com/embeddings-benchmark/mteb/commit/d069aba1a597f4a4e033148243abd7df0f62bfb7))
