### Documentation
* docs: Add point for PR 948 (950)
* add point
* add point ([`34286f2`](https://github.com/embeddings-benchmark/mteb/commit/34286f2a36d8bf11c0bab1160d38c5cae3b95461))
### Fix
* fix: Compare Cluster and ClusterFast scores and speedup (892)
* first go at getting spearman corr for e5-base
* add back large
* small and large results
* v3 means downsampling by stratified subsampling + bootstrap to k=max_documents_per_cluster
* v3-1 means swapping values of max_documents_per_cluster and max_documents_to_embed
* v3-2 means increasing max_documents_per_cluster to 65536
* task-wise comparison
* use recommended syntax
* add back no-op changes
* add back no-op changes
* option c is now v2; remove all v3 variants; add back level 0 in results; add test significance
* paraphrase-multilingual-MiniLM-L12-v2 results
* lint script
* cluster without fast should not have levels
* spearman on significant rank
* add more small model results
* 2x max_documents_to_embed to 4096
* max_documents_to_embed=8192
* t
* Added plots
* format
* use 32k samples for bigger cluster datasets
* use 4% n_samples and update task metadata
* make lint
* tests passing
* make lint
* add paraphrase-multilingual-mpnet-base-v2 and e5-large-v2 results
* add e5_eng_base_v2,labse,mxbai_embed_large_v1,bge_base_en_v1.5
* move plot scripts to mmteb scripts repo
* replace use_dataset_as_is with max_document_to_embed and add description in docstrings
* lint
---------
Co-authored-by: Kenneth Enevoldsen <kennethcenevoldsen@gmail.com> ([`2bb7623`](https://github.com/embeddings-benchmark/mteb/commit/2bb76239368c497efb92d5ae09a914eedd44a66d))
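
The commits above experiment with downsampling strategies (stratified subsampling, bootstrap to `k=max_documents_per_cluster`, varying `max_documents_to_embed`) for the fast clustering task. As a rough illustration only — not mteb's actual implementation — a stratified subsample that caps the number of embedded documents while preserving label proportions could be sketched like this (the function name `stratified_downsample` is hypothetical; the parameter name `max_documents_to_embed` is taken from the commit messages):

```python
import random
from collections import defaultdict


def stratified_downsample(documents, labels, max_documents_to_embed, seed=42):
    """Subsample (document, label) pairs, preserving label proportions.

    Hypothetical sketch: proportional allocation per label, with at least
    one document kept per label so no cluster disappears entirely.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for doc, label in zip(documents, labels):
        by_label[label].append(doc)

    n_total = len(documents)
    sampled_docs, sampled_labels = [], []
    for label, docs in by_label.items():
        # Proportional share of the budget, but never drop a label entirely.
        k = max(1, round(len(docs) / n_total * max_documents_to_embed))
        chosen = rng.sample(docs, min(k, len(docs)))
        sampled_docs.extend(chosen)
        sampled_labels.extend([label] * len(chosen))
    return sampled_docs, sampled_labels
```

The PR then compares Cluster and ClusterFast scores per task (e.g. via Spearman rank correlation, per the commit messages) to verify that downsampling preserves model rankings while cutting runtime.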
### Unknown
* Update tasks table ([`54c5745`](https://github.com/embeddings-benchmark/mteb/commit/54c5745c3b2eb285a4cd1a10e06a516040eec6f1))
* Update points table ([`6aeaff4`](https://github.com/embeddings-benchmark/mteb/commit/6aeaff45a0d657f708677752cae0b96f0fb875a6))