### Fix
* fix: update task metadata to allow for null (1448) ([`04ac3f2`](https://github.com/embeddings-benchmark/mteb/commit/04ac3f21139db2ea50fdef4d91c345f61f229d44))
* fix: Count unique texts, data leaks in calculate metrics (1438)
* add more statistics
* update statistics ([`dd5d226`](https://github.com/embeddings-benchmark/mteb/commit/dd5d226f6a377fbf3f98f714323921539a418d83))
### Unknown
* Update tasks table ([`f6a49fe`](https://github.com/embeddings-benchmark/mteb/commit/f6a49fef74724bed1a7e19d6b895324ed25cff13))
* Leaderboard: Fixed code benchmarks (1441)
* fixed code benchmarks
* fix: Made n_parameters formatting smarter and more robust
* fix: changed jina-embeddings-v3 number of parameters from 572K to 572M
* fix: Fixed use_instuctions typo in model overview
* fix: Fixed sentence-transformer compatibility switch
* Ran linting
* Added all languages, tasks, types and domains to options
* Removed resetting options when a new benchmark is selected
* All results now get displayed, but models that haven't been run on everything get nan values in the table ([`3a1a470`](https://github.com/embeddings-benchmark/mteb/commit/3a1a470c8e0ad7b8bce61c7f73a501d6716fce5a))
* Leaderboard 2.0: added performance x n_parameters plot + more benchmark info (1437)
* Added elementary speed/performance plot
* Refactored table formatting code
* Bumped Gradio version
* Added more general info to benchmark description markdown block
* Adjusted margin and range on plot
* Made hover information easier to read on plot
* Made range scaling dynamic in plot
* Moved citation next to benchmark description
* Made titles in benchmark info bold ([`76c2112`](https://github.com/embeddings-benchmark/mteb/commit/76c21120f27b396fa1900fdf203c5079ad34b0d8))