| Benchmark | Pairs/s | Bandwidth | Absolute Error | Relative Error |
| :--- | ---: | ---: | ---: | ---: |
| `avx512_f32_js_1536d` | 1.127 M/s | 13.84 G/s | 0.001 | 345u |
| `avx512_f16_js_1536d` | 2.139 M/s | 13.14 G/s | 0.070 | 0.020 |
| `avx2_f16_js_1536d` | 0.547 M/s | 3.36 G/s | 0.011 | 0.003 |
Of course, the results will vary with the vector size. I generally use 1536 dimensions, matching the size of OpenAI Ada embeddings, a de-facto standard in NLP workloads. The Jensen-Shannon divergence, however, is used broadly in other domains of statistics, bioinformatics, and cheminformatics, so I'm adding it as a new out-of-the-box supported metric in [USearch](https://github.com/unum-cloud/usearch) today 🥳
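For reference, here is a minimal NumPy sketch of the metric itself, so the benchmark names above are easier to interpret. The Jensen-Shannon divergence is the symmetrized Kullback-Leibler divergence of two distributions against their mixture; the `eps` guard below is my own addition to avoid `log(0)`, not part of any library API:

```python
import numpy as np

def jensen_shannon(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """JSD(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), where M = (P + Q) / 2."""
    # Normalize the inputs into probability distributions
    p = p / p.sum()
    q = q / q.sum()
    m = (p + q) / 2
    # The `eps` offset keeps the logarithm finite for zero-probability entries
    kl_pm = np.sum(p * np.log((p + eps) / (m + eps)))
    kl_qm = np.sum(q * np.log((q + eps) / (m + eps)))
    return 0.5 * (kl_pm + kl_qm)
```

Identical distributions yield a divergence of zero, and fully disjoint ones yield `log(2)` — the SIMD kernels in the table compute the same quantity, just with hardware `log2` approximations and half-precision arithmetic.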
This further accelerates approximate k-Nearest Neighbors search and the [clustering of billions of different protein sequences](https://github.com/unum-cloud/usearch#clustering) without alignment procedures. [Expect one more "Less Slow" post soon!](https://ashvardanian.com/tags/less-slow/) 🤗