## What's new?
### Improved [feature extraction pipeline](https://huggingface.co/docs/transformers.js/api/pipelines#pipelinesfeatureextractionpipeline-codepipelinecode) for Embeddings
You can now perform feature extraction with models other than sentence-transformers models! All you need to do is target a repo (and/or revision) that was exported with `--task default`. Also be sure to use the quantization that suits your use case!
**Example**: Run feature extraction with bert-base-uncased (without pooling/normalization).
```js
import { pipeline } from '@xenova/transformers';

let extractor = await pipeline('feature-extraction', 'Xenova/bert-base-uncased', { revision: 'default' });
let result = await extractor('This is a simple test.');
console.log(result);
// Tensor {
//   type: 'float32',
//   data: Float32Array [0.05939924716949463, 0.021655935794115067, ...],
//   dims: [1, 8, 768]
// }
```
**Example**: Run feature extraction with bert-base-uncased (with pooling/normalization).
```js
import { pipeline } from '@xenova/transformers';

let extractor = await pipeline('feature-extraction', 'Xenova/bert-base-uncased', { revision: 'default' });
let result = await extractor('This is a simple test.', { pooling: 'mean', normalize: true });
console.log(result);
// Tensor {
//   type: 'float32',
//   data: Float32Array [0.03373778983950615, -0.010106077417731285, ...],
//   dims: [1, 768]
// }
```
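To see what the `{ pooling: 'mean', normalize: true }` options compute, here is a minimal sketch of mean pooling followed by L2 normalization, using a toy 2-token, 3-dimensional example instead of a real model's `[1, 8, 768]` output:

```javascript
// Toy per-token embeddings (2 tokens, 3 dimensions).
const tokenEmbeddings = [
  [1, 2, 3],
  [3, 2, 1],
];

// Mean pooling: average each dimension across tokens,
// collapsing [tokens, dims] into a single [dims] vector.
const dims = tokenEmbeddings[0].length;
const pooled = Array.from({ length: dims }, (_, d) =>
  tokenEmbeddings.reduce((sum, tok) => sum + tok[d], 0) / tokenEmbeddings.length
);

// L2 normalization: scale the pooled vector to unit length.
const norm = Math.hypot(...pooled);
const embedding = pooled.map((x) => x / norm);

console.log(embedding); // ≈ [0.577, 0.577, 0.577]
```

Because the result is unit-length, comparing two such embeddings with a dot product gives their cosine similarity directly.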
**Example**: Calculating embeddings with sentence-transformers models.
```js
import { pipeline } from '@xenova/transformers';

let extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');
let result = await extractor('This is a simple test.', { pooling: 'mean', normalize: true });
console.log(result);
// Tensor {
//   type: 'float32',
//   data: Float32Array [0.09094982594251633, -0.014774246141314507, ...],
//   dims: [1, 384]
// }
```
This also means you can do things like semantic search directly in JavaScript/TypeScript! Check out [the Pinecone docs](https://docs.pinecone.io/docs/semantic-search-text-typescript) for an example app that uses Transformers.js!
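The scoring step of such a semantic search can be sketched in a few lines. The embeddings below are hardcoded toy vectors for illustration; in a real app, the query and document embeddings would each come from the feature-extraction pipeline with `{ pooling: 'mean', normalize: true }`:

```javascript
// Since the embeddings are L2-normalized, cosine similarity
// reduces to a plain dot product.
const dot = (a, b) => a.reduce((sum, x, i) => sum + x * b[i], 0);

// Toy query and document embeddings (3-dimensional for illustration).
const queryEmbedding = [0.6, 0.8, 0.0];
const documents = [
  { text: 'doc A', embedding: [0.6, 0.8, 0.0] },
  { text: 'doc B', embedding: [0.0, 0.6, 0.8] },
  { text: 'doc C', embedding: [0.8, 0.6, 0.0] },
];

// Rank documents by similarity to the query, highest first.
const ranked = documents
  .map((doc) => ({ ...doc, score: dot(queryEmbedding, doc.embedding) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked.map((d) => `${d.text}: ${d.score.toFixed(2)}`));
// → [ 'doc A: 1.00', 'doc C: 0.96', 'doc B: 0.48' ]
```

For larger corpora you would precompute and store the document embeddings (e.g. in a vector database) rather than scoring every document on each query.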
## Over 100 Transformers.js models on the hub!
We now have 109 models to choose from! Check them out at https://huggingface.co/models?other=transformers.js! If you'd like to contribute models (exported with Optimum), you can tag them with `library_name: "transformers.js"`! Let's make ML more web-friendly!
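Tagging is done in the model card's YAML front matter at the top of the repo's `README.md`; a minimal sketch (the rest of the model card follows below the closing `---`):

```yaml
---
library_name: "transformers.js"
---
```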
## Misc
- Fixed various quantization/exporting issues
**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.0.2...2.1.0