Transformers-js-py

Latest version: v0.19.4


3.0.1

What's new?

* Fix the Document Question Answering pipeline in https://github.com/huggingface/transformers.js/pull/987 (see the usage sketch after this list). Thanks to martinsomm for reporting!
* Next.js 15 ([code](https://github.com/huggingface/transformers.js-examples/tree/main/next-server); [demo](https://huggingface.co/spaces/webml-community/next-server-template)) and SvelteKit 5 ([code](https://github.com/huggingface/transformers.js-examples/tree/main/sveltekit); [demo](https://huggingface.co/spaces/webml-community/sveltekit-server-template)) server-side templates
* Minor documentation fixes
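
For reference, a minimal sketch of using the Document Question Answering pipeline that this release fixes. The model name, image URL, and expected answer below are illustrative assumptions (borrowed from the usual documentation example), not part of these release notes:

```js
import { pipeline } from "@huggingface/transformers";

// Create a document question answering pipeline
const qa = await pipeline("document-question-answering", "Xenova/donut-base-finetuned-docvqa");

// Ask a question about an image of a document
const image = "https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/invoice.png";
const question = "What is the invoice number?";
const output = await qa(image, question);
// e.g. [{ answer: 'us-001' }]
```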

**Full Changelog**: https://github.com/huggingface/transformers.js/compare/3.0.0...3.0.1

3.0.0

Transformers.js v3: WebGPU Support, New Models & Tasks, New Quantizations, Deno & Bun Compatibility, and More…

![thumbnail](https://github.com/user-attachments/assets/e367c685-5f92-42a3-9a20-d3d99375c4e1)


After more than a year of development, we're excited to announce the release of 🤗 Transformers.js v3!

You can get started by installing Transformers.js v3 from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:

```bash
npm i @huggingface/transformers
```


Then, import the library with:

```js
import { pipeline } from "@huggingface/transformers";
```


or, via a CDN:

```js
import { pipeline } from "https://cdn.jsdelivr.net/npm/@huggingface/transformers@3.0.0";
```


For more information, check out the [documentation](https://hf.co/docs/transformers.js).

⚡ WebGPU support (up to 100x faster than WASM!)

WebGPU is a new web standard for accelerated graphics and compute. The [API](https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API) enables web developers to use the underlying system's GPU to carry out high-performance computations directly in the browser. WebGPU is the successor to [WebGL](https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API) and provides significantly better performance, because it allows for more direct interaction with modern GPUs. Lastly, it supports general-purpose GPU computations, which makes it just perfect for machine learning!

> [!WARNING]
> As of October 2024, global WebGPU support is around 70% (according to [caniuse.com](https://caniuse.com/webgpu)), meaning some users may not be able to use the API.
>
> If the following demos do not work in your browser, you may need to enable it using a feature flag:
>
> - Firefox: with the `dom.webgpu.enabled` flag (see [here](https://developer.mozilla.org/en-US/docs/Mozilla/Firefox/Experimental_features#:~:text=tested%20by%20Firefox.-,WebGPU%20API,-The%20WebGPU%20API)).
> - Safari: with the `WebGPU` feature flag (see [here](https://webkit.org/blog/14879/webgpu-now-available-for-testing-in-safari-technology-preview/)).
> - Older Chromium browsers (on Windows, macOS, Linux): with the `enable-unsafe-webgpu` flag (see [here](https://developer.chrome.com/docs/web-platform/webgpu/troubleshooting-tips)).

Usage in Transformers.js v3

Thanks to our collaboration with [ONNX Runtime Web](https://www.npmjs.com/package/onnxruntime-web), enabling WebGPU acceleration is as simple as setting `device: 'webgpu'` when loading a model. Let's see some examples!

**Example:** Compute text embeddings on WebGPU ([demo](https://v2.scrimba.com/s06a2smeej))

```js
import { pipeline } from "@huggingface/transformers";

// Create a feature-extraction pipeline
const extractor = await pipeline(
  "feature-extraction",
  "mixedbread-ai/mxbai-embed-xsmall-v1",
  { device: "webgpu" },
);

// Compute embeddings
const texts = ["Hello world!", "This is an example sentence."];
const embeddings = await extractor(texts, { pooling: "mean", normalize: true });
console.log(embeddings.tolist());
// [ ... ]
```
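
Since WebGPU isn't available in every browser yet (see the warning above), you may want to feature-detect it and fall back to WASM. A minimal sketch of that idea; the `navigator.gpu` check and the `'wasm'` fallback are our own suggestion, not part of the release notes:

```js
import { pipeline } from "@huggingface/transformers";

// Prefer WebGPU when the browser exposes it; otherwise fall back to WASM
const device = navigator.gpu ? "webgpu" : "wasm";

const extractor = await pipeline(
  "feature-extraction",
  "mixedbread-ai/mxbai-embed-xsmall-v1",
  { device },
);
```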

2.17.2

🚀 What's new?
* Add support for MobileViTv2 in https://github.com/xenova/transformers.js/pull/721
```js
import { pipeline } from '@xenova/transformers';

// Create an image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/mobilevitv2-1.0-imagenet1k-256', {
  quantized: false,
});

// Classify an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const output = await classifier(url);
// [{ label: 'tiger, Panthera tigris', score: 0.6491137742996216 }]
```

See [here](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&other=mobilevitv2&sort=trending) for the full list of supported models.

* Add support for FastViT in https://github.com/xenova/transformers.js/pull/749

```js
import { pipeline } from '@xenova/transformers';

// Create an image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/fastvit_t12.apple_in1k', {
  quantized: false,
});

// Classify an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const output = await classifier(url, { topk: 5 });
// [
//   { label: 'tiger, Panthera tigris', score: 0.6649345755577087 },
//   { label: 'tiger cat', score: 0.12454754114151001 },
//   { label: 'lynx, catamount', score: 0.0010689536575227976 },
//   { label: 'dhole, Cuon alpinus', score: 0.0010422508930787444 },
//   { label: 'silky terrier, Sydney silky', score: 0.0009548701345920563 }
// ]
```

See [here](https://huggingface.co/models?pipeline_tag=image-classification&library=transformers.js&other=fastvit&sort=trending) for the full list of supported models.

* Optimize FFT in https://github.com/xenova/transformers.js/pull/766
* Auto rotate image by KTibow in https://github.com/xenova/transformers.js/pull/737
* Support reading data from blob URI by hans00 in https://github.com/xenova/transformers.js/pull/645 (see the sketch after this list)
* Add sequence post processor in https://github.com/xenova/transformers.js/pull/771
* Add model file name by NawarA in https://github.com/xenova/transformers.js/pull/594
* Update pipelines.js to allow for `token_embeddings` as well by NikhilVerma in https://github.com/xenova/transformers.js/pull/770
* Remove old import from `stream/web` for `ReadableStream` in https://github.com/xenova/transformers.js/pull/752
* Update tokenizer playground by xenova in https://github.com/xenova/transformers.js/pull/717
* Use ungated version of mistral tokenizer by xenova in https://github.com/xenova/transformers.js/pull/718
* docs: update vanilla-js.md by eltociear in https://github.com/xenova/transformers.js/pull/738
* Fix CI in https://github.com/xenova/transformers.js/pull/768
* Update Next.js demos to 14.2.3 in https://github.com/xenova/transformers.js/pull/772
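
For example, blob URI support means an image held in memory can be passed to a pipeline by wrapping it in an object URL first. A rough sketch; the model, image URL, and `fetch`/`URL.createObjectURL` plumbing are illustrative assumptions rather than code from this release:

```js
import { pipeline } from '@xenova/transformers';

// Create an image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/mobilevitv2-1.0-imagenet1k-256');

// Fetch an image, wrap it in a blob URI, and pass that URI to the pipeline
const response = await fetch('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg');
const blobUrl = URL.createObjectURL(await response.blob());
const output = await classifier(blobUrl);
console.log(output);

// Release the blob URI when done
URL.revokeObjectURL(blobUrl);
```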

🤗 New contributors
* eltociear made their first contribution in https://github.com/xenova/transformers.js/pull/738
* KTibow made their first contribution in https://github.com/xenova/transformers.js/pull/737
* NawarA made their first contribution in https://github.com/xenova/transformers.js/pull/594
* NikhilVerma made their first contribution in https://github.com/xenova/transformers.js/pull/770

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.17.1...2.17.2

2.17.1

What's new?
* Add `ignore_merges` option to BPE tokenizers in https://github.com/xenova/transformers.js/pull/716


**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.17.0...2.17.1

2.17.0

What's new?


💬 Improved `text-generation` pipeline for conversational models

This version adds support for passing an array of chat messages (with "role" and "content" properties) to the `text-generation` pipeline ([PR](https://github.com/xenova/transformers.js/pull/658)). Check out the list of supported models [here](https://huggingface.co/models?pipeline_tag=text-generation&library=transformers.js&other=conversational&sort=downloads).

**Example:** Chat with `Xenova/Qwen1.5-0.5B-Chat`.

```js
import { pipeline } from '@xenova/transformers';

// Create text-generation pipeline
const generator = await pipeline('text-generation', 'Xenova/Qwen1.5-0.5B-Chat');

// Define the list of messages
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Tell me a funny joke.' },
];

// Generate text
const output = await generator(messages, {
  max_new_tokens: 128,
  do_sample: false,
});
console.log(output[0].generated_text);
// [
//   { role: 'system', content: 'You are a helpful assistant.' },
//   { role: 'user', content: 'Tell me a funny joke.' },
//   { role: 'assistant', content: "Sure, here's one:\n\nWhy was the math book sad?\n\nBecause it had too many problems.\n\nI hope you found that joke amusing! Do you have any other questions or topics you'd like to discuss?" },
// ]
```


We also added the `return_full_text` parameter: if you set `return_full_text: false`, only the newly generated tokens are returned. This only applies when passing a raw text prompt to the pipeline, as in the sketch below.
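
A minimal sketch of that behaviour; the prompt and generation settings are illustrative assumptions:

```js
import { pipeline } from '@xenova/transformers';

// Create text-generation pipeline
const generator = await pipeline('text-generation', 'Xenova/Qwen1.5-0.5B-Chat');

// With a raw text prompt and return_full_text: false, the prompt is not echoed back
const output = await generator('Once upon a time,', {
  max_new_tokens: 32,
  return_full_text: false,
});
console.log(output[0].generated_text);
// e.g. " there was a ..." (newly generated tokens only)
```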

🔢 Binary embedding quantization support

Transformers.js v2.17 adds two new parameters to the `feature-extraction` pipeline ("quantize" and "precision"), enabling you to generate binary embeddings. These can be used with certain embedding models to shrink the size of document embeddings for retrieval, reducing index size and memory usage while speeding up retrieval. Surprisingly, you can still achieve up to **~95%** of the original performance, at **32x** storage savings and up to **32x** faster retrieval! 🤯 Thanks to jonathanpv for this addition in https://github.com/xenova/transformers.js/pull/691!

```js
import { pipeline } from '@xenova/transformers';

// Create feature-extraction pipeline
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2');

// Compute binary embeddings
const output = await extractor('This is a simple test.', { pooling: 'mean', quantize: true, precision: 'binary' });
// Tensor {
//   type: 'int8',
//   data: Int8Array [49, 108, 24, ...],
//   dims: [1, 48]
// }
```

As you can see, this produces a **32x smaller** output tensor (a 4x reduction in data type with Float32Array → Int8Array, as well as an 8x reduction in dimensionality from 384 → 48). For more information, check out [this PR](https://github.com/UKPLab/sentence-transformers/pull/2549) in sentence-transformers, which inspired this update!
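
Because each `Int8Array` element packs 8 binary dimensions, two binary embeddings can be compared with a Hamming distance (fewer differing bits means more similar). A sketch of such a comparison, continuing from the example above; the `hammingDistance` helper and the second example sentence are our own additions for illustration:

```js
// Count differing bits between two binary embeddings (e.g. `output.data` from above)
function hammingDistance(a, b) {
  let distance = 0;
  for (let i = 0; i < a.length; ++i) {
    let xor = (a[i] ^ b[i]) & 0xff; // each byte packs 8 embedding dimensions
    while (xor) {
      distance += xor & 1;
      xor >>= 1;
    }
  }
  return distance;
}

const other = await extractor('This is another simple test.', { pooling: 'mean', quantize: true, precision: 'binary' });
console.log(hammingDistance(output.data, other.data)); // small distance = similar sentences
```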

🛠️ Misc. improvements

* Faster dot product by pulsejet in https://github.com/xenova/transformers.js/pull/667
* Update dependencies in https://github.com/xenova/transformers.js/pull/661, https://github.com/xenova/transformers.js/pull/665, https://github.com/xenova/transformers.js/pull/702, and https://github.com/xenova/transformers.js/pull/704.

🤗 New contributors
* pulsejet made their first contribution in https://github.com/xenova/transformers.js/pull/667
* jonathanpv made their first contribution in https://github.com/xenova/transformers.js/pull/691

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.16.1...2.17.0

2.16.1

What's new?
* Add support for the `image-feature-extraction` pipeline in https://github.com/xenova/transformers.js/pull/650.

**Example:** Perform image feature extraction with `Xenova/vit-base-patch16-224-in21k`.
```javascript
import { pipeline } from '@xenova/transformers';

const image_feature_extractor = await pipeline('image-feature-extraction', 'Xenova/vit-base-patch16-224-in21k');
const url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png';
const features = await image_feature_extractor(url);
// Tensor {
//   dims: [ 1, 197, 768 ],
//   type: 'float32',
//   data: Float32Array(151296) [ ... ],
//   size: 151296
// }
```


**Example:** Compute image embeddings with `Xenova/clip-vit-base-patch32`.
```javascript
import { pipeline } from '@xenova/transformers';

const image_feature_extractor = await pipeline('image-feature-extraction', 'Xenova/clip-vit-base-patch32');
const url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/cats.png';
const features = await image_feature_extractor(url);
// Tensor {
//   dims: [ 1, 512 ],
//   type: 'float32',
//   data: Float32Array(512) [ ... ],
//   size: 512
// }
```


* Fix channel format when padding non-square images for certain models in https://github.com/xenova/transformers.js/pull/655. This means you can now perform super-resolution for non-square images with [APISR](https://github.com/Kiteretsu77/APISR) models:

**Example:** Upscale an image with `Xenova/4x_APISR_GRL_GAN_generator-onnx`.
```js
import { pipeline } from '@xenova/transformers';

// Create image-to-image pipeline
const upscaler = await pipeline('image-to-image', 'Xenova/4x_APISR_GRL_GAN_generator-onnx', {
  quantized: false,
});

// Upscale an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/anime.png';
const output = await upscaler(url);
// RawImage {
//   data: Uint8Array(16588800) [ ... ],
//   width: 2560,
//   height: 1920,
//   channels: 3
// }

// (Optional) Save the upscaled image
output.save('upscaled.png');
```


<details>
<summary>See example output</summary>

Input image:
![image](https://github.com/xenova/transformers.js/assets/26504141/b5a0bed5-6348-4c71-8dd8-886a48f4d8fa)

Output image:
![image](https://github.com/xenova/transformers.js/assets/26504141/4d69e6d8-4c02-433c-970a-96bf48c41368)
</details>



* Update tokenizer `apply_chat_template` functionality in https://github.com/xenova/transformers.js/pull/647. This PR added functionality to support the new [C4AI Command-R tokenizer](https://huggingface.co/CohereForAI/c4ai-command-r-v01).

<details>
<summary>See example tool usage</summary>

```js
import { AutoTokenizer } from "@xenova/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/c4ai-command-r-v01-tokenizer");

// Define conversation input:
const conversation = [
  { role: "user", content: "Whats the biggest penguin in the world?" },
];

// Define tools available for the model to use:
const tools = [
  {
    name: "internet_search",
    description: "Returns a list of relevant document snippets for a textual query retrieved from the internet",
    parameter_definitions: {
      query: {
        description: "Query to search the internet with",
        type: "str",
        required: true,
      },
    },
  },
  {
    name: "directly_answer",
    description: "Calls a standard (un-augmented) AI chatbot to generate a response given the conversation history",
    parameter_definitions: {},
  },
];

// Render the tool use prompt as a string:
const tool_use_prompt = tokenizer.apply_chat_template(
  conversation,
  {
    chat_template: "tool_use",
    tokenize: false,
    add_generation_prompt: true,
    tools,
  },
);
console.log(tool_use_prompt);
```

</details>

<details>
<summary>See example RAG usage</summary>

```js
import { AutoTokenizer } from "@xenova/transformers";

const tokenizer = await AutoTokenizer.from_pretrained("Xenova/c4ai-command-r-v01-tokenizer");

// Define conversation input:
const conversation = [
  { role: "user", content: "Whats the biggest penguin in the world?" },
];

// Define documents to ground on:
const documents = [
  { title: "Tall penguins", text: "Emperor penguins are the tallest growing up to 122 cm in height." },
  { title: "Penguin habitats", text: "Emperor penguins only live in Antarctica." },
];

// Render the RAG prompt as a string:
const grounded_generation_prompt = tokenizer.apply_chat_template(
  conversation,
  {
    chat_template: "rag",
    tokenize: false,
    add_generation_prompt: true,
    documents,
    citation_mode: "accurate", // or "fast"
  },
);
console.log(grounded_generation_prompt);
```

</details>

* Add support for EfficientNet in https://github.com/xenova/transformers.js/pull/639.

**Example:** Classify images with `chriamue/bird-species-classifier`
```js
import { pipeline } from '@xenova/transformers';

// Create image classification pipeline
const classifier = await pipeline('image-classification', 'chriamue/bird-species-classifier', {
  quantized: false, // Quantized model doesn't work
  revision: 'refs/pr/1', // Needed until the model author merges the PR
});

// Classify an image
const url = 'https://upload.wikimedia.org/wikipedia/commons/7/73/Short_tailed_Albatross1.jpg';
const output = await classifier(url);
console.log(output);
// [{ label: 'ALBATROSS', score: 0.9999023079872131 }]
```


**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.16.0...2.16.1
