
Yesterday, the [Qwen](https://huggingface.co/Qwen) team (Alibaba Group) released the Qwen1.5 series of chat models. As part of the release, they published several sub-2B-parameter models, including [Qwen/Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) and [Qwen/Qwen1.5-1.8B-Chat](https://huggingface.co/Qwen/Qwen1.5-1.8B-Chat), which both [demonstrate strong performance](https://qwenlm.github.io/blog/qwen1.5/) despite their small sizes. The best part? They can run in the browser with Transformers.js ([PR](https://github.com/xenova/transformers.js/pull/570))! 🚀 See [here](https://huggingface.co/models?library=transformers.js&other=qwen2) for the full list of supported models.

![demo-2x](https://github.com/xenova/transformers.js/assets/26504141/71a11a6e-1756-4375-bc3f-b11eb6b23485)

**Example:** Text generation with `Xenova/Qwen1.5-0.5B-Chat`.

```js
import { pipeline } from '@xenova/transformers';

// Create text-generation pipeline
const generator = await pipeline('text-generation', 'Xenova/Qwen1.5-0.5B-Chat');

// Define the prompt and list of messages
const prompt = 'Give me a short introduction to large language model.';
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: prompt },
];

// Apply chat template
const text = generator.tokenizer.apply_chat_template(messages, {
  tokenize: false,
  add_generation_prompt: true,
});

// Generate text
const output = await generator(text, {
  max_new_tokens: 128,
  do_sample: false,
});
console.log(output[0].generated_text);
// 'A large language model is a type of artificial intelligence system that can generate text based on the input provided by users, such as books, articles, or websites. It uses advanced algorithms and techniques to learn from vast amounts of data and improve its performance over time through machine learning and natural language processing (NLP). Large language models have become increasingly popular in recent years due to their ability to handle complex tasks such as generating human-like text quickly and accurately. They have also been used in various fields such as customer service chatbots, virtual assistants, and search engines for information retrieval purposes.'
```
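
Under the hood, `apply_chat_template` renders the message list into the prompt format the model was trained on; for Qwen1.5 that is the ChatML convention. The sketch below is a simplified, illustrative re-implementation of that formatting step, not the library's actual template engine (the real template ships in the model's tokenizer configuration):

```javascript
// Simplified sketch of a ChatML-style chat template, as used by Qwen1.5.
// Illustrative only: the real template is defined by the model's tokenizer.
function applyChatML(messages, addGenerationPrompt = true) {
  let text = '';
  for (const { role, content } of messages) {
    text += `<|im_start|>${role}\n${content}<|im_end|>\n`;
  }
  if (addGenerationPrompt) {
    // Open an assistant turn so the model knows to respond next.
    text += '<|im_start|>assistant\n';
  }
  return text;
}

const prompt = applyChatML([
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' },
]);
// prompt begins with the system turn and ends with the assistant header.
```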



🧍 MODNet for Portrait Image Matting

Next, we added support for [MODNet](https://github.com/ZHKKKe/MODNet), a small (but powerful) portrait image matting model ([PR](https://github.com/xenova/transformers.js/pull/569)). Thanks to cyio for the suggestion!

![animation](https://github.com/xenova/transformers.js/assets/26504141/a1c1840e-7373-4c4d-8f20-90009f1eb6a1)

**Example:** Perform portrait image matting with `Xenova/modnet`.
```js
import { AutoModel, AutoProcessor, RawImage } from '@xenova/transformers';

// Load model and processor
const model = await AutoModel.from_pretrained('Xenova/modnet', { quantized: false });
const processor = await AutoProcessor.from_pretrained('Xenova/modnet');

// Load image from URL
const url = 'https://images.pexels.com/photos/5965592/pexels-photo-5965592.jpeg?auto=compress&cs=tinysrgb&w=1024';
const image = await RawImage.fromURL(url);

// Pre-process image
const { pixel_values } = await processor(image);

// Predict alpha matte
const { output } = await model({ input: pixel_values });

// Save output mask
const mask = await RawImage.fromTensor(output[0].mul(255).to('uint8')).resize(image.width, image.height);
mask.save('mask.png');
```


| Input image | Output mask |
|--------|--------|
| ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/mhmDJgp5GgnbvQnUc2SVI.png) | ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/H1VBX6dS-xTpg14cl1Zxx.png) |
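
A common use for the predicted matte is compositing the portrait onto a new background with the standard alpha blend, `out = α·fg + (1 − α)·bg`. The helper below is a minimal sketch of that blend over raw RGBA pixel buffers; `compositeAlpha` and its buffer layout are assumptions for illustration, not part of the Transformers.js API:

```javascript
// Blend a foreground over a background using a single-channel alpha matte
// (0-255 per pixel, e.g. the MODNet output scaled to uint8).
// fg, bg: RGBA Uint8ClampedArray buffers of the same dimensions;
// alpha: one byte per pixel. Returns a new RGBA buffer.
function compositeAlpha(fg, bg, alpha) {
  const out = new Uint8ClampedArray(fg.length);
  for (let i = 0; i < alpha.length; ++i) {
    const a = alpha[i] / 255;
    for (let c = 0; c < 3; ++c) {
      out[4 * i + c] = a * fg[4 * i + c] + (1 - a) * bg[4 * i + c];
    }
    out[4 * i + 3] = 255; // fully opaque result
  }
  return out;
}
```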

🧠 New text embedding models

We also added support for several new text embedding models, including:
- [bge-m3](https://huggingface.co/Xenova/bge-m3) by [BAAI](https://huggingface.co/BAAI).
- [nomic-embed-text-v1](https://huggingface.co/nomic-ai/nomic-embed-text-v1) by [Nomic AI](https://huggingface.co/nomic-ai).
- [jina-embeddings-v2-base-de](https://huggingface.co/Xenova/jina-embeddings-v2-base-de) and [jina-embeddings-v2-base-zh](https://huggingface.co/Xenova/jina-embeddings-v2-base-zh) by [Jina AI](https://huggingface.co/jinaai).

Check out the links for example usage.
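
These models map text to dense vectors that are typically compared with cosine similarity (for normalized embeddings this reduces to a dot product). A minimal sketch of that comparison step, independent of any particular model:

```javascript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; ++i) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

cosineSimilarity([1, 0], [0, 1]); // → 0 (orthogonal vectors)
```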

🛠️ Other improvements
* Fix example links in documentation (https://github.com/xenova/transformers.js/pull/550).
* Improve unknown model warnings (https://github.com/xenova/transformers.js/pull/554).
* Update `jsdoc-to-markdown` dev dependency (https://github.com/xenova/transformers.js/pull/574).


**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.14.2...2.15.0
