# transformers-js-py

Latest version: v0.19.4


## 2.13.4

### What's new?
* Add support for cross-encoder models (+ fix token type ids) (#501)

**Example:** Information Retrieval w/ `Xenova/ms-marco-TinyBERT-L-2-v2`.
```js
import { AutoTokenizer, AutoModelForSequenceClassification } from '@xenova/transformers';

const model = await AutoModelForSequenceClassification.from_pretrained('Xenova/ms-marco-TinyBERT-L-2-v2');
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/ms-marco-TinyBERT-L-2-v2');

const features = tokenizer(
    ['How many people live in Berlin?', 'How many people live in Berlin?'],
    {
        text_pair: [
            'Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
            'New York City is famous for the Metropolitan Museum of Art.',
        ],
        padding: true,
        truncation: true,
    }
)

const { logits } = await model(features)
console.log(logits.data);
// quantized:   [ 7.210887908935547, -11.559350967407227 ]
// unquantized: [ 7.235750675201416, -11.562294006347656 ]
```



Check out the list of pre-converted models [here](https://huggingface.co/models?library=transformers.js&other=bert&sort=trending&search=ms-marco). We also put out a [demo](https://scrimba.com/scrim/ceGba4A4) for you to try out.
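For intuition, the logits above can be turned into a passage ranking with a few lines of plain JavaScript. This is a hedged sketch (not part of the release): the passages are the two `text_pair` entries and the scores are the quantized logits printed in the example.

```javascript
// Rank candidate passages by cross-encoder logit, best-first.
// Passages and logits are taken from the example output above.
const passages = [
  'Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
  'New York City is famous for the Metropolitan Museum of Art.',
];
const logits = [7.210887908935547, -11.559350967407227];

// Pair each passage with its score, then sort descending
const ranked = passages
  .map((text, i) => ({ text, score: logits[i] }))
  .sort((a, b) => b.score - a.score);

console.log(ranked[0].text); // the Berlin passage ranks first
```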

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.13.3...2.13.4

## 2.13.3

### What's new?
* Fix typo in JSDoc in https://github.com/xenova/transformers.js/pull/498
* Fix properties on pipelines in https://github.com/xenova/transformers.js/pull/500. Thanks to @wesbos for reporting the issue!


**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.13.2...2.13.3

## 2.13.2

### What's new?
This release is a follow-up to #485, with additional intellisense-focused improvements (see [PR](https://github.com/xenova/transformers.js/pull/496)).

![typing-demo-new](https://github.com/xenova/transformers.js/assets/26504141/abd886a0-f032-4a1b-9abc-3dc3c50993d2)

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.13.1...2.13.2

## 2.13.1

### What's new?

* Improve typing of the `pipeline` function in https://github.com/xenova/transformers.js/pull/485. Thanks to @wesbos for the suggestion!

![typing-demo](https://github.com/xenova/transformers.js/assets/26504141/4339a3f3-d669-482d-89ac-cbe5504b88ec)

This also means that when you hover over the class name, you'll get example code to help you out.
![typing-demo2](https://github.com/xenova/transformers.js/assets/26504141/442b50b4-6640-4246-85f1-4a5ece8e7d4e)



* Add `phi-1_5` model in https://github.com/xenova/transformers.js/pull/493.

<details>

<summary>See example code</summary>

```js
import { pipeline } from '@xenova/transformers';

// Create a text-generation pipeline
const generator = await pipeline('text-generation', 'Xenova/phi-1_5_dev');

// Construct prompt
const prompt = `\`\`\`py
import math
def print_prime(n):
    """
    Print all primes between 1 and n
    """`;

// Generate text
const result = await generator(prompt, {
    max_new_tokens: 100,
});
console.log(result[0].generated_text);
```


Results in:
```py
import math
def print_prime(n):
    """
    Print all primes between 1 and n
    """
    primes = []
    for num in range(2, n+1):
        is_prime = True
        for i in range(2, int(math.sqrt(num))+1):
            if num % i == 0:
                is_prime = False
                break
        if is_prime:
            primes.append(num)
    print(primes)

print_prime(20)
```

Running the code produces the correct result:

```
[2, 3, 5, 7, 11, 13, 17, 19]
```


</details>

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.13.0...2.13.1

## 2.13.0

### What's new?

🎄 7 new architectures!

This release adds support for many new multimodal architectures, bringing the total number of supported architectures to [80](https://huggingface.co/docs/transformers.js/index#models)! 🤯

1. [VITS](https://huggingface.co/docs/transformers/main/en/model_doc/vits) for multilingual text-to-speech across over 1000 languages! (https://github.com/xenova/transformers.js/pull/466)
```js
import { pipeline } from '@xenova/transformers';

// Create an English text-to-speech pipeline
const synthesizer = await pipeline('text-to-speech', 'Xenova/mms-tts-eng');

// Generate speech
const output = await synthesizer('I love transformers');
// {
//   audio: Float32Array(26112) [...],
//   sampling_rate: 16000
// }
```


https://github.com/xenova/transformers.js/assets/26504141/63c1a315-1ad6-44a2-9a2f-6689e2d9d14e

See [here](https://huggingface.co/models?library=transformers.js&other=vits&sort=trending) for the list of available models. To start, we've converted 12 of the [~1140](https://huggingface.co/models?other=mms,vits&sort=trending&search=facebook) models on the Hugging Face Hub. If we haven't added the one you wish to use, you can make it _web-ready_ using our [conversion script](https://huggingface.co/docs/transformers.js/custom_usage#convert-your-models-to-onnx).
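The pipeline returns raw `Float32Array` samples rather than a playable file. As a hedged sketch (standard 16-bit PCM WAV framing, not a transformers.js API), the samples could be serialized to a WAV file in Node like this; the sine wave stands in for the `audio` array above:

```javascript
import fs from 'node:fs';

// Encode Float32Array samples ([-1, 1]) as a 16-bit PCM mono WAV buffer.
function encodeWav(samples, sampleRate) {
  const buffer = Buffer.alloc(44 + samples.length * 2);
  buffer.write('RIFF', 0);
  buffer.writeUInt32LE(36 + samples.length * 2, 4); // remaining chunk size
  buffer.write('WAVE', 8);
  buffer.write('fmt ', 12);
  buffer.writeUInt32LE(16, 16);                     // fmt chunk size
  buffer.writeUInt16LE(1, 20);                      // audio format: PCM
  buffer.writeUInt16LE(1, 22);                      // channels: mono
  buffer.writeUInt32LE(sampleRate, 24);             // sample rate
  buffer.writeUInt32LE(sampleRate * 2, 28);         // byte rate
  buffer.writeUInt16LE(2, 32);                      // block align
  buffer.writeUInt16LE(16, 34);                     // bits per sample
  buffer.write('data', 36);
  buffer.writeUInt32LE(samples.length * 2, 40);
  for (let i = 0; i < samples.length; ++i) {
    const s = Math.max(-1, Math.min(1, samples[i])); // clamp before scaling
    buffer.writeInt16LE(Math.round(s * 32767), 44 + i * 2);
  }
  return buffer;
}

// Stand-in for `output.audio` / `output.sampling_rate`: one second of 440 Hz tone
const audio = new Float32Array(16000).map((_, i) => Math.sin(2 * Math.PI * 440 * i / 16000));
fs.writeFileSync('speech.wav', encodeWav(audio, 16000));
```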

2. [CLIPSeg](https://huggingface.co/docs/transformers/main/en/model_doc/clipseg) for zero-shot image segmentation. (https://github.com/xenova/transformers.js/pull/478)

```js
import { AutoTokenizer, AutoProcessor, CLIPSegForImageSegmentation, RawImage } from '@xenova/transformers';

// Load tokenizer, processor, and model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clipseg-rd64-refined');
const processor = await AutoProcessor.from_pretrained('Xenova/clipseg-rd64-refined');
const model = await CLIPSegForImageSegmentation.from_pretrained('Xenova/clipseg-rd64-refined');

// Run tokenization
const texts = ['a glass', 'something to fill', 'wood', 'a jar'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Read image and run processor
const image = await RawImage.read('https://github.com/timojl/clipseg/blob/master/example_image.jpg?raw=true');
const image_inputs = await processor(image);

// Run model with both text and pixel inputs
const { logits } = await model({ ...text_inputs, ...image_inputs });
// logits: Tensor {
//   dims: [4, 352, 352],
//   type: 'float32',
//   data: Float32Array(495616) [ ... ],
//   size: 495616
// }
```



You can visualize the predictions as follows:
```js
const preds = logits
    .unsqueeze_(1)
    .sigmoid_()
    .mul_(255)
    .round_()
    .to('uint8');

for (let i = 0; i < preds.dims[0]; ++i) {
    const img = RawImage.fromTensor(preds[i]);
    img.save(`prediction_${i}.png`);
}
```


| Original | `"a glass"` | `"something to fill"` | `"wood"` | `"a jar"` |
|--------|--------|--------|--------|--------|
| ![image](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/B4wAIseP3SokRd7Flu1Y9.png) | ![prediction_0](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/s3WBtlA9CyZmm9F5lrOG3.png) | ![prediction_1](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/v4_3JqhAZSfOg60v5x1C2.png) | ![prediction_2](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/MjZLENI9RMaMCGyk6G6V1.png) | ![prediction_3](https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/dIHO76NAPTMt9-677yNkg.png) |


See [here](https://huggingface.co/models?library=transformers.js&other=clipseg&sort=trending) for the list of available models.

3. [SegFormer](https://huggingface.co/docs/transformers/main/en/model_doc/segformer) for semantic segmentation and image classification. (https://github.com/xenova/transformers.js/pull/480)

```js
import { pipeline } from '@xenova/transformers';

// Create an image segmentation pipeline
const segmenter = await pipeline('image-segmentation', 'Xenova/segformer_b2_clothes');

// Segment an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/young-man-standing-and-leaning-on-car.jpg';
const output = await segmenter(url);
```


![image](https://github.com/xenova/transformers.js/assets/26504141/30c9a07f-d6c2-4107-b393-a4ba100c94d3)

<details>

<summary>See output</summary>

```js
[
  {
    score: null,
    label: 'Background',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Hair',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Upper-clothes',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Pants',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Left-shoe',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Right-shoe',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Face',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Left-leg',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Right-leg',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Left-arm',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  },
  {
    score: null,
    label: 'Right-arm',
    mask: RawImage {
      data: [Uint8ClampedArray],
      width: 970,
      height: 1455,
      channels: 1
    }
  }
]
```


</details>

See [here](https://huggingface.co/models?library=transformers.js&other=segformer&sort=trending) for the list of available models.
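Each entry in the output pairs a label with a single-channel binary mask, so simple statistics fall out directly. A hedged sketch (the `Uint8ClampedArray` below is a stand-in, not real segmenter output): computing what fraction of the image a given mask covers.

```javascript
// Given one { label, mask } entry from the segmenter output, compute the
// fraction of pixels the mask covers. Mask data holds 0 or 255 per pixel.
function maskCoverage(mask) {
  let on = 0;
  for (const v of mask.data) {
    if (v > 0) ++on;
  }
  return on / mask.data.length;
}

// Tiny stand-in mask: 2x2 image with two "on" pixels
const fake = { data: Uint8ClampedArray.from([0, 255, 255, 0]), width: 2, height: 2 };
console.log(maskCoverage(fake)); // 0.5
```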


4. [Table Transformer](https://huggingface.co/docs/transformers/main/en/model_doc/table-transformer) for table extraction from unstructured documents. (https://github.com/xenova/transformers.js/pull/477)

```js
import { pipeline } from '@xenova/transformers';

// Create an object detection pipeline
const detector = await pipeline('object-detection', 'Xenova/table-transformer-detection', { quantized: false });

// Detect tables in an image
const img = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/invoice-with-table.png';
const output = await detector(img);
// [{ score: 0.9967531561851501, label: 'table', box: { xmin: 52, ymin: 322, xmax: 546, ymax: 525 } }]
```



<details>
<summary>Show example output</summary>

![image](https://github.com/xenova/transformers.js/assets/26504141/6ca5eea0-928c-4c13-9ccf-16ed62108054)

</details>

See [here](https://huggingface.co/models?library=transformers.js&other=table-transformer&sort=trending) for the list of available models.
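The detector returns pixel-space bounding boxes, so downstream table extraction usually starts by thresholding on confidence and deriving crop dimensions. A hedged sketch using the detection printed above (the threshold of 0.9 is an illustrative choice, not part of the release):

```javascript
// Keep high-confidence table detections and derive pixel width/height
// from each bounding box. The detection is the example output above.
const detections = [
  { score: 0.9967531561851501, label: 'table', box: { xmin: 52, ymin: 322, xmax: 546, ymax: 525 } },
];

const tables = detections
  .filter(d => d.label === 'table' && d.score > 0.9)
  .map(d => ({
    ...d.box,
    width: d.box.xmax - d.box.xmin,   // 494 px
    height: d.box.ymax - d.box.ymin,  // 203 px
  }));

console.log(tables);
```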

5. [DiT](https://huggingface.co/docs/transformers/main/en/model_doc/dit) for document image classification. (https://github.com/xenova/transformers.js/pull/474)

```js
import { pipeline } from '@xenova/transformers';

// Create an image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/dit-base-finetuned-rvlcdip');

// Classify an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/coca_cola_advertisement.png';
const output = await classifier(url);
```
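
Image-classification pipelines resolve to an array of `{ label, score }` predictions. As a hedged sketch (the labels and scores below are stand-ins, not the model's actual output for this image), picking the top-k predictions is a sort-and-slice:

```javascript
// Stand-in predictions in the shape returned by an image-classification pipeline
const output = [
  { label: 'advertisement', score: 0.92 },
  { label: 'email', score: 0.03 },
  { label: 'invoice', score: 0.01 },
];

// Return the k highest-scoring predictions without mutating the input
const topK = (preds, k) => [...preds].sort((a, b) => b.score - a.score).slice(0, k);

console.log(topK(output, 2).map(p => p.label)); // [ 'advertisement', 'email' ]
```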

## 2.12.1

### What's new?
A patch for release 2.12.0, making `@huggingface/jinja` a dependency instead of a peer dependency. This also means `apply_chat_template` is now synchronous (and does not lazily load the module). In the future, we may want to add this functionality back, but for now it causes issues with lazy loading from a CDN.

![code](https://github.com/xenova/transformers.js/assets/26504141/9f145a8c-1dbf-4794-9afe-34b8e97ed8e0)

<details>

<summary>code</summary>

```js
import { AutoTokenizer } from "@xenova/transformers";

// Load tokenizer from the Hugging Face Hub
const tokenizer = await AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1");

// Define chat messages
const chat = [
    { role: "user", content: "Hello, how are you?" },
    { role: "assistant", content: "I'm doing great. How can I help you today?" },
    { role: "user", content: "I'd like to show off how chat templating works!" },
]

const text = tokenizer.apply_chat_template(chat, { tokenize: false });
// "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"

const input_ids = tokenizer.apply_chat_template(chat, { tokenize: true, return_tensor: false });
// [1, 733, 16289, 28793, 22557, 28725, 910, 460, 368, 28804, 733, 28748, 16289, 28793, ...]
```



</details>
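For intuition, the Mistral instruct format shown above can be reproduced by hand. This is an illustrative sketch of what the template expands to for this particular chat, not the library's implementation (which renders the Jinja template shipped with the tokenizer):

```javascript
// Hand-rolled version of the Mistral instruct chat format: user turns are
// wrapped in [INST] ... [/INST], assistant turns are closed with </s>.
function mistralChatTemplate(messages) {
  let text = "<s>";
  for (const message of messages) {
    if (message.role === "user") {
      text += `[INST] ${message.content} [/INST]`;
    } else {
      text += `${message.content}</s> `;
    }
  }
  return text;
}

const chat = [
  { role: "user", content: "Hello, how are you?" },
  { role: "assistant", content: "I'm doing great. How can I help you today?" },
  { role: "user", content: "I'd like to show off how chat templating works!" },
];

console.log(mistralChatTemplate(chat));
// "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```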



**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.12.0...2.12.1

