Transformers-js-py


2.12.0

What's new?

💬 Chat templates!

This release adds support for **chat templates**, a highly requested feature that enables users to convert conversations (represented as a list of chat objects) into a single tokenizable string in the format the model expects. As you may know, chat templates can vary greatly across model types, so it was important to design a system that: (1) supports complex chat templates; (2) is generalizable; and (3) is easy to use. So, how did we do it? 🤔

This is made possible with [`@huggingface/jinja`](https://www.npmjs.com/package/@huggingface/jinja), a minimalistic JavaScript implementation of the Jinja templating engine, which we created to align with how [transformers](https://github.com/huggingface/transformers) handles templating. Although it was originally designed for parsing and rendering ChatML templates, we decided to split the templating logic out into a separate (optional) library because it is useful in other types of applications. Special thanks to tlaceby for his amazing ["Guide to Interpreters"](https://github.com/tlaceby/guide-to-interpreters-series) series, which provided the basis for our implementation. 🤗
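Because `@huggingface/jinja` is published as a standalone package, it can also be used on its own, outside of Transformers.js. A minimal sketch, assuming the package's `Template` class and its `render` method as documented on npm:

```js
import { Template } from "@huggingface/jinja";

// Compile a tiny Jinja template and render it with some values.
const template = new Template("Hello, {{ name }}! You have {{ count }} new messages.");
const result = template.render({ name: "world", count: 3 });
console.log(result); // "Hello, world! You have 3 new messages."
```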

Anyway, let's take a look at an example:

```js
import { AutoTokenizer } from "@xenova/transformers";

// Load tokenizer from the Hugging Face Hub
const tokenizer = await AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1");

// Define chat messages
const chat = [
{ role: "user", content: "Hello, how are you?" },
{ role: "assistant", content: "I'm doing great. How can I help you today?" },
{ role: "user", content: "I'd like to show off how chat templating works!" },
]

const text = tokenizer.apply_chat_template(chat, { tokenize: false });
// "<s>[INST] Hello, how are you? [/INST]I'm doing great. How can I help you today?</s> [INST] I'd like to show off how chat templating works! [/INST]"
```

Notice how the entire chat is condensed into a single string. If you would instead like to return the tokenized version (i.e., a list of token IDs), you can use the following:

```js
const input_ids = tokenizer.apply_chat_template(chat, { tokenize: true, return_tensor: false });
// [1, 733, 16289, 28793, 22557, 28725, 910, 460, 368, 28804, 733, 28748, 16289, 28793, 28737, 28742, 28719, 2548, 1598, 28723, 1602, 541, 315, 1316, 368, 3154, 28804, 2, 28705, 733, 16289, 28793, 315, 28742, 28715, 737, 298, 1347, 805, 910, 10706, 5752, 1077, 3791, 28808, 733, 28748, 16289, 28793]
```
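In a typical generation setup, the templated string (or the token IDs) is what gets fed to a text-generation model. A rough sketch of that flow follows, reusing `tokenizer` and `chat` from the example above; the model id is a placeholder rather than a published export, and we assume `apply_chat_template` accepts an `add_generation_prompt` option mirroring the Python library (for Mistral's template, the prompt already ends with `[/INST]` either way):

```js
import { pipeline } from "@xenova/transformers";

// Placeholder model id; substitute any converted chat-capable text-generation export.
const generator = await pipeline("text-generation", "Xenova/placeholder-chat-model");

// Render the conversation to a prompt string, then generate a continuation.
const prompt = tokenizer.apply_chat_template(chat, { tokenize: false, add_generation_prompt: true });
const output = await generator(prompt, { max_new_tokens: 64 });
// [{ generated_text: "..." }]
```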

For more information about chat templates, check out the [transformers documentation](https://huggingface.co/docs/transformers/main/en/chat_templating).

🐛 Bug fixes
- Incorrect encoding/decoding of whitespace around special characters with Fast Llama tokenizers. These bugs will also soon be fixed in the transformers library. For backwards compatibility, if a tokenizer was exported with the legacy behaviour, it will continue to act the same way unless explicitly set otherwise; newer exports won't be affected. If you wish to override this default, either to keep the legacy behaviour or to upgrade to the fixed version, you can do so as follows:
```js
import { AutoTokenizer } from '@xenova/transformers';

// Use the default behaviour (specified in tokenizer_config.json, which in this case is `{legacy: false}`).
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/llama2-tokenizer');
const { input_ids } = tokenizer('<s>\n', { add_special_tokens: false, return_tensor: false });
console.log(input_ids); // [1, 13]

// Use the legacy behaviour (separate variables so the snippet runs as a single script)
const legacy_tokenizer = await AutoTokenizer.from_pretrained('Xenova/llama2-tokenizer', { legacy: true });
const { input_ids: legacy_input_ids } = legacy_tokenizer('<s>\n', { add_special_tokens: false, return_tensor: false });
console.log(legacy_input_ids); // [1, 29871, 13]
```


- Strip whitespace around special tokens for wav2vec tokenizers.

🔨 Improvements
- More comprehensive tokenizer test suite: including both static and dynamic tokenizer tests for encoding, decoding, and chat templates.

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.11.0...2.12.0

2.11.0

What's new?

🤯 8 new architectures!

This release adds support for a bunch of new model architectures, covering a wide range of use cases! In total, we now support [73](https://huggingface.co/docs/transformers.js/index#models) different model architectures!

1. [ViTMatte](https://huggingface.co/docs/transformers/main/en/model_doc/vitmatte) for image matting (https://github.com/xenova/transformers.js/pull/448). See [here](https://huggingface.co/models?library=transformers.js&other=vitmatte&sort=trending) for the list of available models.

**Example:** Image matting w/ `Xenova/vitmatte-small-distinctions-646`.
```js
import { AutoProcessor, VitMatteForImageMatting, RawImage } from '@xenova/transformers';

// Load processor and model
const processor = await AutoProcessor.from_pretrained('Xenova/vitmatte-small-distinctions-646');
const model = await VitMatteForImageMatting.from_pretrained('Xenova/vitmatte-small-distinctions-646');

// Load image and trimap
const image = await RawImage.fromURL('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/vitmatte_image.png');
const trimap = await RawImage.fromURL('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/vitmatte_trimap.png');

// Prepare image + trimap for the model
const inputs = await processor(image, trimap);

// Predict alpha matte
const { alphas } = await model(inputs);
// Tensor {
// dims: [ 1, 1, 640, 960 ],
// type: 'float32',
// size: 614400,
// data: Float32Array(614400) [ 0.9894027709960938, 0.9970508813858032, ... ]
// }
```

<details>

<summary>Visualization code</summary>

```js
import { Tensor, cat, RawImage } from '@xenova/transformers';

// Visualize predicted alpha matte
const imageTensor = new Tensor(
'uint8',
new Uint8Array(image.data),
[image.height, image.width, image.channels]
).transpose(2, 0, 1);

// Convert float (0-1) alpha matte to uint8 (0-255)
const alphaChannel = alphas
.squeeze(0)
.mul_(255)
.clamp_(0, 255)
.round_()
.to('uint8');

// Concatenate original image with predicted alpha
const imageData = cat([imageTensor, alphaChannel], 0);

// Save output image
const outputImage = RawImage.fromTensor(imageData);
outputImage.save('output.png');
```

</details>


Inputs:
| Image| Trimap |
|--------|--------|
| ![vitmatte_image](https://github.com/xenova/transformers.js/assets/26504141/7317539e-c9f6-4a61-9542-4578ea7b6292) | ![vitmatte_trimap](https://github.com/xenova/transformers.js/assets/26504141/663ef260-fe2d-4b23-83cf-8f9a9b7ee593) |

Outputs:
| Quantized | Unquantized |
|--------|--------|
| ![output_quantized](https://github.com/xenova/transformers.js/assets/26504141/00669063-1a7e-447d-947f-1e9e0beaa7c4) | ![output_unquantized](https://github.com/xenova/transformers.js/assets/26504141/437d8ccd-af82-4853-82c4-ae897ac112bf) |

2. [ESM](https://huggingface.co/docs/transformers/main/en/model_doc/esm) for protein sequence feature-extraction, masked language modelling, token classification, and zero-shot classification (https://github.com/xenova/transformers.js/pull/447). See [here](https://huggingface.co/models?library=transformers.js&other=esm&sort=trending) for the list of available models.

**Example:** Protein sequence classification w/ `Xenova/esm2_t6_8M_UR50D_sequence_classifier_v1`.
```js
import { pipeline } from '@xenova/transformers';

// Create text classification pipeline
const classifier = await pipeline('text-classification', 'Xenova/esm2_t6_8M_UR50D_sequence_classifier_v1');

// Suppose these are your new sequences that you want to classify
// Additional Family 0: Enzymes
const new_sequences_0 = [ 'ACGYLKTPKLADPPVLRGDSSVTKAICKPDPVLEK', 'GVALDECKALDYLPGKPLPMDGKVCQCGSKTPLRP', 'VLPGYTCGELDCKPGKPLPKCGADKTQVATPFLRG', 'TCGALVQYPSCADPPVLRGSDSSVKACKKLDPQDK', 'GALCEECKLCPGADYKPMDGDRLPAAATSKTRPVG', 'PAVDCKKALVYLPKPLPMDGKVCRGSKTPKTRPYG', 'VLGYTCGALDCKPGKPLPKCGADKTQVATPFLRGA', 'CGALVQYPSCADPPVLRGSDSSVKACKKLDPQDKT', 'ALCEECKLCPGADYKPMDGDRLPAAATSKTRPVGK', 'AVDCKKALVYLPKPLPMDGKVCRGSKTPKTRPYGR' ]

// Additional Family 1: Receptor Proteins
const new_sequences_1 = [ 'VGQRFYGGRQKNRHCELSPLPSACRGSVQGALYTD', 'KDQVLTVPTYACRCCPKMDSKGRVPSTLRVKSARS', 'PLAGVACGRGLDYRCPRKMVPGDLQVTPATQRPYG', 'CGVRLGYPGCADVPLRGRSSFAPRACMKKDPRVTR', 'RKGVAYLYECRKLRCRADYKPRGMDGRRLPKASTT', 'RPTGAVNCKQAKVYRGLPLPMMGKVPRVCRSRRPY', 'RLDGGYTCGQALDCKPGRKPPKMGCADLKSTVATP', 'LGTCRKLVRYPQCADPPVMGRSSFRPKACCRQDPV', 'RVGYAMCSPKLCSCRADYKPPMGDGDRLPKAATSK', 'QPKAVNCRKAMVYRPKPLPMDKGVPVCRSKRPRPY' ]

// Additional Family 2: Structural Proteins
const new_sequences_2 = [ 'VGKGFRYGSSQKRYLHCQKSALPPSCRRGKGQGSAT', 'KDPTVMTVGTYSCQCPKQDSRGSVQPTSRVKTSRSK', 'PLVGKACGRSSDYKCPGQMVSGGSKQTPASQRPSYD', 'CGKKLVGYPSSKADVPLQGRSSFSPKACKKDPQMTS', 'RKGVASLYCSSKLSCKAQYSKGMSDGRSPKASSTTS', 'RPKSAASCEQAKSYRSLSLPSMKGKVPSKCSRSKRP', 'RSDVSYTSCSQSKDCKPSKPPKMSGSKDSSTVATPS', 'LSTCSKKVAYPSSKADPPSSGRSSFSMKACKKQDPPV', 'RVGSASSEPKSSCSVQSYSKPSMSGDSSPKASSTSK', 'QPSASNCEKMSSYRPSLPSMSKGVPSSRSKSSPPYQ' ]

// Merge all sequences
const new_sequences = [...new_sequences_0, ...new_sequences_1, ...new_sequences_2];

// Get the predicted class for each sequence
const predictions = await classifier(new_sequences);

// Output the predicted class for each sequence
for (let i = 0; i < predictions.length; ++i) {
console.log(`Sequence: ${new_sequences[i]}, Predicted class: '${predictions[i].label}'`)
}
// Sequence: ACGYLKTPKLADPPVLRGDSSVTKAICKPDPVLEK, Predicted class: 'Enzymes'
// ... (truncated)
// Sequence: AVDCKKALVYLPKPLPMDGKVCRGSKTPKTRPYGR, Predicted class: 'Enzymes'
// Sequence: VGQRFYGGRQKNRHCELSPLPSACRGSVQGALYTD, Predicted class: 'Receptor Proteins'
// ... (truncated)
// Sequence: QPKAVNCRKAMVYRPKPLPMDKGVPVCRSKRPRPY, Predicted class: 'Receptor Proteins'
// Sequence: VGKGFRYGSSQKRYLHCQKSALPPSCRRGKGQGSAT, Predicted class: 'Structural Proteins'
// ... (truncated)
// Sequence: QPSASNCEKMSSYRPSLPSMSKGVPSSRSKSSPPYQ, Predicted class: 'Structural Proteins'
```
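The sequence classifier above uses just one of the supported heads; since ESM also supports masked language modelling, here is a fill-mask sketch. The checkpoint id is an assumption (substitute any ESM export with a masked-LM head), and ESM tokenizers use `<mask>` as the mask token:

```js
import { pipeline } from '@xenova/transformers';

// Create a fill-mask pipeline with an ESM-2 masked language model (assumed checkpoint id).
const unmasker = await pipeline('fill-mask', 'Xenova/esm2_t12_35M_UR50D');

// Predict the residue hidden behind the <mask> token.
const output = await unmasker('ACGYLKTPKLADPPVLRGDS<mask>VTKAICKPDPVLEK');
// [{ score: ..., token_str: '...', sequence: '...' }, ...]
```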

3. [Hubert](https://huggingface.co/docs/transformers/main/en/model_doc/hubert) for audio classification and automatic speech recognition (https://github.com/xenova/transformers.js/pull/449). See [here](https://huggingface.co/models?library=transformers.js&other=hubert&sort=trending) for the list of available models.


**Example:** Speech command recognition w/ `Xenova/hubert-base-superb-ks`.

```js
import { pipeline } from '@xenova/transformers';

// Create audio classification pipeline
const classifier = await pipeline('audio-classification', 'Xenova/hubert-base-superb-ks');

// Classify audio
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/speech-commands_down.wav';
const output = await classifier(url, { topk: 5 });
// [
//   ... (truncated)
```
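Hubert checkpoints fine-tuned for CTC can also be used with the `automatic-speech-recognition` pipeline. A minimal sketch is below; the checkpoint id is hypothetical (substitute any converted Hubert CTC export from the Hub):

```js
import { pipeline } from '@xenova/transformers';

// Create an automatic speech recognition pipeline (hypothetical converted checkpoint id).
const transcriber = await pipeline('automatic-speech-recognition', 'Xenova/hubert-large-ls960-ft');

// Transcribe audio from a URL.
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await transcriber(url);
// { text: '...' }
```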

2.10.1

What's new?

🐛 Bug fixes
* Fix zero-shot-object-detection when `{ percentage: true }` is set in https://github.com/xenova/transformers.js/pull/434. Thanks to tobiascornille for reporting the issue!

🛠️ Misc. improvements
* Documentation improvements and new GitHub issues templates in https://github.com/xenova/transformers.js/pull/299
* Standardize `HF_ACCESS_TOKEN` -> `HF_TOKEN` environment variables in https://github.com/xenova/transformers.js/pull/431


**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.10.0...2.10.1

2.10.0

What's new?

🎵 New task: Zero-shot audio classification
The task of classifying audio into classes that are unseen during training. See [here](https://huggingface.co/learn/audio-course/chapter4/classification_models#zero-shot-audio-classification) for more information.

**Example:** Perform zero-shot audio classification with `Xenova/clap-htsat-unfused`.
```js
import { pipeline } from '@xenova/transformers';

// Create a zero-shot audio classification pipeline
const classifier = await pipeline('zero-shot-audio-classification', 'Xenova/clap-htsat-unfused');

const audio = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/dog_barking.wav';
const candidate_labels = ['dog', 'vacuum cleaner'];
const scores = await classifier(audio, candidate_labels);
// [
// { score: 0.9993992447853088, label: 'dog' },
// { score: 0.0006007603369653225, label: 'vacuum cleaner' }
// ]
```

<details>

<summary>Audio used</summary>

[dog_barking.webm](https://github.com/xenova/transformers.js/assets/26504141/5f4fcd70-fa8b-418d-a86d-f31e6e417184)

</details>

💻 New architectures: CLAP, Audio Spectrogram Transformer, ConvNeXT, and ConvNeXT-v2
We added support for 4 new architectures, bringing the total up to [65](https://huggingface.co/docs/transformers.js/index#models)!

1. [CLAP](https://huggingface.co/docs/transformers/main/en/model_doc/clap) for zero-shot audio classification, text embeddings, and audio embeddings (https://github.com/xenova/transformers.js/pull/427). See [here](https://huggingface.co/models?library=transformers.js&other=clap&sort=trending) for the list of available models.
- Zero-shot audio classification (same as above)
- Text embeddings with `Xenova/clap-htsat-unfused`:
```js
import { AutoTokenizer, ClapTextModelWithProjection } from '@xenova/transformers';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clap-htsat-unfused');
const text_model = await ClapTextModelWithProjection.from_pretrained('Xenova/clap-htsat-unfused');

// Run tokenization
const texts = ['a sound of a cat', 'a sound of a dog'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Compute embeddings
const { text_embeds } = await text_model(text_inputs);
// Tensor {
// dims: [ 2, 512 ],
// type: 'float32',
// data: Float32Array(1024) [ ... ],
// size: 1024
// }
```

- Audio embeddings with `Xenova/clap-htsat-unfused`:
```js
import { AutoProcessor, ClapAudioModelWithProjection, read_audio } from '@xenova/transformers';

// Load processor and audio model
const processor = await AutoProcessor.from_pretrained('Xenova/clap-htsat-unfused');
const audio_model = await ClapAudioModelWithProjection.from_pretrained('Xenova/clap-htsat-unfused');

// Read audio and run processor
const audio = await read_audio('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cat_meow.wav');
const audio_inputs = await processor(audio);

// Compute embeddings
const { audio_embeds } = await audio_model(audio_inputs);
// Tensor {
// dims: [ 1, 512 ],
// type: 'float32',
// data: Float32Array(512) [ ... ],
// size: 512
// }
```
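The point of these projections is that text and audio land in the same embedding space, so they can be compared directly. A small follow-up sketch in plain JavaScript, reusing `texts`, `text_embeds`, and `audio_embeds` from the two snippets above:

```js
// Cosine similarity between two flat vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; ++i) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// text_embeds has dims [2, 512]; audio_embeds has dims [1, 512].
const dim = audio_embeds.dims[1];
for (let i = 0; i < text_embeds.dims[0]; ++i) {
  const textVec = text_embeds.data.slice(i * dim, (i + 1) * dim);
  console.log(texts[i], '->', cosineSimilarity(textVec, audio_embeds.data));
}
```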
2. [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/main/en/model_doc/audio-spectrogram-transformer) for audio classification (https://github.com/xenova/transformers.js/pull/427). See [here](https://huggingface.co/models?library=transformers.js&other=audio-spectrogram-transformer&sort=trending) for the list of available models.
```js
import { pipeline } from '@xenova/transformers';

// Create an audio classification pipeline
const classifier = await pipeline('audio-classification', 'Xenova/ast-finetuned-audioset-10-10-0.4593');

// Predict class
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cat_meow.wav';
const output = await classifier(url, { topk: 4 });
// [
// { label: 'Meow', score: 0.5617874264717102 },
// { label: 'Cat', score: 0.22365376353263855 },
// { label: 'Domestic animals, pets', score: 0.1141069084405899 },
// { label: 'Animal', score: 0.08985692262649536 },
// ]
```

3. [ConvNeXT](https://huggingface.co/docs/transformers/model_doc/convnext) for image classification (https://github.com/xenova/transformers.js/pull/428). See [here](https://huggingface.co/models?library=transformers.js&other=convnext&sort=trending) for the list of available models.

```js
import { pipeline } from '@xenova/transformers';

// Create image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/convnext-tiny-224');

// Classify an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const output = await classifier(url);
// [{ label: 'tiger, Panthera tigris', score: 0.6153212785720825 }]
```

4. [ConvNeXT-v2](https://huggingface.co/docs/transformers/model_doc/convnextv2) for image classification (https://github.com/xenova/transformers.js/pull/428). See [here](https://huggingface.co/models?library=transformers.js&other=convnextv2&sort=trending) for the list of available models.

```js
import { pipeline } from '@xenova/transformers';

// Create image classification pipeline
const classifier = await pipeline('image-classification', 'Xenova/convnextv2-atto-1k-224');

// Classify an image
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/tiger.jpg';
const output = await classifier(url);
// [{ label: 'tiger, Panthera tigris', score: 0.6391205191612244 }]
```

🔨 Other improvements
* Support decoding of tensors in https://github.com/xenova/transformers.js/pull/416


**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.9.0...2.10.0

2.9.0

What's new?

😍 Exciting new tasks!

Transformers.js v2.9.0 adds support for three new tasks: (1) Depth estimation, (2) Zero-shot object detection, and (3) Optical document understanding.

🕵️‍♂️ Depth Estimation

The task of predicting the depth of objects present in an image. See [here](https://huggingface.co/tasks/depth-estimation) for more information.

```js
import { pipeline } from '@xenova/transformers';

// Create depth estimation pipeline
let depth_estimator = await pipeline('depth-estimation', 'Xenova/dpt-hybrid-midas');

// Predict depth for image
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/cats.jpg';
let output = await depth_estimator(url);
```

Input | Output
:-------------------------:|:-------------------------:
![input](https://github.com/xenova/transformers.js/assets/26504141/a7dfce1b-a8c0-4d57-9230-b320e3d2930b) | ![output](https://github.com/xenova/transformers.js/assets/26504141/c60b7dbe-01d0-4544-be70-19ca427ea662)



<details>

<summary>Raw output</summary>

```js
// {
// predicted_depth: Tensor {
// dims: [ 384, 384 ],
// type: 'float32',
// data: Float32Array(147456) [ 542.859130859375, 545.2833862304688, 546.1649169921875, ... ],
// size: 147456
// },
// depth: RawImage {
// data: Uint8Array(307200) [ 86, 86, 86, ... ],
// width: 640,
// height: 480,
// channels: 1
// }
// }
```

</details>
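As the raw output shows, the `depth` field is a `RawImage`, so in Node.js it can be written straight to disk. A minimal sketch, reusing `output` from the pipeline call above:

```js
// Save the predicted depth map as a single-channel PNG.
await output.depth.save('depth.png');
```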

🎯 Zero-shot Object Detection

The task of identifying objects of classes that are unseen during training. See [here](https://huggingface.co/docs/transformers/v4.31.0/tasks/zero_shot_object_detection) for more information.

```js
import { pipeline } from '@xenova/transformers';

// Create zero-shot object detection pipeline
let detector = await pipeline('zero-shot-object-detection', 'Xenova/owlvit-base-patch32');

// Predict bounding boxes
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/astronaut.png';
let candidate_labels = ['human face', 'rocket', 'helmet', 'american flag'];
let output = await detector(url, candidate_labels);
```
![image](https://github.com/xenova/transformers.js/assets/26504141/2f0e59f7-4c19-418b-b699-b80e7d356cc3)

<details>
<summary>Raw output</summary>

```js
// [
//   {
//     ... (truncated)
```

</details>
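The raw output is truncated above. Assuming each detection follows the pipeline's usual `{ score, label, box }` shape, a hedged sketch for listing the detections:

```js
// Log each detection; box fields (xmin, ymin, xmax, ymax) are assumed here, in pixels by default.
for (const { score, label, box } of output) {
  console.log(`${label}: ${score.toFixed(3)}`, box);
}
```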

2.8.0

What's new?

🖼️ New task: Image-to-image

This release adds support for [image-to-image](https://huggingface.co/tasks/image-to-image) translation (e.g., super-resolution) with [Swin2SR](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr) models.

Side-by-side (full) | Animated (zoomed)
:-------------------------:|:-------------------------:
![side-by-side](https://github.com/xenova/transformers.js/assets/26504141/1d6df6b1-310c-4b6a-84f8-883e19ac83d8) | ![animated-comparison](https://github.com/xenova/transformers.js/assets/26504141/ae1a6465-ab63-4325-90f8-6cacea4abd5b)

As always, you can get started in just a few lines of code!
```js
import { pipeline } from '@xenova/transformers';

let url = 'https://huggingface.co/spaces/jjourney1125/swin2sr/resolve/main/testsets/real-inputs/0855.jpg';
let upscaler = await pipeline('image-to-image', 'Xenova/swin2SR-compressed-sr-x4-48');
let output = await upscaler(url);
// RawImage {
// data: Uint8Array(12582912) [165, 166, 163, ...],
// width: 2048,
// height: 2048,
// channels: 3
// }
```

💻 New architectures: TrOCR, Swin2SR, Mistral, and Falcon

We also added support for 4 new architectures, bringing the total up to [57](https://huggingface.co/docs/transformers.js/index#models)! 🤯

- [TrOCR](https://huggingface.co/docs/transformers/main/en/model_doc/trocr) for optical character recognition (OCR).

```js
import { pipeline } from '@xenova/transformers';

let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/handwriting.jpg';
let captioner = await pipeline('image-to-text', 'Xenova/trocr-small-handwritten');
let output = await captioner(url);
// [{ generated_text: 'Mr. Brown commented icily.' }]
```

![image](https://github.com/xenova/transformers.js/assets/26504141/b1342baf-9e09-46bb-a389-b93e1c70adc7)


Added in https://github.com/xenova/transformers.js/pull/375. See [here](https://huggingface.co/models?library=transformers.js&other=trocr&sort=trending) for the list of available models.

- [Swin2SR](https://huggingface.co/docs/transformers/main/en/model_doc/swin2sr) for super-resolution and image restoration.
```js
import { pipeline } from '@xenova/transformers';

let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/butterfly.jpg';
let upscaler = await pipeline('image-to-image', 'Xenova/swin2SR-classical-sr-x2-64');
let output = await upscaler(url);
// RawImage {
// data: Uint8Array(786432) [ 41, 31, 24, 43, ... ],
// width: 512,
// height: 512,
// channels: 3
// }
```
Added in https://github.com/xenova/transformers.js/pull/381. See [here](https://huggingface.co/models?library=transformers.js&other=swin2sr&sort=trending) for the list of available models.

- [Mistral](https://huggingface.co/docs/transformers/main/en/model_doc/mistral) and [Falcon](https://huggingface.co/docs/transformers/main/en/model_doc/falcon) for text-generation. Added in https://github.com/xenova/transformers.js/pull/379.
_Note: Other than testing models, we haven't yet converted any of the larger (≥7B parameter) models. Stay tuned for more updates on this!_
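For reference, a rough sketch of what text generation looks like once a converted checkpoint is available; the model id below is a placeholder, not a published export:

```js
import { pipeline } from '@xenova/transformers';

// Placeholder model id; substitute any converted Mistral or Falcon text-generation export.
const generator = await pipeline('text-generation', 'Xenova/placeholder-mistral-export');
const output = await generator('Once upon a time,', { max_new_tokens: 30 });
// [{ generated_text: 'Once upon a time, ...' }]
```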


🐛 Bug fixes:
* By default, do not add special tokens at start of text-generation (see [commit](https://github.com/xenova/transformers.js/pull/379/commits/183624849ea3e497c315b69ca3be7ba11e15b10a))
* Fix Firefox bug when displaying progress events while reading file from browser cache in https://github.com/xenova/transformers.js/pull/374. Thanks to felladrin for reporting this issue!
* Fix `text2text-generation` pipeline output inconsistency w/ python library in https://github.com/xenova/transformers.js/pull/384

🔨 Minor improvements:
* Upgrade typescript dependency version by Kit-p in https://github.com/xenova/transformers.js/pull/368
* Improve docs in https://github.com/xenova/transformers.js/pull/385

🤗 New Contributors
* Kit-p made their first contribution in https://github.com/xenova/transformers.js/pull/368

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.7.0...2.8.0
