Transformers-js-py

Latest version: v0.19.4

Page 7 of 14

2.5.2

What's new?

* Add `audio-classification` with [MMS](https://huggingface.co/docs/transformers/main/en/model_doc/mms) and [Wav2Vec2](https://huggingface.co/docs/transformers/main/en/model_doc/wav2vec2) in https://github.com/xenova/transformers.js/pull/220. Example usage:
```js
// npm i @xenova/transformers
import { pipeline } from '@xenova/transformers';

// Create audio classification pipeline
let classifier = await pipeline('audio-classification', 'Xenova/mms-lid-4017');

// Run inference
let url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jeanNL.wav';
let output = await classifier(url);
// [
//   { label: 'fra', score: 0.9995712041854858 },
//   { label: 'hat', score: 0.00003788191679632291 },
//   { label: 'lin', score: 0.00002646935718075838 },
//   { label: 'hun', score: 0.000015628289474989288 },
//   { label: 'bre', score: 0.000007014674793026643 }
// ]
```

* Add `automatic-speech-recognition` for Wav2Vec2 models in https://github.com/xenova/transformers.js/pull/220 (MMS coming soon).
* Add support for the multi-label classification problem type in https://github.com/xenova/transformers.js/pull/249. Thanks to KiterWork for reporting!
* Add M2M100 tokenizer in https://github.com/xenova/transformers.js/pull/250. Thanks to AAnirudh07 for the feature request!
* Documentation improvements
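For context on the multi-label problem type above: single-label classification applies a softmax across all logits (scores sum to 1, one winner), while multi-label applies an independent sigmoid per logit, so several labels can be active at once. A minimal plain-JS sketch of the difference (the sample logits are made up for illustration):

```javascript
// Single-label: softmax over all logits -- scores sum to 1.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map(x => Math.exp(x - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

// Multi-label: independent sigmoid per logit -- each score in [0, 1].
function sigmoid(logits) {
  return logits.map(x => 1 / (1 + Math.exp(-x)));
}

const logits = [2.0, -1.0, 1.5];
console.log(softmax(logits));                      // probabilities summing to 1
console.log(sigmoid(logits).map(s => s > 0.5));    // labels above the 0.5 threshold
```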

New Contributors
* celsodias12 made their first contribution in https://github.com/xenova/transformers.js/pull/247

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.5.1...2.5.2

2.5.1

What's new?
* Add support for Llama/Llama2 models in https://github.com/xenova/transformers.js/pull/232
* Tokenization performance improvements in https://github.com/xenova/transformers.js/pull/234 (+ [The Tokenizer Playground](https://huggingface.co/spaces/Xenova/the-tokenizer-playground) example app)
* Add support for DeBERTa/DeBERTa-v2 models in https://github.com/xenova/transformers.js/pull/244
* Documentation improvements for zero-shot-classification pipeline ([link](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.ZeroShotClassificationPipeline))
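Tokenization speedups like the ones above are easiest to appreciate with a quick timing harness. A minimal sketch; the `tokenize` function here is a hypothetical stand-in (naive whitespace split), and in a real measurement you would call a tokenizer loaded from the library instead:

```javascript
// Hypothetical stand-in tokenizer: naive whitespace split.
function tokenize(text) {
  return text.split(/\s+/).filter(Boolean);
}

// Time repeated tokenization of a batch of texts.
function benchmark(texts, iterations = 1000) {
  const start = Date.now();
  let numTokens = 0;
  for (let i = 0; i < iterations; ++i) {
    for (const text of texts) {
      numTokens += tokenize(text).length;
    }
  }
  const elapsedMs = Date.now() - start;
  return { numTokens, elapsedMs };
}

const { numTokens, elapsedMs } = benchmark(['a photo of a car', 'a photo of a football match']);
console.log(`${numTokens} tokens in ${elapsedMs} ms`);
```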

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.5.0...2.5.1

2.5.0

What's new?

Support for computing CLIP image and text embeddings separately (https://github.com/xenova/transformers.js/pull/227)

You can now compute CLIP text and vision embeddings separately, allowing for faster inference when you only need to query one of the modalities. We've also released a [demo application for semantic image search](https://huggingface.co/spaces/Xenova/semantic-image-search) to showcase this functionality.
![image](https://github.com/xenova/transformers.js/assets/26504141/80c03318-6daf-4949-a114-5160f6fe0e29)

**Example:** Compute text embeddings with `CLIPTextModelWithProjection`.

```javascript
import { AutoTokenizer, CLIPTextModelWithProjection } from '@xenova/transformers';

// Load tokenizer and text model
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const text_model = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Run tokenization
let texts = ['a photo of a car', 'a photo of a football match'];
let text_inputs = tokenizer(texts, { padding: true, truncation: true });

// Compute embeddings
const { text_embeds } = await text_model(text_inputs);
// Tensor {
//   dims: [ 2, 512 ],
//   type: 'float32',
//   data: Float32Array(1024) [ ... ],
//   size: 1024
// }
```
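Note that the returned `Tensor` stores its data flat: the `Float32Array(1024)` above holds both 512-dimensional embeddings back to back, in row-major order. To get each text's embedding as its own array, slice by row. A plain-array sketch using the dims shown above (dummy data stands in for real embeddings):

```javascript
// Split a flat array of shape [numRows, dim] into per-row arrays,
// mirroring the row-major layout of Tensor data.
function rows(data, dim) {
  const out = [];
  for (let i = 0; i < data.length; i += dim) {
    out.push(data.slice(i, i + dim));
  }
  return out;
}

// With dims [2, 512], data.length is 1024:
const data = new Float32Array(1024).fill(0.5);
const embeddings = rows(data, 512);
console.log(embeddings.length);    // 2
console.log(embeddings[0].length); // 512
```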


**Example:** Compute vision embeddings with `CLIPVisionModelWithProjection`.

```javascript
import { AutoProcessor, CLIPVisionModelWithProjection, RawImage } from '@xenova/transformers';

// Load processor and vision model
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

// Read image and run processor
let image = await RawImage.read('https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/football-match.jpg');
let image_inputs = await processor(image);

// Compute embeddings
const { image_embeds } = await vision_model(image_inputs);
// Tensor {
//   dims: [ 1, 512 ],
//   type: 'float32',
//   data: Float32Array(512) [ ... ],
//   size: 512
// }
```
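With both projections in hand, text and image embeddings live in the same space, so you can rank images against a text query (as in the semantic-image-search demo) by cosine similarity. A self-contained sketch on plain arrays; in practice, the inputs would be rows of `text_embeds.data` and `image_embeds.data`:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; ++i) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

console.log(cosineSimilarity([1, 0], [0, 1])); // 0 (orthogonal vectors)
```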


Improved browser extension example/template (https://github.com/xenova/transformers.js/pull/196)

We've updated the [source code](https://github.com/xenova/transformers.js/tree/main/examples/extension) for our example browser extension, making the following improvements:
1. Custom model caching - the model weights no longer need to be shipped with the extension. Besides a smaller bundle size, users won't need to redownload the weights when the extension updates!
2. Use of ES6 module syntax (instead of CommonJS) - much cleaner code!
3. Persistent service worker - fixed an issue where the service worker would go to sleep after a period of inactivity.

Summary of updates since last minor release (2.4.0):
* (2.4.1) Improved documentation
* (2.4.2) Support for private/gated models (https://github.com/xenova/transformers.js/pull/202)
* (2.4.3) Example Next.js applications (https://github.com/xenova/transformers.js/pull/211) + MPNet model support (https://github.com/xenova/transformers.js/pull/221)

2.4.4

What's new?

* New model: [StarCoder](https://huggingface.co/models?library=transformers.js&other=gpt_bigcode&sort=trending) ([Xenova/starcoderbase-1b](https://huggingface.co/Xenova/starcoderbase-1b) and [Xenova/tiny_starcoder_py](https://huggingface.co/Xenova/tiny_starcoder_py))

* In-browser code completion example application ([demo](https://huggingface.co/spaces/Xenova/ai-code-playground) and [source code](https://github.com/xenova/transformers.js/tree/main/examples/code-completion))

![coding-demo-gif](https://github.com/xenova/transformers.js/assets/26504141/df8a1b61-6cc1-474e-b431-fc8c6add5872)



**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.4.3...2.4.4

2.4.3

What's new?

* Example Next.js applications in https://github.com/xenova/transformers.js/pull/211
- [Tutorial](https://huggingface.co/docs/transformers.js/tutorials/next)
- Demo: [client-side](https://huggingface.co/spaces/Xenova/next-example-app) or [server-side](https://huggingface.co/spaces/Xenova/next-server-example-app)
- Source code: [client-side](https://github.com/xenova/transformers.js/tree/main/examples/next-client) or [server-side](https://github.com/xenova/transformers.js/tree/main/examples/next-server)

<img src="https://github.com/xenova/transformers.js/assets/26504141/d979e1ce-4235-47d7-95fc-1900c984b641" width=600>

* Add support for `mpnet` models by xenova in https://github.com/xenova/transformers.js/pull/221

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.4.2...2.4.3

2.4.2

What's new?
* Add support for private/gated model access by xenova in https://github.com/xenova/transformers.js/pull/202
* Fix BPE tokenization for weird whitespace characters by xenova in https://github.com/xenova/transformers.js/pull/208
- Thanks to fozziethebeat for reporting and helping to debug
* Minor documentation improvements

**Full Changelog**: https://github.com/xenova/transformers.js/compare/2.4.1...2.4.2
