Transformers-visualizer

Latest version: v0.2.2


0.2.2

- Add the option to hide special tokens in plots.
- Add a custom exception raised when multiple texts are given to a visualizer.

0.2.1

This first release introduces two simple ways to plot the attention of a Transformer-based model:
- TokenToTokenAttentions plots the attention matrices of a specific layer, given a Hugging Face `model` and `tokenizer`.
- TokenToTokenNormalizedAttentions plots attention matrices normalized across the head axis, given a Hugging Face `model` and `tokenizer`.

Plots are rendered with `matplotlib` (see the sketch below).
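
As a rough illustration of how these visualizers are typically used, the sketch below loads a Hugging Face model and tokenizer with attention outputs enabled and renders token-to-token attention. The visualizer API shown here (constructing `TokenToTokenAttentions(model, tokenizer)`, calling the instance on a text, and `.plot()`) is an assumption based on the class names above, not a confirmed signature; consult the package README for the exact interface.

```python
# Sketch only: the transformers_visualizer calls below are assumed from the
# class names in this changelog; check the package README for the real API.
from transformers import AutoModel, AutoTokenizer
from transformers_visualizer import TokenToTokenAttentions  # assumed import path

model_name = "bert-base-uncased"
# output_attentions=True makes the model return attention matrices.
model = AutoModel.from_pretrained(model_name, output_attentions=True)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Assumed constructor: the visualizer wraps the model and tokenizer.
visualizer = TokenToTokenAttentions(model, tokenizer)

# Assumed usage: run a single text through the model, then render the
# layer's attention matrices with matplotlib.
visualizer("The cat sat on the mat.")
visualizer.plot()
```

The same pattern would presumably apply to TokenToTokenNormalizedAttentions, which differs only in normalizing the attention across the head axis before plotting.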
