Transformers-interpret

Latest version: v0.10.0


0.1.5

Small patch:
- Fix Windows UTF-8 bug in `setup.py`

0.1.4

- Sequence classification support
- Word attribution and visualization of attributions
- Use of layer integrated gradients as attribution method
- >90% test coverage

0.0

```
[('This', -0.011571866816239364),
 ('was', 0.9746020664206717),
 ('a', 0.06633740353266766),
 ('really', 0.007891184021722232),
 ('good', 0.11340512797772889),
 ('film', -0.1035443669783489),
 ('I', -0.030966387400513003),
 ('enjoyed', -0.07312861129345115),
 ('it', -0.062475007741951326),
 ('a', 0.05681161636240444),
 ('lot', 0.04342110477675596),
 ('</s>', 0.08154160609887448)]
```
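Output like the above is a list of `(token, attribution_score)` pairs. As a rough post-processing sketch (plain Python, not part of the library; `top_tokens` is a hypothetical helper), the pairs can be sorted to surface the tokens that contributed most positively to the prediction:

```python
# Hypothetical post-processing of (token, score) attribution pairs.
word_attributions = [
    ("This", -0.011571866816239364),
    ("was", 0.9746020664206717),
    ("a", 0.06633740353266766),
    ("really", 0.007891184021722232),
    ("good", 0.11340512797772889),
    ("film", -0.1035443669783489),
]

def top_tokens(attributions, k=3):
    """Return the k tokens with the largest positive attribution scores."""
    return sorted(attributions, key=lambda pair: pair[1], reverse=True)[:k]

print(top_tokens(word_attributions))
# [('was', 0.9746020664206717), ('good', 0.11340512797772889), ('a', 0.06633740353266766)]
```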


Additional Functionality Added To Base Explainer
To support multiple embedding types in the classification explainer, a number of handlers were added to `BaseExplainer` so that this functionality can be reused easily by future explainers.
- `BaseExplainer` inspects the signature of a model's forward function to determine whether it accepts `position_ids` and `token_type_ids`. For example, Bert models take both as optional parameters, whereas Distilbert does not.
- Based on this inspection, the available embedding types are set in `BaseExplainer` rather than in the explainers that inherit from it.
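The inspection step above can be sketched with Python's standard `inspect` module. The function and forward stand-ins below are illustrative assumptions, not the library's actual internals:

```python
import inspect

def accepted_embedding_inputs(forward_fn):
    """Report which optional embedding-related arguments a forward function accepts."""
    params = inspect.signature(forward_fn).parameters
    return {name: name in params for name in ("position_ids", "token_type_ids")}

# Hypothetical stand-ins for model forward methods:
def bert_like_forward(input_ids, attention_mask=None, token_type_ids=None, position_ids=None):
    ...

def distilbert_like_forward(input_ids, attention_mask=None):
    ...

print(accepted_embedding_inputs(bert_like_forward))
# {'position_ids': True, 'token_type_ids': True}
print(accepted_embedding_inputs(distilbert_like_forward))
# {'position_ids': False, 'token_type_ids': False}
```

A base class can run this check once in its constructor and expose only the embedding types the wrapped model actually supports.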

Misc

- Updated tests: many tests in the suite now exercise three different architectures, **Bert**, **Distilbert**, and **GPT2**. This helps iron out issues caused by slight variations between these models.


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.