TransformerLens

Latest version: v2.15.0

Page 3 of 10

2.7.0

Llama 3.2 support! This release also extends `utils.test_prompt` to accept multiple prompts, and fixes a minor typo.

What's Changed
* Typo hooked encoder by bryce13950 in https://github.com/TransformerLensOrg/TransformerLens/pull/732
* `utils.test_prompt` compares multiple prompts by callummcdougall in https://github.com/TransformerLensOrg/TransformerLens/pull/733
* Model llama 3.2 by bryce13950 in https://github.com/TransformerLensOrg/TransformerLens/pull/734


**Full Changelog**: https://github.com/TransformerLensOrg/TransformerLens/compare/v2.6.0...v2.7.0

2.6.0

Another nice little feature update! You can now ungroup the grouped-query attention key/value heads via a new config parameter, `ungroup_grouped_query_attention`!

What's Changed
* Ungrouping GQA by hannamw & FlyingPumba in https://github.com/TransformerLensOrg/TransformerLens/pull/713


**Full Changelog**: https://github.com/TransformerLensOrg/TransformerLens/compare/v2.5.0...v2.6.0

2.5.0

Nice little release! It adds a new parameter named `first_n_layers` that lets you specify how many layers of a model you want to load.

What's Changed
* Fix typo in bug issue template by JasonGross in https://github.com/TransformerLensOrg/TransformerLens/pull/715
* HookedTransformerConfig docs string: `weight_init_mode` => `init_mode` by JasonGross in https://github.com/TransformerLensOrg/TransformerLens/pull/716
* Allow loading only first n layers. by joelburget in https://github.com/TransformerLensOrg/TransformerLens/pull/717


**Full Changelog**: https://github.com/TransformerLensOrg/TransformerLens/compare/v2.4.1...v2.5.0

2.4.1

A little update to code usage, but a huge improvement in memory consumption! TransformerLens now needs almost half the memory it previously required to load a model, thanks to a change in how the state dict is recycled during loading.

What's Changed
* removed einsum causing error when `use_attn_result` is enabled by oliveradk in https://github.com/TransformerLensOrg/TransformerLens/pull/660
* revised loading to recycle state dict by bryce13950 in https://github.com/TransformerLensOrg/TransformerLens/pull/706

New Contributors
* oliveradk made their first contribution in https://github.com/TransformerLensOrg/TransformerLens/pull/660

**Full Changelog**: https://github.com/TransformerLensOrg/TransformerLens/compare/v2.4.0...v2.4.1

2.4.0

Nice little update! This gives users a bit more control over attention masks, and adds a new demo for Patchscopes and generation with patching.

What's Changed
* Improve attention masking by UFO-101 in https://github.com/TransformerLensOrg/TransformerLens/pull/699
* add a demo for Patchscopes and Generation with Patching by HenryCai11 in https://github.com/TransformerLensOrg/TransformerLens/pull/692

New Contributors
* HenryCai11 made their first contribution in https://github.com/TransformerLensOrg/TransformerLens/pull/692

**Full Changelog**: https://github.com/TransformerLensOrg/TransformerLens/compare/v2.3.1...v2.4.0

2.3.1

New Contributors
* mntss made their first contribution in https://github.com/TransformerLensOrg/TransformerLens/pull/694

**Full Changelog**: https://github.com/TransformerLensOrg/TransformerLens/compare/v2.3.0...v2.3.1

