TransformerLens

Latest version: v2.9.0

1.6.1

What's Changed
* Add support for left padding by soheeyang in https://github.com/neelnanda-io/TransformerLens/pull/344 (see the sketch after this list)
* Added gated MLP Hooks by neelnanda-io in https://github.com/neelnanda-io/TransformerLens/pull/374
* added support for pythia 160m seeds by will-hath in https://github.com/neelnanda-io/TransformerLens/pull/377
* Remove lru caching of weights by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/381
* Implement `hook_mlp_in` for parallel attention/MLP models by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/380
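
The left-padding support from #344 matters when batching prompts of different lengths, since it keeps the most recent token of every prompt in the final position. A minimal sketch, assuming the `padding_side` keyword on `to_tokens` matches the current API (setting `model.tokenizer.padding_side = "left"` directly is an alternative):

```python
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
prompts = ["The capital of France is", "Hi"]

# Left padding aligns the last real token of every prompt at the final
# position, which is what next-token prediction and generation need.
# The `padding_side` keyword is assumed from the current API.
tokens = model.to_tokens(prompts, padding_side="left")
print(tokens.shape)  # [batch, padded_seq_len]

with torch.no_grad():
    logits = model(tokens)

# With left padding, the next-token prediction for every prompt sits at the
# final position of the sequence dimension.
print(logits[:, -1, :].argmax(dim=-1))
```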

New Contributors
* will-hath made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/377

**Full Changelog**: https://github.com/neelnanda-io/TransformerLens/compare/v1.6.0...v1.6.1

1.6.0

What's Changed
* Fix FactoredMatrix bug by callummcdougall in https://github.com/neelnanda-io/TransformerLens/pull/367
* Fix to automatically infer add_special_tokens for tokenizer by soheeyang in https://github.com/neelnanda-io/TransformerLens/pull/370
**Full Changelog**: https://github.com/neelnanda-io/TransformerLens/compare/v1.5.0...v1.6.0

(Release requested by callummcdougall for a bugfix.)

1.5.0

What's Changed
* Fix generate() by adding greedy decoding code for do_sample=False by soheeyang in https://github.com/neelnanda-io/TransformerLens/pull/358 (see the sketch after this list)
* Updated readme by neelnanda-io in https://github.com/neelnanda-io/TransformerLens/pull/360
* Fix bug in rotary embedding for models other than llama and gpt-neo by soheeyang in https://github.com/neelnanda-io/TransformerLens/pull/365
* Switch to beartype by dkamm in https://github.com/neelnanda-io/TransformerLens/pull/325
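
PR #358 fixes `generate()` when `do_sample=False` by adding a greedy-decoding path. A minimal sketch of the call it repairs; the keyword arguments are assumed to match the current API:

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

# do_sample=False takes the greedy path fixed in #358: each step picks the
# highest-probability token, so the output is deterministic.
greedy = model.generate(
    "The Eiffel Tower is located in",
    max_new_tokens=10,
    do_sample=False,
)

# do_sample=True samples from the temperature-scaled distribution instead.
sampled = model.generate(
    "The Eiffel Tower is located in",
    max_new_tokens=10,
    do_sample=True,
    temperature=0.7,
)
print(greedy)
print(sampled)
```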


**Full Changelog**: https://github.com/neelnanda-io/TransformerLens/compare/v1.4.0...v1.5.0

1.4.0

Note: There is a bug in GPT-J in this version.

What's Changed
* Halve GPU memory when loading by slavachalnev in https://github.com/neelnanda-io/TransformerLens/pull/333
* Update to `hook_mlp_in` by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/316
* `names_filter` bug fix by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/321
* [Ready] Enable Pytorch GPU acceleration for M1 chips by luciaquirke in https://github.com/neelnanda-io/TransformerLens/pull/326
* Introduce Global prepend_bos Attribute to HookedTransformer by soheeyang in https://github.com/neelnanda-io/TransformerLens/pull/343
* Fix hook_result shape comment by ckkissane in https://github.com/neelnanda-io/TransformerLens/pull/347
* Support for reduced precision (104) by glerzing in https://github.com/neelnanda-io/TransformerLens/pull/317 (see the sketch after this list)
* Added tiny pythia models by neelnanda-io in https://github.com/neelnanda-io/TransformerLens/pull/350
* Add Llama-2 7B and 13B models by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/352
* Fix API docs by Smaug123 in https://github.com/neelnanda-io/TransformerLens/pull/339
* Enhance the API for default_prepend_bos by soheeyang in https://github.com/neelnanda-io/TransformerLens/pull/345
* Integrate StableLM (254) by glerzing in https://github.com/neelnanda-io/TransformerLens/pull/354
* add colab buttons to demos by ckkissane in https://github.com/neelnanda-io/TransformerLens/pull/359
* Remove n_devices assert in config by slavachalnev in https://github.com/neelnanda-io/TransformerLens/pull/357
* Updated readme by neelnanda-io in https://github.com/neelnanda-io/TransformerLens/pull/351
* Scalar multiplication by matthiasdellago in https://github.com/neelnanda-io/TransformerLens/pull/355
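
Two of the changes above are easiest to see in code: reduced-precision loading (#317) and the global `prepend_bos` handling (#343, #345). A minimal sketch, assuming the keyword names `dtype` and `default_prepend_bos` match the current API:

```python
import torch
from transformer_lens import HookedTransformer

# Reduced precision (#317): load the weights in float16 instead of float32,
# roughly halving memory. The `dtype` keyword is assumed from the current API
# and is most useful on GPU; some CPU ops are slow or unsupported in half.
model = HookedTransformer.from_pretrained("gpt2", dtype=torch.float16)

# Global prepend_bos (#343, #345): a model-wide default controls whether a BOS
# token is prepended during tokenization, and it can be overridden per call.
print(model.cfg.default_prepend_bos)
with_bos = model.to_tokens("Hello world")                        # uses the default
without_bos = model.to_tokens("Hello world", prepend_bos=False)  # per-call override
print(with_bos.shape, without_bos.shape)  # differ by one token when the default is True
```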

New Contributors
* soheeyang made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/343
* Smaug123 made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/339
* matthiasdellago made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/355

**Full Changelog**: https://github.com/neelnanda-io/TransformerLens/compare/v1.3.0...v1.4.0

1.3.0

What's Changed
* fix outdated link in Exploratory Analysis Demo by daspartho in https://github.com/neelnanda-io/TransformerLens/pull/259
* Finish patching docs by ckkissane in https://github.com/neelnanda-io/TransformerLens/pull/261
* Fix `from_pretrained` with `redwood_attn_2l` by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/268
* Added list of demos to tutorial section. by JayBaileyCS in https://github.com/neelnanda-io/TransformerLens/pull/263
* Improving head detector by MatthewBaggins in https://github.com/neelnanda-io/TransformerLens/pull/255
* Optimize imports in HookedTransformer by rusheb in https://github.com/neelnanda-io/TransformerLens/pull/260
* Baidicoot main - Implemented functionality for loading mingpt-style models off HF (e.g. othello-gpt) by jbloomAus in https://github.com/neelnanda-io/TransformerLens/pull/272
* Upgrade to typeguard 3 by dkamm in https://github.com/neelnanda-io/TransformerLens/pull/269
* Install autoformatting tools and add formatting checks to CI by rusheb in https://github.com/neelnanda-io/TransformerLens/pull/270
* Add TransformerLens logo to docs and GitHub by koayon in https://github.com/neelnanda-io/TransformerLens/pull/273
* Wrap docstrings and comments in HookedTransformer by luciaquirke in https://github.com/neelnanda-io/TransformerLens/pull/274
* Format array in test_transformer_lens.py by rusheb in https://github.com/neelnanda-io/TransformerLens/pull/275
* Introducing HookedEncoder by rusheb in https://github.com/neelnanda-io/TransformerLens/pull/276
* Add tests for tokenization methods by Aprillion in https://github.com/neelnanda-io/TransformerLens/pull/280
* Fix broken link in issue template by rusheb in https://github.com/neelnanda-io/TransformerLens/pull/278
* Various memory solutions. Ultimately used gc to "hide" memory issue which should be solved soon. by jbloomAus in https://github.com/neelnanda-io/TransformerLens/pull/296
* FactoredMatrix __getitem__ (224) by glerzing in https://github.com/neelnanda-io/TransformerLens/pull/295
* Add tiny stories by Felhof in https://github.com/neelnanda-io/TransformerLens/pull/292
* from_pretrained custom parameters (288) by glerzing in https://github.com/neelnanda-io/TransformerLens/pull/298
* Add better `__name__` annotation to `full_hook`s by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/302
* Multiple minor corrections by glerzing in https://github.com/neelnanda-io/TransformerLens/pull/301
* Add get_basic_config util function by adamyedidia in https://github.com/neelnanda-io/TransformerLens/pull/294
* Fix bug: HookedEncoder not being moved to GPU by rusheb in https://github.com/neelnanda-io/TransformerLens/pull/307
* Fix tokenization tests on GPU by rusheb in https://github.com/neelnanda-io/TransformerLens/pull/308
* Add prepend option to `model.add_hook` by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/303 (see the sketch after this list)
* Fix tiny stories model names by Felhof in https://github.com/neelnanda-io/TransformerLens/pull/305
* Add `hook_mlp_in` by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/313
* Ignore some functions in the documentation (310) by glerzing in https://github.com/neelnanda-io/TransformerLens/pull/312
* Add assertion to refactor_factored_attn_matrices by ArthurConmy in https://github.com/neelnanda-io/TransformerLens/pull/320
* Update evals.py to not directly call cuda, instead have default cuda … by dennis-akar in https://github.com/neelnanda-io/TransformerLens/pull/324
* Add SVD interpretability feature to TransformerLens by JayBaileyCS in https://github.com/neelnanda-io/TransformerLens/pull/311
* Fix svd tests on GPU by slavachalnev in https://github.com/neelnanda-io/TransformerLens/pull/330
* Reduce memory use when loading model by slavachalnev in https://github.com/neelnanda-io/TransformerLens/pull/327
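
The `prepend` option on `model.add_hook` (#303) is easiest to see with a small example of hook registration. A minimal sketch; the keyword name is taken from the PR title, so its exact form is an assumption:

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
name = "blocks.0.hook_resid_pre"  # any hook point name works here

def scale_hook(value, hook):
    # A hook receives the activation and the HookPoint; returning a tensor
    # replaces the activation.
    return value * 2.0

def log_hook(value, hook):
    print(f"{hook.name}: mean activation {value.mean().item():.4f}")
    return value

model.add_hook(name, scale_hook)
# prepend=True (from #303) is assumed to push this hook to the front of the
# queue, so it runs before scale_hook even though it was registered later.
model.add_hook(name, log_hook, prepend=True)

logits = model("Hello world")
model.reset_hooks()  # remove both hooks again
```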

New Contributors
* MatthewBaggins made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/255
* koayon made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/273
* luciaquirke made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/274
* Aprillion made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/280
* glerzing made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/295
* Felhof made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/292
* dennis-akar made their first contribution in https://github.com/neelnanda-io/TransformerLens/pull/324

**Full Changelog**: https://github.com/neelnanda-io/TransformerLens/compare/v1.2.2...v1.3.0

1.2.2

What's Changed

There are too many commits to list individually, so here is a summary of the main changes.

General Features:
- Pipeline parallelism
- The cache no longer moves tensors across devices unless told to

New Models:
- Redwood 2L
- New Pythia models
- LLaMA

Analysis Features:
- `apply_ln` option for `stack_head_results` and `stack_neuron_results`
- Context manager for hooks (sketched below)
- Attention head detectors
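
The context manager for hooks lets you attach temporary hooks that are removed automatically when the block exits. A minimal sketch, assuming `model.hooks(fwd_hooks=...)` matches its current form:

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")

def zero_ablate(value, hook):
    # Zero out the attention-head outputs at this hook point.
    return value * 0.0

# Hooks added via the context manager are removed when the block exits, so
# later calls to the model are unaffected.
with model.hooks(fwd_hooks=[("blocks.0.attn.hook_z", zero_ablate)]):
    ablated_logits = model("The quick brown fox")

clean_logits = model("The quick brown fox")
print((ablated_logits - clean_logits).abs().max())
```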

Thanks to all the Contributors!

Many thanks to: rusheb, ckkissane, slavachalnev, JayBaileyCS, zshn-gvg, jbloomAus, adzcai, adamyedidia, ArthurConmy, bryce13950, daspartho, haileyschoelkopf, 0amp

**Full Changelog**: https://github.com/neelnanda-io/TransformerLens/compare/v1.2.1...v1.2.2
