# attention_sinks

Latest version: v0.4.0

## 0.4.0

### Added

- Added support for Yi models. ([#27](https://github.com/tomaarsen/attention_sinks/pull/27))
- Added support for BTLM models. ([#29](https://github.com/tomaarsen/attention_sinks/pull/29))

### Fixed

- Prevented a crash when a large input is provided. ([#23](https://github.com/tomaarsen/attention_sinks/pull/23))
- Updated QWen to match recent changes to the QWen modeling files. ([#33](https://github.com/tomaarsen/attention_sinks/pull/33))

## 0.3.0

### Added

- Added support for Qwen models. ([#15](https://github.com/tomaarsen/attention_sinks/pull/15))
- Added support for StableLM_Epoch models. ([#20](https://github.com/tomaarsen/attention_sinks/pull/20))

### Changed

- Changed how attention sinks are injected into models, allowing `attention_sinks` to be integrated with architectures that aren't in `transformers` (see the sketch below). ([#16](https://github.com/tomaarsen/attention_sinks/pull/16))
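Since this release, `attention_sinks` can wrap the standard `transformers` loading path, so enabling attention sinks is a one-line import swap. A minimal sketch, assuming the `attention_sink_size` and `attention_sink_window_size` keyword arguments described in the project README; the model name is only an example:

```python
# Drop-in replacement for transformers' AutoModelForCausalLM:
# attention sinks are injected into the model as it is loaded.
from attention_sinks import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    attention_sink_size=4,            # initial "sink" tokens that are always kept
    attention_sink_window_size=1020,  # sliding window over the most recent tokens
)
```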

## 0.2.3

### Added

- Added support for GPT-J models. ([#13](https://github.com/tomaarsen/attention_sinks/pull/13))

## 0.2.2

### Fixed

- Fixed `model.generate` for all model architectures (see the sketch below). ([#6](https://github.com/tomaarsen/attention_sinks/pull/6))
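For context, a hedged sketch of the `model.generate` path this release fixes, reusing the model from the sketch above together with a matching `transformers` tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
inputs = tokenizer("Attention sinks let language models", return_tensors="pt")

# After this fix, generate() behaves as it does in plain transformers,
# while the attention sink cache keeps the KV cache bounded.
output_ids = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```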

## 0.2.1

### Fixed

- Implemented parity between `attention_sinks` and `transformers==4.34.0` for Falcon and Llama.

## 0.2.0

### Added

- Added support for Mistral models. ([#5](https://github.com/tomaarsen/attention_sinks/pull/5))
- Added support for GPT-NeoX/Pythia models. ([#4](https://github.com/tomaarsen/attention_sinks/pull/4))
- Added support for MPT models. ([#3](https://github.com/tomaarsen/attention_sinks/pull/3))
