Memory-efficient-attention

Latest version: v0.1.3


0.1.3

0.1.2

What's Changed
This update fixes torch device-handling issues in the code, so GPU tensors and tensors on other devices can now be used safely.
* Update utils.py by yhgon in https://github.com/AminRezaei0x443/memory-efficient-attention/pull/5
* Update attention_torch.py by yhgon in https://github.com/AminRezaei0x443/memory-efficient-attention/pull/6
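The device-handling fix above boils down to allocating intermediate tensors on the same device (and dtype) as the inputs rather than letting PyTorch default to CPU. A minimal sketch of the pattern (the function and names here are illustrative, not the library's internals):

```python
import torch

def attention_step(query, key, value):
    """Illustrative single attention step with device-safe allocation."""
    d = query.shape[-1]
    # Before this kind of fix, a call like torch.zeros(n_q, d) would land
    # on CPU and crash when `query` lives on a GPU. Passing device/dtype
    # explicitly keeps everything on the input's device.
    acc = torch.zeros(query.shape[-2], d,
                      device=query.device, dtype=query.dtype)
    weights = query @ key.transpose(-2, -1) / d ** 0.5
    acc += torch.softmax(weights, dim=-1) @ value
    return acc
```

The same pattern applies to any helper tensor (masks, biases, accumulators) created inside the attention loop.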

New Contributors
* yhgon made their first contribution in https://github.com/AminRezaei0x443/memory-efficient-attention/pull/5

**Full Changelog**: https://github.com/AminRezaei0x443/memory-efficient-attention/compare/0.1.1.0...0.1.2

0.1.1.0

Added mask and bias calculation functions for custom, memory-efficient chunked computation. Masks and biases can now be computed chunk by chunk, keeping the overall memory usage sublinear.
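The idea is that instead of materializing a full n×n mask or bias tensor, a callback produces only the tile needed for the current (query chunk, key chunk) pair. A rough sketch of chunked attention with such callbacks, assuming hypothetical `mask_fn`/`bias_fn` signatures (this is not the library's exact API):

```python
import torch

def chunked_attention(q, k, v, mask_fn=None, bias_fn=None, chunk=128):
    """Chunked attention with per-tile mask/bias callbacks.

    mask_fn/bias_fn receive the query offset, key offset, and tile shape
    and return only that tile, so the full n x n mask or bias is never
    materialized. Uses a numerically stable streaming softmax.
    """
    n_q, d = q.shape
    n_k = k.shape[0]
    out = torch.zeros_like(q)
    for i in range(0, n_q, chunk):
        qi = q[i:i + chunk]
        # Running max, numerator, and denominator for the online softmax.
        m = torch.full((qi.shape[0], 1), float('-inf'),
                       device=q.device, dtype=q.dtype)
        num = torch.zeros(qi.shape[0], d, device=q.device, dtype=q.dtype)
        den = torch.zeros(qi.shape[0], 1, device=q.device, dtype=q.dtype)
        for j in range(0, n_k, chunk):
            s = qi @ k[j:j + chunk].T / d ** 0.5
            if bias_fn is not None:
                s = s + bias_fn(i, j, s.shape)   # tile-sized bias only
            if mask_fn is not None:
                s = s.masked_fill(~mask_fn(i, j, s.shape), float('-inf'))
            m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
            scale = torch.exp(m - m_new)
            p = torch.exp(s - m_new)
            num = num * scale + p @ v[j:j + chunk]
            den = den * scale + p.sum(dim=-1, keepdim=True)
            m = m_new
        out[i:i + chunk] = num / den
    return out
```

With tile-sized callbacks, peak memory for the mask and bias is O(chunk²) instead of O(n²), which is what makes sublinear-memory attention possible for long sequences.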

**Full Changelog**: https://github.com/AminRezaei0x443/memory-efficient-attention/compare/0.1.1...0.1.1.0

0.1.0
