## What's Changed

This update fixes torch device handling in the code, so GPU tensors and tensors on other devices can now be used safely.

* Update utils.py by yhgon in https://github.com/AminRezaei0x443/memory-efficient-attention/pull/5
* Update attention_torch.py by yhgon in https://github.com/AminRezaei0x443/memory-efficient-attention/pull/6
## New Contributors

* yhgon made their first contribution in https://github.com/AminRezaei0x443/memory-efficient-attention/pull/5
Added mask and bias calculation functions for custom, memory-efficient chunked computation. Masks and biases can now be computed per chunk, keeping memory usage sublinear in the size of the attention matrix.
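The idea behind per-chunk mask/bias computation can be sketched as follows. This is a minimal numpy illustration, not the library's exact API: the callback names `mask_calc_fn` and `bias_calc_fn` and the signatures here are assumptions for the sketch. Each (query-chunk, key-chunk) block asks the callbacks for just its slice of the mask and bias, so the full `[q_len, k_len]` tensors are never materialized.

```python
import numpy as np

def chunked_attention(q, k, v, mask_calc_fn, bias_calc_fn,
                      q_chunk=64, k_chunk=64):
    """Memory-efficient attention sketch: mask/bias are produced on demand
    per chunk by callbacks instead of being built as full matrices.
    (Illustrative only -- callback signatures are assumptions.)"""
    q_len, d = q.shape
    out = np.zeros_like(q)
    for qi in range(0, q_len, q_chunk):
        q_blk = q[qi:qi + q_chunk]
        # Online-softmax accumulators across key chunks.
        num = np.zeros((q_blk.shape[0], d))
        den = np.zeros((q_blk.shape[0], 1))
        m = np.full((q_blk.shape[0], 1), -np.inf)  # running row max
        for ki in range(0, k.shape[0], k_chunk):
            k_blk = k[ki:ki + k_chunk]
            v_blk = v[ki:ki + k_chunk]
            s = q_blk @ k_blk.T / np.sqrt(d)
            # Callbacks see only the chunk offsets and shape, and return
            # just this block's bias/mask slice.
            s = s + bias_calc_fn(qi, ki, s.shape)
            s = np.where(mask_calc_fn(qi, ki, s.shape), s, -np.inf)
            m_new = np.maximum(m, s.max(axis=-1, keepdims=True))
            scale = np.exp(m - m_new)
            p = np.exp(s - m_new)
            num = num * scale + p @ v_blk
            den = den * scale + p.sum(axis=-1, keepdims=True)
            m = m_new
        out[qi:qi + q_chunk] = num / den
    return out
```

For example, a causal mask callback would be `lambda qi, ki, shape: (qi + np.arange(shape[0])[:, None]) >= (ki + np.arange(shape[1])[None, :])`, which builds only a chunk-sized boolean block at a time.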