xFormers

Latest version: v0.0.28.post3




0.0.8

Fixed
- Much faster fused dropout [facebookresearch/xformers#164]
- Fused dropout repeatability [facebookresearch/xformers#173]

Added
- Embedding weight tying option [facebookresearch/xformers#172]
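
Weight tying shares one parameter tensor between the input token embedding and the output projection. A minimal PyTorch sketch of the idea (generic, not the xFormers API; module names and dimensions are illustrative):

```python
import torch
import torch.nn as nn

class TiedLM(nn.Module):
    """Toy language-model head with the output projection tied to the embedding."""

    def __init__(self, vocab_size: int = 1000, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, vocab_size, bias=False)
        # Tie the weights: both modules now share the same parameter tensor.
        self.proj.weight = self.embed.weight

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        x = self.embed(tokens)  # (batch, seq, dim)
        return self.proj(x)     # (batch, seq, vocab_size)

model = TiedLM()
assert model.proj.weight is model.embed.weight  # single shared tensor
```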

0.0.7

Fixed
- Dropout setting not properly passed to many attentions [facebookresearch/xformers#123]

0.0.6

Fixed
- Fixed the self-attention optimization not being triggered and a broken residual path [facebookresearch/xformers#119]
- Improved speed by not using contiguous Tensors when not needed [facebookresearch/xformers#119]

Added
- Attention mask wrapper [facebookresearch/xformers#113] (mask conversion sketched after this list)
- ViT comparison benchmark [facebookresearch/xformers#117]
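
The mask wrapper smooths over the two common attention-mask conventions: boolean masks (True = keep) and additive masks (0 = keep, -inf = drop). A generic illustration of the conversion in plain PyTorch (not the xFormers wrapper itself):

```python
import torch

def bool_to_additive(mask: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    """Convert a boolean keep-mask into an additive mask applied before softmax."""
    additive = torch.zeros(mask.shape, dtype=dtype)
    additive.masked_fill_(~mask, float("-inf"))
    return additive

keep = torch.tensor([[True, True, False]])
print(bool_to_additive(keep))  # tensor([[0., 0., -inf]])
```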

0.0.4

Fixed
- Homogenized the masks, additive or boolean [facebookresearch/xformers#79][facebookresearch/xformers#85][facebookresearch/xformers#86]
- Fixed the causality flag not being respected [facebookresearch/xformers#103] (a causal-mask sketch follows this list)
- Enabled FusedLayerNorm by default in the factory if Triton is available
- Fixed Favor with fp16
- Fixed Favor trainability
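
A causal (autoregressive) mask only lets position i attend to positions at or before i. A plain PyTorch sketch of the masking the causality flag is meant to enforce (illustrative, not xFormers internals):

```python
import torch

def causal_additive_mask(seq_len: int) -> torch.Tensor:
    """Additive mask: 0 on and below the diagonal, -inf strictly above it."""
    mask = torch.full((seq_len, seq_len), float("-inf"))
    return torch.triu(mask, diagonal=1)

scores = torch.randn(4, 4)  # raw attention scores
attn = torch.softmax(scores + causal_additive_mask(4), dim=-1)
assert torch.allclose(attn.triu(1), torch.zeros(4, 4))  # no attention to the future
```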

Added
- Fused dropout/bias/activation layer [facebookresearch/xformers#58] (unfused reference sketched after this list)
- Fused layernorm used by default in the factory [facebookresearch/xformers#92]
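
The fused layer collapses the bias add, activation, and dropout into a single kernel. The unfused PyTorch reference it replaces looks roughly like this (a sketch of the math only, not the Triton kernel; GELU is assumed as the activation):

```python
import torch
import torch.nn.functional as F

def bias_activation_dropout(x: torch.Tensor,
                            bias: torch.Tensor,
                            p: float = 0.1,
                            training: bool = True) -> torch.Tensor:
    """Unfused reference: three separate ops (and intermediate tensors)
    that a fused kernel performs in one pass over the data."""
    return F.dropout(F.gelu(x + bias), p=p, training=training)

x = torch.randn(8, 128)
bias = torch.zeros(128)
out = bias_activation_dropout(x, bias)
```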

0.0.3

Fixed
- Nystrom causal attention [facebookresearch/xformers#75]

0.0.2

Fixed
- More robust blocksparse [facebookresearch/xformers#24]

Added
- Rotary embeddings [facebookresearch/xformers#32] (sketched after this list)
- More flexible layernorm [facebookresearch/xformers#50]
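
Rotary embeddings encode position by rotating pairs of query/key channels by a position-dependent angle. A compact sketch of one common formulation (illustrative; not the xFormers implementation, which operates inside the attention modules):

```python
import torch

def apply_rotary(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embeddings to x of shape (seq, dim), dim even."""
    seq, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) channel pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(16, 64)
q_rot = apply_rotary(q)  # same shape, position information mixed into the channels
```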

