xFormers

Latest version: v0.0.26.post1

0.0.13

Added
- fMHA: Added a CUTLASS-based kernel for `xformers.ops.memory_efficient_attention`. This kernel is automatically used depending on the inputs, and works on any GPU after P100 [facebookresearch/xformers#362]
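
A minimal usage sketch for the op named above; the shapes, dtype, and device are illustrative assumptions, not requirements stated in this release note:

```python
# Sketch: calling xformers.ops.memory_efficient_attention directly.
# The backend kernel (e.g. the CUTLASS one added here) is selected
# automatically from the inputs and the GPU.
import torch
import xformers.ops as xops

B, M, K = 1, 1024, 64  # batch, sequence length, head dim (illustrative)
q = torch.randn(B, M, K, device="cuda", dtype=torch.float16)
k = torch.randn(B, M, K, device="cuda", dtype=torch.float16)
v = torch.randn(B, M, K, device="cuda", dtype=torch.float16)

out = xops.memory_efficient_attention(q, k, v)  # same shape as q
print(out.shape)
```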

0.0.12

Fixed
- Removed duplicated biases in the FusedMLP layers [facebookresearch/xformers#317]
- Rotary embeddings now respect input types [facebookresearch/xformers#326]
- Poolformer style no longer instantiating useless projection layers [facebookresearch/xformers#349]
- Fix layer position not being properly tracked, causing extra layernorms for programmatic xformers [facebookresearch/xformers#348]
- Pass the `use_triton` flag to the LayerNorm module [facebookresearch/xformers#336]

Added
- Four blocksparsity layouts from DeepSpeed [facebookresearch/xformers#320]
- Support for several initialization options [facebookresearch/xformers#312]
- Conv2DFeedforward feedforward part [facebookresearch/xformers#321]
- VisualAttention [facebookresearch/xformers#329]
- Automatic blocksparse for causal attention (see the sketch after this list) [facebookresearch/xformers#334]
- Better hierarchical transformer generation [facebookresearch/xformers#345]
- Fused operations with AOTAutograd/NVFuser, integration into MLP [facebookresearch/xformers#357]
- Refactor LRA code to use PyTorch Lightning [facebookresearch/xformers#343]
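
A hedged sketch of the causal block-sparse attention mentioned above, loosely following the xFormers BlockSparseAttention tutorial; the constructor arguments and sizes are assumptions, not details spelled out in this release note:

```python
# Sketch only: building a causal block-sparse attention component.
# BlockSparseAttention and its arguments follow the xFormers tutorial;
# treat the exact signature as an assumption for this release.
import torch
from xformers.components.attention import BlockSparseAttention

HEADS, SEQ, BLOCK = 4, 1024, 64          # illustrative sizes
num_blocks = SEQ // BLOCK

# Lower-triangular layout over blocks -> causal sparsity pattern
causal_layout = torch.tril(torch.ones(HEADS, num_blocks, num_blocks))

attention = BlockSparseAttention(
    layout=causal_layout,
    block_size=BLOCK,
    dropout=0.0,
    num_heads=HEADS,
)
```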

0.0.11

Fixed
- Fix some torchscriptability [facebookresearch/xformers#246]
- Fix FourierMix compatibility with AMP [facebookresearch/xformers#258]
- Better asserts on QKV dimensions [facebookresearch/xformers#264]
- Better performance for FusedMLP and FusedLinearLayer [facebookresearch/xformers#283]
- DeepNorm init missing self-attention [facebookresearch/xformers#284]

Added
- Simplicial Embeddings [facebookresearch/xformers#259]
- Memory-efficient attention, forward pass [facebookresearch/xformers#267]
- MHA benchmark
- MLP benchmark
- Move all Triton kernels to Triton v2 [facebookresearch/xformers#272]
- Memory-efficient attention, backward pass (see the sketch after this list) [facebookresearch/xformers#281]
- Metaformer support [facebookresearch/xformers#294]
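
The forward and backward passes of the memory-efficient attention op both land in this release, so here is a gradient-exercising sketch; shapes, dtype, and the call convention via `xformers.ops` are illustrative assumptions:

```python
# Sketch: memory-efficient attention with gradients, exercising the
# backward pass. Shapes and dtype are illustrative assumptions.
import torch
import xformers.ops as xops

q = torch.randn(2, 512, 64, device="cuda", dtype=torch.float16, requires_grad=True)
k = torch.randn(2, 512, 64, device="cuda", dtype=torch.float16, requires_grad=True)
v = torch.randn(2, 512, 64, device="cuda", dtype=torch.float16, requires_grad=True)

out = xops.memory_efficient_attention(q, k, v)
out.sum().backward()          # runs the memory-efficient backward kernel
print(q.grad.shape)
```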

0.0.10

Fixed
- Expose bias flag for feedforwards, same default as Timm [facebookresearch/xformers#220]
- Update eps value for layernorm, same default as torch [facebookresearch/xformers#221]
- PreNorm bugfix, only one input was normalized [facebookresearch/xformers#233]
- Fix bug where embedding dimensions that did not match model dim would lead to a crash [facebookresearch/xformers#244]

Added
- Add DeepNet (DeepNorm) residual path and init [facebookresearch/xformers#227]

0.0.9

Added
- Compositional Attention [facebookresearch/xformers#41]
- Experimental Ragged attention [facebookresearch/xformers#189]
- Mixture of Experts [facebookresearch/xformers#181]
- BlockSparseTensor [facebookresearch/xformers#202]
- Nd-tensor support for triton softmax [facebookresearch/xformers#210]
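
A small sketch of the Triton softmax with an N-d input, as referenced above; it assumes a CUDA device with Triton installed and the functional `xformers.triton.softmax` export:

```python
# Sketch: fused Triton softmax over the last dimension of a 4-d tensor.
# Assumes xformers.triton exports a functional softmax and that a CUDA
# device with Triton is available.
import torch
from xformers.triton import softmax

x = torch.randn(2, 8, 256, 256, device="cuda", dtype=torch.float16)
y = softmax(x)                         # fused kernel
y_ref = torch.softmax(x, dim=-1)       # PyTorch reference
print(torch.allclose(y, y_ref, atol=1e-2))
```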

Fixed
- Bugfix Favor, single feature map [facebookresearch/xformers#183]
- Sanity check blocksparse settings [facebookresearch/xformers#207]
- Fixed some picklability [facebookresearch/xformers#204]

0.0.8

Fixed
- Much faster fused dropout [facebookresearch/xformers#164]
- Fused dropout repeatability [facebookresearch/xformers#173]
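
A hedged sketch of the fused dropout referenced in this list; the functional `xformers.triton.dropout` export and its `p` argument are assumptions, and a CUDA device with Triton installed is required:

```python
# Sketch: fused (Triton) dropout. The functional export and its `p`
# argument are assumptions based on xformers.triton at the time.
import torch
from xformers.triton import dropout

x = torch.randn(16, 1024, device="cuda", dtype=torch.float16)
y = dropout(x, p=0.1)   # fused dropout kernel
print(y.shape)
```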

Added
- Embedding weight tying option [facebookresearch/xformers#172]
