Linformer-pytorch

Latest version: v0.19.3


0.19.3

Have not pushed up a release in a while; this is the latest working version after two miscellaneous bugs have been fixed.

0.16.0

Added intermediate ff dimension

The model dimension can now differ in the intermediate layers. This change
applies to the feed-forward (ff) module, and only in the encoder. If the
flag `ff_intermediate` is not None, the layers will look like this:


channels -> ff_dim -> ff_intermediate (For layer 1)
ff_intermediate -> ff_dim -> ff_intermediate (For layers 2 to depth-1)
ff_intermediate -> ff_dim -> channels (For layer depth)


As opposed to


channels -> ff_dim -> channels (For all layers)
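The dimension flow above can be sketched as a small helper; `ff_dims` is a hypothetical name for illustration, and the library's internals may differ:

```python
def ff_dims(channels, ff_dim, depth, ff_intermediate=None):
    """Return (input, hidden, output) dims for each encoder ff layer.

    A sketch of the dimension flow described above; `ff_intermediate`
    mirrors the flag of the same name.
    """
    if ff_intermediate is None:
        # Old behavior: channels -> ff_dim -> channels for all layers.
        return [(channels, ff_dim, channels)] * depth
    dims = []
    for layer in range(depth):
        d_in = channels if layer == 0 else ff_intermediate
        d_out = channels if layer == depth - 1 else ff_intermediate
        dims.append((d_in, ff_dim, d_out))
    return dims
```

For example, `ff_dims(64, 256, 3, ff_intermediate=128)` yields `[(64, 256, 128), (128, 256, 128), (128, 256, 64)]`.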

0.15.0

The Linformer now supports convolution as a way to downsample the input, instead of relying on linear layers. This may reduce the number of parameters necessary.
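A rough sketch of why convolution can save parameters (hypothetical sizes; the library's actual layer shapes may differ): the original Linformer downsamples the sequence with a learned `(n, k)` linear projection, whose size grows with the sequence length `n`, while a strided 1-D convolution's parameter count depends only on the channel width and kernel size:

```python
def linear_proj_params(seq_len, k):
    # Learned (seq_len x k) projection matrix, as in the Linformer paper;
    # parameter count grows linearly with the sequence length.
    return seq_len * k

def conv_proj_params(channels, kernel):
    # 1-D convolution over the sequence with `channels` in/out channels
    # and kernel size == stride == `kernel` (bias ignored); parameter
    # count is independent of the sequence length.
    return channels * channels * kernel

n, k, d = 512, 128, 64
linear = linear_proj_params(n, k)          # 512 * 128 = 65536
conv = conv_proj_params(d, kernel=n // k)  # 64 * 64 * 4 = 16384
```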

0.14.0

Finished an encoder and a decoder module. Also, causal attention works when the `causal=True` flag is set. Will update the README shortly...
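As a toy illustration of what causal attention means (pure Python, not the library's implementation, which applies the idea to its compressed attention):

```python
import math

def causal_attention(scores):
    """Apply a causal mask, then a row-wise softmax.

    `scores` is an (n x n) list of raw attention scores; position i may
    only attend to positions j <= i, so all weights above the diagonal
    come out as zero.
    """
    n = len(scores)
    out = []
    for i, row in enumerate(scores):
        # Mask future positions before normalizing.
        exps = [math.exp(s) if j <= i else 0.0 for j, s in enumerate(row)]
        total = sum(exps)
        out.append([e / total for e in exps])
    return out
```

With uniform scores, the first row attends only to itself and later rows spread weight evenly over past positions.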

0.13.1

Added masking to the Linformer. However, this is still a WIP, since masking cannot be done in the traditional sense, as in the "Attention Is All You Need" paper, because that would add the overhead of another `(n, n)` matrix, which is infeasible.
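The overhead is easy to quantify; with hypothetical sizes n = 4096 and k = 256, a traditional mask would have to materialize a full `(n, n)` matrix, while Linformer's attention is only ever `(n, k)`:

```python
def matrix_entries(rows, cols):
    # Number of entries that would have to be materialized in memory.
    return rows * cols

n, k = 4096, 256
full_mask = matrix_entries(n, n)  # (n, n) mask: 16777216 entries
linformer = matrix_entries(n, k)  # (n, k) attention: 1048576 entries
```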

0.13.0

The repo now supports an encoder and a decoder.

TODO: Masking
