Tensorly-torch

Latest version: v0.4.0

0.4.0
=====

New features
==========

get_tensorized_shape: linear layers can now be automatically tensorized to a convenient shape (a sketch follows the example below)
Tensorized embeddings: added a factorized embedding layer and tests (#10), thanks to colehawkins (also sketched below)

Initialise factorized tensors directly with PyTorch, e.g. for initialisations based on the normal distribution:

```python
from torch.nn import init
import tltorch

# Create a CP-factorized tensor of shape (3, 4, 2)
cp_tensor = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.9, factorization='cp')

# Initialise the factors in place with PyTorch's (non-deprecated) initialiser
init.kaiming_normal_(cp_tensor)
```
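
A minimal sketch of the new get_tensorized_shape helper, assuming it is importable from tltorch.utils and takes the layer's feature sizes; the 512 and 1024 below are illustrative values, not from the release notes:

```python
import tltorch
from tltorch.utils import get_tensorized_shape

# Find a convenient tensorization of a 512 -> 1024 linear layer:
# returns tuples of integers whose products are 512 and 1024.
in_shape, out_shape = get_tensorized_shape(in_features=512, out_features=1024)
```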

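Similarly, a hypothetical sketch of the new embedding layer, assuming FactorizedEmbedding follows torch.nn.Embedding's (num_embeddings, embedding_dim) convention; the sizes, factorization and rank below are illustrative:

```python
import torch
import tltorch

# A factorized embedding over a vocabulary of 1000 tokens, dimension 16
embedding = tltorch.FactorizedEmbedding(1000, 16, factorization='blocktt', rank=8)

tokens = torch.randint(0, 1000, (4, 7))  # a batch of token indices
vectors = embedding(tokens)              # embedded batch, shape (4, 7, 16)
```
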

Improvements
===========

TuckerTensor: unsqueezed_modes option
TRL: added init_from_linear
FactorizedConvolutions now have a reset_parameters method and are initialised by default when created from random values
Layers and factorized tensors now accept a device and dtype as parameters (sketched below)
Tensor dropout now accepts `min_dim` and `min_values` (also sketched below)
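
For instance, a sketch of creating a factorized tensor directly on a chosen device and dtype; the shape, rank and dtype are illustrative:

```python
import torch
import tltorch

# The factors are allocated directly with the requested device and dtype
tucker = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.9, factorization='tucker',
                                      device='cpu', dtype=torch.float64)
```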

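And a sketch of the extended tensor dropout, assuming tltorch.tensor_dropout is the entry point; the values of p and min_dim are illustrative:

```python
import tltorch

cp = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.9, factorization='cp')

# Randomly drop rank components with probability p, but never shrink the
# factorization below min_dim components
cp = tltorch.tensor_dropout(cp, p=0.5, min_dim=2)
```
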
Bug fixes
=======
Fixed bugs with the TT rank in init_from_tensor, transduction, and tensor creation
Bug fix when creating a factorized conv from a factorization.
Linear layer class methods now preserve context
Fixed contiguity issue in TuckerTensor (#9), thanks to colehawkins
Fixed tensor dropout for p=1
Initialise weights when creating a new random layer

0.3.0
=====

TensorLy-Torch just got even easier to use for tensorized deep learning, with indexable factorized tensors, seamless compatibility with torch functions, tensorized embedding layers and more!

New features
==========
Faster general_1D_conv, which speeds up CP convolutions
Indexable TensorizedTensors (#7): factorized tensors can now be indexed just like regular tensors. The result will still be a factorized tensor whenever possible, and a dense tensor otherwise.
```python
>>> import tltorch

>>> cp_tensor = tltorch.FactorizedTensor.new((3, 4, 2), rank=0.9, factorization='cp')

# Initialise the tensor with random values
>>> cp_tensor.normal_(0, 0.02)

>>> print(cp_tensor)
CPTensor(shape=(3, 4, 2), rank=2)

>>> cp_tensor[:2, :2]
CPTensor(shape=(2, 2, 2), rank=2)

>>> cp_tensor[2, 3, 1]
tensor(0.0250, grad_fn=<SumBackward0>)
```

Note how, above, indexing tracks gradients as well!


New BlockTT factorization, which generalizes TT-matrices

```python
>>> ftt = tltorch.TensorizedTensor.new((5, (2, 2, 2), (3, 3, 3)), rank=0.5, factorization='BlockTT')
>>> ftt
BlockTT, shape=[5, 8, 27], tensorized_shape=(5, (2, 2, 2), (3, 3, 3)), rank=[1, 20, 20, 1])
>>> ftt[2]
BlockTT, shape=[8, 27], tensorized_shape=[(2, 2, 2), (3, 3, 3)], rank=[1, 20, 20, 1])
>>> ftt[0, :2, :2]
```

0.2.0
=====

A full rewriting of TensorLy-Torch!
