e3nn

Latest version: v0.5.5


0.4.4

Fixed
- Remove `CartesianTensor._rtp`. Instead, recompute the `ReducedTensorProduct` every time. Users can save the `ReducedTensorProduct` themselves to avoid recreating it on each call (see the sketch after this list).
- `*equivariance_error` no longer keeps around unneeded autograd graphs
- `CartesianTensor` builds `ReducedTensorProduct` with correct device/dtype when called without one
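
A minimal sketch of the save-and-reuse pattern described above, assuming the conversion methods accept an optional precomputed `ReducedTensorProducts` (per the current `e3nn.io.CartesianTensor` API):

```python
import torch
from e3nn import o3
from e3nn.io import CartesianTensor

ct = CartesianTensor("ij=ji")                    # symmetric rank-2 tensor
# Build the ReducedTensorProducts once and reuse it across calls
rtp = o3.ReducedTensorProducts("ij=ji", i="1o")

x = torch.randn(16, 3, 3)
x = (x + x.transpose(-1, -2)) / 2                # symmetrize to match the formula
data = ct.from_cartesian(x, rtp)                 # no decomposition is rebuilt here
```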

Added
- Created a module for reflected imports, allowing a nicer syntax for creating irreps, e.g. `from e3nn.o3.irreps import l3o  # same as Irreps("3o")`
- Add `uvu<v` mode for `TensorProduct`: compute only the upper-triangular part of the `uv` terms.
- (beta) `TensorSquare`: computes `x \otimes x` and decomposes it (see the sketch after this list).
- `*equivariance_error` now tells you which arguments had which error
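
A minimal usage sketch of the new (beta) `TensorSquare`; it is assumed here that it takes a single input tensor and exposes `irreps_out` like the other tensor products:

```python
from e3nn import o3

irreps_in = o3.Irreps("2x0e + 1x1o")
tp = o3.TensorSquare(irreps_in)   # computes x ⊗ x and decomposes it into irreps
x = irreps_in.randn(10, -1)       # batch of 10 random feature vectors
y = tp(x)                         # shape (10, tp.irreps_out.dim)
print(tp.irreps_out)
```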

Changed
- Drop support for Python 3.6; set `python_requires='>=3.7'` in setup
- Slightly optimize `ReducedTensorProduct`: solve the linear system only once per irrep instead of 2L+1 times.
- Do not scale line width by `path_weight` in `TensorProduct.visualize`
- `*equivariance_error` now transforms its inputs in float64 by default, regardless of the dtype used for the calculation itself
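
For context, a minimal sketch of an equivariance check with the test utilities; it assumes a module exposing `irreps_in`/`irreps_out` can be passed directly to `assert_equivariant`:

```python
from e3nn import o3
from e3nn.util.test import assert_equivariant

lin = o3.Linear("2x1o", "3x1o")
# Random test inputs are transformed in float64 by default (as of 0.4.4),
# independently of the dtype used by `lin` itself.
assert_equivariant(lin)
```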

0.4.3

Fixed
- `ReducedTensorProduct`: replace the QR decomposition with `orthonormalize` applied to the projector `X.T X`.
This keeps `ReducedTensorProduct` deterministic because the projectors and `orthonormalize` are both deterministic.
The output of `orthonormalize` also appears to be highly sparse (luckily).
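
A small sketch of the deterministic `orthonormalize` helper mentioned above (from `e3nn.math`; the two-value return is an assumption based on the current API):

```python
import torch
from e3nn.math import orthonormalize

X = torch.randn(4, 10, dtype=torch.float64)
# Deterministically orthonormalize the rows of X; the second return value
# is ignored here.
basis, _ = orthonormalize(X)
print(basis.shape)
```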

0.4.2

This release, coupled with the release of `opt-einsum-fx==0.1.4`, aims to fix slowness in the instantiation of `TensorProduct`.
The two main changes that improved the instantiation time are
- Turning off the compilation of `TensorProduct.right` by default
- Replacing the actual computation of `torch.einsum` and `torch.tensordot` with a prediction of their output shapes in the tracer used by `opt-einsum-fx` to collect tensor shapes

Added
- `irrep_normalization` and `path_normalization` for `TensorProduct`
- `compile_right` flag to `TensorProduct`
- Add new global flag `jit_script_fx` to optionally turn off `torch.jit.script` of fx code
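
A hedged sketch of how these knobs could be used together; the `compile_right` keyword and the `e3nn.set_optimization_defaults` entry point for the global flag are assumptions based on the current API:

```python
import e3nn
from e3nn import o3

# Optionally turn off torch.jit.script of the generated fx code (global flag)
e3nn.set_optimization_defaults(jit_script_fx=False)

tp = o3.FullyConnectedTensorProduct(
    "2x0e + 1x1o", "1x1o", "1x1e",
    irrep_normalization="component",  # new in 0.4.2
    path_normalization="element",     # new in 0.4.2
    compile_right=False,              # .right() is no longer compiled by default
)
print(tp)
```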

0.4.1

Added
- Add `to_cartesian()` to `CartesianTensor`
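
A minimal round-trip sketch using the new `to_cartesian()`; a symmetric rank-2 tensor is used for illustration:

```python
import torch
from e3nn.io import CartesianTensor

ct = CartesianTensor("ij=ji")            # symmetric rank-2 tensor
x = torch.randn(3, 3)
x = (x + x.transpose(-1, -2)) / 2        # symmetrize to match the formula
data = ct.from_cartesian(x)              # Cartesian -> irreps basis
back = ct.to_cartesian(data)             # new in 0.4.1: irreps -> Cartesian
assert torch.allclose(x, back, atol=1e-5)
```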

Fixed
- Make it work with `pytorch 1.10.0`

0.4.0

Changed
- Breaking change: the normalization constants for `TensorProduct` and `Linear` changed. Now `Linear(2x0e + 7x0e, 0e)` is equivalent to `Linear(9x0e, 0e)`. Models with inhomogeneous multiplicities will be affected by this change!
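
A small illustration of the new equivalence; the `weight_numel` attribute is used only to show that both layers carry the same 9 weights, and is an assumption about the API:

```python
from e3nn import o3

# After 0.4.0 these two layers use the same normalization
lin_a = o3.Linear("2x0e + 7x0e", "0e")
lin_b = o3.Linear("9x0e", "0e")
print(lin_a.weight_numel, lin_b.weight_numel)  # 9 9
```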

Fixed
- Remove `profiler.record_function` calls that caused trouble with TorchScript
- The home-made implementation of `radius_graph` was ignoring the `r_max` argument

0.3.5

Fixed
- `Extract` uses `CodeGenMixin` to avoid strange recursion errors during training
- Add missing call to `normalize` in `axis_angle_to_quaternion`
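
A quick sketch of the fixed behavior, assuming `o3.axis_angle_to_quaternion(axis, angle)`: the axis no longer needs to be pre-normalized by the caller.

```python
import torch
from e3nn import o3

axis = torch.tensor([1.0, 2.0, 2.0])     # deliberately not unit length
angle = torch.tensor(0.3)
q = o3.axis_angle_to_quaternion(axis, angle)
print(q.norm())                           # ≈ 1.0: the axis is normalized internally
```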
