Torchkbnufft

Latest version: v1.5.2

1.2.0.post1

This pure-documentation release fixes an issue with how documentation headers were rendered on the ReadTheDocs website. See PR 29.

1.2.0

This updates `torchkbnufft` for PyTorch version 1.8, taking advantage of the new `index_add`, which now operates natively on complex tensors. The update also fixes a performance regression caused by thread management, as identified in Issue 25.

Most changes came from PR 27, which has the below list of modifications:

- Updated `requirements.txt` and `dev-requirements.txt` to the latest packages.
- Removed `calc_split_sizes` - we can now use `tensor_split`.
- Removed some calls to tensor attributes - these can be expensive.
- Removed `kwarg` usage for some `torch.jit.script` functions - keyword arguments can behave strangely with scripted functions.
- Removed `index_put` for accumulation. We now only use `index_add`.
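The two PyTorch features the list above relies on can be illustrated directly; the variable names below are illustrative, not torchkbnufft internals:

```python
import torch

# 1) torch.tensor_split replaces a hand-rolled calc_split_sizes: it divides a
#    tensor into n chunks whose sizes differ by at most one element.
data = torch.arange(10)
chunks = torch.tensor_split(data, 3)  # chunk sizes: 4, 3, 3

# 2) As of PyTorch 1.8, index_add_ accumulates natively into complex tensors,
#    so an index_put-based workaround is no longer needed.
accum = torch.zeros(4, dtype=torch.complex64)
idx = torch.tensor([0, 2, 0])
vals = torch.tensor([1 + 1j, 2 + 0j, 3 - 1j], dtype=torch.complex64)
accum.index_add_(0, idx, vals)  # repeated index 0 accumulates: accum[0] == 4+0j
```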

1.1.0

This adds support for a new batched NUFFT, which is substantially faster than using a Python for loop over the batch dimension when applying a NUFFT with many small k-space trajectories. It also updates the documentation and includes a new page for performance tips. See PR 24 and Issue 24 for details and testing.

1.0.1

- Fixes an incredibly weird bug in the autograd for forward NUFFTs caused by an `unsqueeze` operation (PR 23).
- Remove references to batch dimension in notebooks (PR 20).
- Remove unnecessary adjoint NUFFT objects in docs (PR 19).
- Add a test for CPU/GPU forward matching (PR 17).

1.0.0

This release is a complete package rewrite, featuring complex tensor support, a 4-fold speed-up on the CPU, a 2-fold speed-up on the GPU, an updated API, and rewritten documentation. It includes many backwards-compatibility-breaking changes, hence the version increment to 1.0.

A summary of changes follows:

- Support for PyTorch complex tensors. The user is now expected to pass in tensors of shape `[batch_size, num_chans, height, width]` for a 2D imaging problem. It's still possible to pass in real tensors - just use `[batch_size, num_chans, height, width, 2]`. The backend uses complex values for efficiency.
- A 4-fold speed-up on the CPU and a 2-fold speed-up on the GPU for table interpolation. The primary mechanism is asynchronous task parallelism via `torch.jit.fork` - see [interp.py](https://github.com/mmuckley/torchkbnufft/blob/master/torchkbnufft/_nufft/interp.py) for details.
- The backend has been substantially rewritten to a higher code quality, adding type annotations and compiling performance-critical functions with `torch.jit.script` to escape the Python GIL.
- A much improved density compensation function, `calc_density_compensation_function`, thanks to a contribution from chaithyagr at the suggestion of zaccharieramzi.
- Simplified utility functions for `calc_toeplitz_kernel` and `calc_tensor_spmatrix`.
- The [documentation](https://torchkbnufft.readthedocs.io/en/stable/) has been completely rewritten: it upgrades to the Read the Docs template, improves the table of contents, adds mathematical descriptions of core operators, and includes a dedicated basic usage section.
- Dedicated SENSE-NUFFT operators have been removed. Wrapping these with `torch.autograd.Function` didn't give us any benefits, so there's no need to have them. Users will now pass their sensitivity maps into the `forward` function of `KbNufft` and `KbNufftAdjoint` directly.
- Rewritten notebooks and README files.
- New `CONTRIBUTING.md`.
- Removed `mrisensesim.py` as it is not a core part of the package.
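The dual complex/real input convention described above can be sketched with plain PyTorch; the shapes are illustrative, and this is not torchkbnufft code:

```python
import torch

# A 2D imaging problem with illustrative dimensions.
batch_size, num_chans, height, width = 1, 8, 64, 64

# Native complex input: [batch_size, num_chans, height, width]
image = torch.randn(batch_size, num_chans, height, width, dtype=torch.complex64)

# Equivalent real input: [batch_size, num_chans, height, width, 2], where the
# trailing dimension holds the real and imaginary parts.
image_real = torch.view_as_real(image)

# The two layouts describe the same data, so the conversion is lossless.
assert torch.equal(torch.view_as_complex(image_real), image)
```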
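The forking pattern behind the interpolation speed-up can be sketched as follows; `_work` and `parallel_apply` are hypothetical stand-ins, not the actual functions in interp.py:

```python
import torch

# A scripted, type-annotated worker; scripted functions run outside the Python
# GIL, so forked calls can execute on separate interpreter threads.
@torch.jit.script
def _work(x: torch.Tensor) -> torch.Tensor:
    return x * 2.0

def parallel_apply(chunks):
    # Launch one asynchronous task per chunk, then gather all results.
    futures = [torch.jit.fork(_work, chunk) for chunk in chunks]
    return [torch.jit.wait(fut) for fut in futures]

chunks = list(torch.tensor_split(torch.arange(8, dtype=torch.float32), 4))
results = parallel_apply(chunks)
```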

0.3.4

This fixes a few compatibility issues that could arise in newer versions of PyTorch, as mentioned in Issue 7. Specifically:

- A NumPy array was converted to a tensor without a copy - this has been modified to explicitly copy.
- A new `fft_compatibility.py` file handles changes in newer versions of `torch.fft` (see [here](https://pytorch.org/docs/stable/fft.html#torch.fft.fftn)). Function-style use of `torch.fft` was slated for deprecation in favor of module functions such as `torch.fft.fft`, so we now check the PyTorch version to figure out which call to use, ensuring the code still runs on older versions of PyTorch.
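Both fixes can be sketched in a few lines; the variable names are illustrative, and this is not the actual `fft_compatibility.py` code (the dispatch below uses an attribute check as a stand-in for the version check described above):

```python
import numpy as np
import torch

# 1) torch.from_numpy shares memory with the NumPy array, so a later in-place
#    edit of the array would silently change the tensor. An explicit copy
#    decouples them.
arr = np.ones(3, dtype=np.float32)
tensor = torch.from_numpy(arr.copy())  # explicit copy, no shared memory
arr[0] = 99.0  # tensor is unaffected by this mutation

# 2) Dispatch between the old and new FFT APIs. The module function
#    torch.fft.fftn exists from PyTorch 1.7 on; earlier releases only have
#    the function-style torch.fft.
if hasattr(torch.fft, "fftn"):
    # newer PyTorch: module-style FFT operating on complex tensors
    kspace = torch.fft.fftn(tensor.to(torch.complex64), dim=(0,))
else:
    # older PyTorch: function-style torch.fft(input, signal_ndim), which
    # operates on a real tensor with a trailing dimension of size 2
    kspace = torch.view_as_complex(
        torch.fft(torch.view_as_real(tensor.to(torch.complex64)), 1)
    )
```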
