functorch

Latest version: v2.0.0

1.13.0

We’re excited to announce that, as a first step towards closer integration with PyTorch, functorch has moved inside the PyTorch library and no longer requires the installation of a separate functorch package. After installing PyTorch via conda or pip, you’ll be able to `import functorch` in your program.
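
For example, a quick sanity check after installing (a minimal sketch; the printed values are illustrative):

```python
import torch
import functorch  # bundled with PyTorch 1.13; no separate install needed

print(torch.__version__)  # e.g. "1.13.0"
print(functorch.vmap)     # transforms such as vmap are importable directly
```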

functorch no longer has a separate version number; its version matches PyTorch’s (1.13 for the current release).

If you're upgrading from an older version of functorch (functorch 0.1.x or 0.2.x), you may need to uninstall functorch first via `pip uninstall functorch`.

We've maintained backwards compatibility for `pip install functorch`: this command works for PyTorch 1.13 and will continue to work for the foreseeable future until we do a proper deprecation. This is helpful if you're maintaining a library that supports multiple versions of PyTorch and/or functorch. Mechanically, the functorch pip wheel is now just a dummy package that lists torch==1.13 as a dependency.

Please refer to [the PyTorch release notes](https://github.com/pytorch/pytorch/releases/tag/v1.13.0) for a detailed changelog.

0.2.1

We’re excited to present the functorch 0.2.1 minor bug-fix release, compatible with PyTorch 1.12.1. Please see [here](https://pytorch.org/functorch/stable/install.html) for installation instructions.

Changelog
- Previously the functorch package was incompatible with the PyTorch 1.12.0 cu102 package. This is now fixed (functorch 0.2.1 is compatible with all PyTorch 1.12.1 packages).
- Fixed a 25% performance regression (from v0.1.1 to v0.2.0) in computing Hessians of fully-connected layers (#989)
- Added batching rules for `masked_fill` (#946), `searchsorted` (#966)
- Batch norm now works with all forms of vmap when training is False (i.e., `.eval()` has been called on the model) (#958)

0.2.0

Inspired by [Google JAX](https://github.com/google/jax), functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples of these include:

* [running ensembles of models on a single machine](https://pytorch.org/functorch/stable/notebooks/ensembling.html)
* [efficiently computing Jacobians and Hessians](https://pytorch.org/functorch/stable/notebooks/jacobians_hessians.html)
* [computing per-sample-gradients (or other per-sample quantities)](https://pytorch.org/functorch/stable/notebooks/per_sample_grads.html)
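
As a quick taste of the per-sample-gradients use case, here is a minimal sketch (the toy linear model and loss below are hypothetical, not from the release):

```python
import torch
from functorch import vmap, grad

# Hypothetical toy loss: squared error of a linear model on a single sample.
def loss(weights, sample, target):
    return ((sample @ weights - target) ** 2).sum()

weights = torch.randn(3)
samples = torch.randn(8, 3)  # batch of 8 samples
targets = torch.randn(8)

# grad(loss) differentiates with respect to the first argument (weights);
# vmap maps that computation over the batch dimension of samples/targets.
per_sample_grads = vmap(grad(loss), in_dims=(None, 0, 0))(weights, samples, targets)
print(per_sample_grads.shape)  # torch.Size([8, 3]): one gradient per sample
```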

We’re excited to announce functorch 0.2.0 with a number of improvements and new experimental features.

Caveats

functorch's Linux binaries are compatible with all PyTorch 1.12.0 binaries except the PyTorch 1.12.0 cu102 binary; functorch raises an error if it is used with an incompatible PyTorch binary. This is due to a bug in PyTorch (https://github.com/pytorch/pytorch/issues/80489); in previous versions of PyTorch it was possible to build a single functorch Linux binary that worked with all PyTorch Linux binaries. This will be fixed in the next PyTorch (and functorch) minor release.

Highlights

Significantly improved coverage

We significantly improved coverage for `functorch.jvp` (our forward-mode autodiff API) and other APIs that rely on it (`functorch.jacfwd`, `functorch.hessian`).
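
For reference, a small sketch of how these APIs fit together (the function `f` below is a made-up example):

```python
import torch
from functorch import jvp, jacfwd

def f(x):
    return x.sin().sum()

x = torch.randn(3)
tangent = torch.ones(3)

# jvp computes (f(x), J_f(x) @ tangent) in a single forward pass.
out, directional_deriv = jvp(f, (x,), (tangent,))

# jacfwd builds full Jacobians on top of forward-mode autodiff.
jac = jacfwd(f)(x)
assert torch.allclose(directional_deriv, jac @ tangent)
```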

(Prototype) functorch.experimental.functionalize

Given a function `f`, `functionalize(f)` returns a new function without mutations (with caveats). This is useful for constructing traces of PyTorch functions without in-place operations. For example, you can use `make_fx(functionalize(f))` to construct a mutation-free trace of a PyTorch function. To learn more, please see [the documentation](https://pytorch.org/functorch/0.2.0/generated/functorch.experimental.functionalize.html?highlight=functionalize#functorch.experimental.functionalize).
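
A minimal sketch of what this looks like in practice (the function `f` is a made-up example with an in-place `add_`):

```python
import torch
from functorch import make_fx
from functorch.experimental import functionalize

def f(x):
    y = x.clone()
    y.add_(1)  # in-place op that would otherwise appear in a trace of f
    return y

x = torch.randn(3)

# functionalize(f) computes the same values, but without mutations...
assert torch.allclose(f(x), functionalize(f)(x))

# ...so tracing it yields a mutation-free graph.
graph = make_fx(functionalize(f))(x)
print(graph.code)  # uses out-of-place ops (add) instead of add_
```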

Windows support

There are now official functorch pip wheels for Windows.

Changelog

Note that this is not an exhaustive list of changes; for example, changes to pytorch/pytorch may fix bugs in functorch or improve our transform coverage. Here we include the user-facing changes that were committed to pytorch/functorch.

* Added `functorch.experimental.functionalize` ([#236](https://github.com/pytorch/functorch/pull/236), [#720](https://github.com/pytorch/functorch/pull/678), and more)
* Added support for Windows ([#696](https://github.com/pytorch/functorch/pull/696))
* Fixed vmap support for `torch.norm` ([#708](https://github.com/pytorch/functorch/pull/708/files))
* Added `disable_autograd_tracking` to `make_functional` variants; this is useful if you’re not using `torch.autograd` (see the sketch after this list) ([#701](https://github.com/pytorch/functorch/pull/701))
* Fixed a bug in the neural tangent kernels tutorial ([#788](https://github.com/pytorch/functorch/pull/788))
* Improved vmap over indexing with Tensors ([#777](https://github.com/pytorch/functorch/pull/777), [#862](https://github.com/pytorch/functorch/pull/862))
* Fixed vmap over `torch.nn.functional.mse_loss` ([#860](https://github.com/pytorch/functorch/pull/860))
* functorch now raises an error on unsupported combinations of `torch.autograd.functional` and functorch transforms ([#849](https://github.com/pytorch/functorch/pull/849))
* Improved docs on the limitations of functorch transforms ([#879](https://github.com/pytorch/functorch/pull/879))
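
For the `disable_autograd_tracking` item above, a minimal sketch (the model and loss are hypothetical):

```python
import torch
from functorch import make_functional, grad

model = torch.nn.Linear(3, 1)

# With disable_autograd_tracking=True, the extracted parameters carry no
# autograd history; gradients come from functorch's grad transform instead.
fmodel, params = make_functional(model, disable_autograd_tracking=True)

def loss(params, x):
    return fmodel(params, x).sum()

grads = grad(loss)(params, torch.randn(4, 3))  # one gradient per parameter
```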

0.1.1

We’re excited to present the functorch 0.1.1 minor bug-fix release, compatible with PyTorch 1.11. Please see [here](https://pytorch.org/functorch/stable/install.html) for installation instructions.

Changelog

* Fixed a bug when composing `jvp` with `vmap` ([#603](https://github.com/pytorch/functorch/pull/603))
* `jvp` now works when called inside `autograd.Function` ([#607](https://github.com/pytorch/functorch/pull/607))
* `make_functional` (and variants) now work with models that do parameter sharing (also known as weight tying) ([#620](https://github.com/pytorch/functorch/pull/620))
* Added batching rules for `nn.functional.silu`, `nn.functional.prelu`, `nn.functional.glu` ([#677](https://github.com/pytorch/functorch/pull/677), [#609](https://github.com/pytorch/functorch/pull/609), [#665](https://github.com/pytorch/functorch/pull/665))
* Fixed `vmap` support for `nn.functional.group_norm`, `binomial`, `torch.multinomial`, `Tensor.to` ([#685](https://github.com/pytorch/functorch/pull/685), [#670](https://github.com/pytorch/functorch/pull/670), [#672](https://github.com/pytorch/functorch/pull/672), [#649](https://github.com/pytorch/functorch/pull/649))

0.1.0

We’re excited to announce the first beta release of [functorch](https://github.com/pytorch/functorch). Heavily inspired by [Google JAX](https://github.com/google/jax), functorch is a library that adds composable function transforms to PyTorch. It aims to provide composable vmap (vectorization) and autodiff transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance.

Composable function transforms can help with a number of use cases that are tricky to do in PyTorch today:
- computing per-sample-gradients (or other per-sample quantities)
- running ensembles of models on a single machine
- efficiently batching together tasks in the inner-loop of MAML
- efficiently computing Jacobians and Hessians, as well as batched ones

Composing vmap (vectorization), vjp (reverse-mode AD), and jvp (forward-mode AD) transforms allows us to effortlessly express the above without designing a separate library for each.
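
For instance, Hessians (and batched Hessians) fall out of composing these transforms; a minimal sketch with a made-up scalar function:

```python
import torch
from functorch import jacrev, jacfwd, vmap

def f(x):
    return (x ** 3).sum()

x = torch.randn(5)

# Hessian as a composition of transforms: forward-mode over reverse-mode.
hess = jacfwd(jacrev(f))(x)  # shape (5, 5)

# The same composition, vmapped over a batch of inputs, gives batched Hessians.
xs = torch.randn(10, 5)
batched_hess = vmap(jacfwd(jacrev(f)))(xs)  # shape (10, 5, 5)
```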

For more details, please see our [documentation](https://pytorch.org/functorch/0.1.0/), [tutorials](https://pytorch.org/functorch/0.1.0/), and [installation instructions](https://pytorch.org/functorch/0.1.0/install.html).
