pytorch-pfn-extras

Latest version: v0.8.1

0.4.2

This release includes the following enhancements and bug-fixes:

- Adds support for PyTorch 1.9; with this release, the minimum supported PyTorch version is 1.7.
- Adds support for Python 3.9.
- Fixes issues with `LazyModules` when managing the `state_dict`.
- Fixes issues with temporary backup files for snapshots.

See [the list of merged pull-requests](https://github.com/pfnet/pytorch-pfn-extras/pulls?q=is%3Apr+milestone%3Av0.4.2+is%3Aclosed) for the details.

0.4.1

This release includes the following enhancements and bug-fixes:

- Add interoperability functions between CuPy `ndarray` and PyTorch `Tensor` (a short sketch follows this list)
  - `ppe.from_ndarray`, `ppe.as_ndarray`, `ppe.get_xp`, etc. See [docs](https://github.com/pfnet/pytorch-pfn-extras/blob/master/docs/cuda.md) for details.
- Fix invalid `LogReport` output when the `json-lines` format is specified
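
As a rough sketch of these helpers on a CUDA device (CuPy installed; exact semantics are described in the linked docs, so treat the comments below as assumptions):

```python
import torch
import pytorch_pfn_extras as ppe

t = torch.arange(4, dtype=torch.float32, device='cuda')
arr = ppe.as_ndarray(t)      # expose the CUDA tensor as a CuPy ndarray
t2 = ppe.from_ndarray(arr)   # convert a CuPy ndarray back to a torch.Tensor

xp = ppe.get_xp(t)           # cupy for CUDA tensors, numpy for CPU tensors
print(xp.__name__)           # 'cupy'
```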

See [the list of merged pull-requests](https://github.com/pfnet/pytorch-pfn-extras/issues?q=is%3Aclosed+milestone%3Av0.4.0) for the details.

0.4.0

This release includes the following enhancements and bug-fixes:

- Support PyTorch 1.8.1
- Fix several bugs in `TabularDataset`
- Fix several bugs with snapshot autoload feature
- `LogReport` now allows appending results using the JSON Lines or YAML file formats
- Add Batch Normalization-aware gradient checkpointing
- Add LRScheduler extension (see the sketch after this list)
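
As a hedged illustration of the `LogReport` and `LRScheduler` items above (the `format` argument name and the trigger placement are assumptions to verify against the API reference):

```python
import torch
import pytorch_pfn_extras as ppe
from pytorch_pfn_extras.training import extensions

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5)

manager = ppe.training.ExtensionsManager(
    model, optimizer, max_epochs=10, iters_per_epoch=100)

# Append one entry per report to a JSON Lines log instead of rewriting a JSON
# file on every report (the `format` argument name is an assumption).
manager.extend(extensions.LogReport(filename='log', format='json-lines'))

# Step the wrapped torch.optim scheduler on the extension trigger (here, per epoch).
manager.extend(extensions.LRScheduler(scheduler), trigger=(1, 'epoch'))
```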

This release also includes the following backward-incompatible changes:

- Drop PyTorch 1.6 & Python 3.5 support
- Remove the bundled DataLoader, as the minimum supported version is now PyTorch 1.7, whose DataLoader supports the same features

We have upstreamed `ppe.nn.LazyLinear` and `ppe.nn.LazyConv[123]d` ([lazy modules](https://github.com/pfnet/pytorch-pfn-extras/blob/master/docs/lazy.md#lazy-modules)), and they are now available in PyTorch 1.8! Use of [`torch.nn.LazyLinear`](https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html) and [`torch.nn.LazyConv[123]d`](https://pytorch.org/docs/stable/generated/torch.nn.LazyConv1d.html) instead of PPE implementations is now recommended. See [`torch.nn.LazyModuleMixin`](https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html) for the details of the PyTorch lazy implementations.
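
For reference, a minimal example of the upstream lazy modules (PyTorch 1.8 or later):

```python
import torch
import torch.nn as nn

layer = nn.LazyLinear(out_features=10)  # in_features is inferred on the first forward
x = torch.randn(4, 32)
y = layer(x)                            # weight is materialized with shape (10, 32)
print(layer.weight.shape)               # torch.Size([10, 32])
```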

See [the list of merged pull-requests](https://github.com/pfnet/pytorch-pfn-extras/issues?q=is%3Aclosed+milestone%3Av0.4.0) for the details.

0.3.2

This release includes the following enhancements and bug-fixes:

* Support PyTorch 1.7.0
* Add a custom `DistributedDataParallel` implementation that handles `torch.utils.checkpoint` and dynamic computational graphs (see the sketch after this list)
* Add metrics option to `Evaluator` extension to run metrics functions for every batch
* Expose `ExtensionsManager.models` and `ExtensionsManager.optimizers` so they can be used from extensions
* Add custom types for Optuna in the config system
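
A hedged sketch of the custom `DistributedDataParallel` wrapper; the import path, the toy model, and the description of its gradient reduction are assumptions based on the project docs, not a drop-in recipe:

```python
import torch
import torch.distributed as dist
from torch.utils import checkpoint
from pytorch_pfn_extras.nn.parallel import DistributedDataParallel


class CheckpointedModel(torch.nn.Module):
    """Toy model that uses activation checkpointing in forward."""

    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(16, 16)
        self.l2 = torch.nn.Linear(16, 4)

    def forward(self, x):
        h = checkpoint.checkpoint(self.l1, x)
        return self.l2(h)


dist.init_process_group('nccl')  # assumes the usual env:// style initialization
model = CheckpointedModel().cuda()
# The PPE wrapper reduces gradients after backward finishes, which is what lets
# torch.utils.checkpoint and dynamically changing graphs work under DDP.
ddp_model = DistributedDataParallel(model)
```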

See [the list of merged pull-requests](https://github.com/pfnet/pytorch-pfn-extras/milestone/5?closed=1) for the details.

0.3.1

This release includes the following enhancements and bug-fixes:

* Add `pytorch_pfn_extras.cuda` APIs which add interoperability with CuPy
* Add extensions for Jupyter Notebook (`PrintReportNotebook` and `ProgressBarNotebook`; see the sketch after this list)
* Fix error when resuming training using `IgniteExtensionsManager`
* Fix the backward-incompatible removal of the `updater` attribute of `ExtensionsManager` introduced in the v0.3.0 release
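
A minimal sketch of registering the notebook-friendly extensions with an `ExtensionsManager`; the training loop is omitted and all constructor arguments are left at their defaults:

```python
import torch
import pytorch_pfn_extras as ppe
from pytorch_pfn_extras.training import extensions

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
manager = ppe.training.ExtensionsManager(
    model, optimizer, max_epochs=5, iters_per_epoch=100)

manager.extend(extensions.LogReport())
manager.extend(extensions.PrintReportNotebook())   # report table rendered in the cell output
manager.extend(extensions.ProgressBarNotebook())   # widget-based progress bar
```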

See [the list of merged pull-requests](https://github.com/pfnet/pytorch-pfn-extras/milestone/4?closed=1) for the details.

Compatibility Notes

Starting in v0.3.0, the `updater` attribute of the `ExtensionsManager`, which is a pseudo interface for compatibility with Chainer's extensions, has been deprecated. Extensions using the attribute to access training statistics (e.g., epoch/iteration number) must be changed to directly use attributes of `ExtensionsManager` (e.g., `manager.epoch` instead of `manager.updater.epoch`). Also, if you are using the `updater` in a snapshot filename template, you need to update it too (e.g., from `snapshot_iter_{.updater.iteration}` to `snapshot_iter_{.iteration}`). In this release, accessing the `updater` attribute raises a `DeprecationWarning`.
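
For illustration, a hedged sketch of the migration described above (the custom extension is a made-up example):

```python
from pytorch_pfn_extras.training import extension, extensions

# Snapshot filename template: drop the `.updater` component.
snapshot = extensions.snapshot(filename='snapshot_iter_{.iteration}')


@extension.make_extension(trigger=(1, 'epoch'))
def report_progress(manager):
    # Read statistics directly from the manager instead of `manager.updater`.
    print('epoch:', manager.epoch, 'iteration:', manager.iteration)
```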

0.3.0

This release includes the following enhancements and bug-fixes:

* Add `pytorch_pfn_extras.onnx` APIs, an extension to `torch.onnx` (see the sketch after this list)
* Add `pytorch_pfn_extras.nn.LazyBatchNorm[123]d`
* Add `pytorch_pfn_extras.dataloaders.DataLoader` which reuses the worker process
* Add `pytorch_pfn_extras.dataset.SharedDataset`
* Add `pytorch_pfn_extras.dataset.TabularDataset`
* Add `pytorch_pfn_extras.writing.TensorBoardWriter`
* Add `step_optimizers` parameter to `ExtensionsManager.run_iteration()`
* Fix memory leak in Reporter
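
A short sketch of the `pytorch_pfn_extras.onnx` testcase export helper mentioned above; the model and the output directory name are arbitrary examples:

```python
import torch
import pytorch_pfn_extras.onnx as ppe_onnx

model = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
x = torch.randn(1, 8)

# Exports the ONNX model together with input/output protobuf files under the
# given directory, which is convenient for ONNX runtime testing.
ppe_onnx.export_testcase(model, x, 'out_dir')
```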

See [the list of merged pull-requests](https://github.com/pfnet/pytorch-pfn-extras/milestone/2?closed=1) for the details.

Compatibility Notes

This release removes the `updater` attribute from the `ExtensionsManager`, which was a pseudo interface for compatibility with Chainer's extensions. Extensions using the attribute to access training statistics (e.g., epoch/iteration number) must be changed to directly use attributes of `ExtensionsManager` (e.g., `manager.epoch`).
