PyTorch Forecasting

Latest version: v1.3.0

0.10.2

Added

- DeepVAR network (#923)
- Enable quantile loss for N-HiTS (#926); see the sketch after this list
- MQF2 loss (multivariate quantile loss) (#949)
- Non-causal attention for TFT (#949)
- Tweedie loss (#949)
- ImplicitQuantileNetworkDistributionLoss (#995)
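
A minimal sketch of the quantile-loss option for N-HiTS, assuming a pre-built `TimeSeriesDataSet` named `training`; the quantile values are illustrative:

```python
# Hedged sketch: quantile loss is now supported for N-HiTS (#926).
from pytorch_forecasting import NHiTS
from pytorch_forecasting.metrics import QuantileLoss

# `training` is an assumed, pre-built TimeSeriesDataSet.
model = NHiTS.from_dataset(
    training,
    loss=QuantileLoss(quantiles=[0.1, 0.5, 0.9]),  # predict the 10%/50%/90% quantiles
)
```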

Fixed

- Fix learning scale schedule (#912)
- Fix TFT list/tuple issue at interpretation (#924)
- Allowed encoder length down to zero for EncoderNormalizer if transformation is not needed (#949)
- Fix Aggregation and CompositeMetric resets (#949)

Changed

- Dropped Python 3.6 support, added Python 3.10 support (#479)
- Refactored dataloader sampling: moved samplers to the pytorch_forecasting.data.samplers module (#479); see the sketch after this list
- Changed transformation format for encoders from tuple to dict (#949)
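
A minimal sketch of the new import path; the `"synchronized"` shorthand for `to_dataloader` reflects the library as I understand it and should be treated as an assumption:

```python
# Samplers moved to pytorch_forecasting.data.samplers in the refactor (#479).
from pytorch_forecasting.data.samplers import TimeSynchronizedBatchSampler

# `training` is an assumed TimeSeriesDataSet; "synchronized" selects the
# TimeSynchronizedBatchSampler so all samples in a batch share time points.
train_dataloader = training.to_dataloader(train=True, batch_size=64, batch_sampler="synchronized")
```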

Contributors

- jdb78

0.10.1

Fixed

- Fix creating tensors on the correct devices (#908)
- Fix gradient calculation with MultiLoss (#908)

Contributors

- jdb78

0.10.0

Added

- Added new `N-HiTS` network that has consistently beaten `N-BEATS` (#890)
- Allow using [torchmetrics](https://torchmetrics.readthedocs.io/) as loss metrics (#776)
- Enable fitting `EncoderNormalizer()` with limited data history using the `max_length` argument (#782); see the sketch after this list
- More flexible `MultiEmbedding()` with convenience `output_size` and `input_size` properties (#829)
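
A minimal sketch of the `max_length` argument, assuming a hypothetical long-format DataFrame `df` with the column names shown:

```python
# Hedged sketch: cap the history EncoderNormalizer is fitted on (#782).
from pytorch_forecasting import TimeSeriesDataSet
from pytorch_forecasting.data import EncoderNormalizer

training = TimeSeriesDataSet(
    df,  # hypothetical long-format DataFrame
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=60,
    max_prediction_length=10,
    # normalize the target per sample, fitting on at most the last 50 encoder steps
    target_normalizer=EncoderNormalizer(max_length=50),
)
```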

Fixed

- Fix concatenation of attention (#902)
- Fix pip install via GitHub (#798)

Contributors

- jdb78
- christy
- lukemerrick
- Seon82

0.9.2

Added

- Added support for running `lightning.trainer.test` (#759); see the sketch below
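
A short sketch using the standard pytorch-lightning calls; the model and dataloaders are assumed to exist:

```python
# Hedged sketch: pytorch-forecasting models can now be evaluated via trainer.test (#759).
import pytorch_lightning as pl

trainer = pl.Trainer(max_epochs=10, gradient_clip_val=0.1)
trainer.fit(model, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)
trainer.test(model, dataloaders=test_dataloader)  # runs the model's test loop
```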

Fixed

- Fix unintended mutation of `x_cont` (#732)
- Compatibility with pytorch-lightning 1.5 (#758)

Contributors

- eavae
- danielgafni
- jdb78

0.9.1

Added

- Use target name instead of target number for logging metrics (#588)
- Optimizer can be initialized by passing a string, class, or function (#602); see the sketch after this list
- Add support for multiple outputs in Baseline model (#603)
- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (#619)
- Dropped support for Python 3.6 and added support for Python 3.9 (#639)
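
A short sketch of the optimizer option; the dataset is assumed and `"adam"` stands in for any supported optimizer name:

```python
# Hedged sketch: the optimizer can now be given as a string, class, or function (#602).
from pytorch_forecasting import TemporalFusionTransformer

tft = TemporalFusionTransformer.from_dataset(
    training,            # assumed, pre-built TimeSeriesDataSet
    learning_rate=0.03,
    optimizer="adam",    # a class or factory function would work here too
)
```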

Fixed

- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (#550)
- Added missing transformation of prediction for MLP (#602)
- Fixed logging hyperparameters (#688)
- Ensure MultiNormalizer fit state is detected (#681)
- Fix infinite loop in TimeDistributedEmbeddingBag (#672)

Contributors

- jdb78
- TKlerx
- chefPony
- eavae
- L0Z1K

0.9.0

Breaking changes

- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`.
  Use `categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True))` instead (#518)
- Renamed parameter `allow_missings` of `TimeSeriesDataSet` to `allow_missing_timesteps` (#518)
- Transparent handling of transformations. Forward methods should now call two new methods (#518):

  - `transform_output` to explicitly rescale the network outputs into the de-normalized space
  - `to_network_output` to create a dict-like named tuple that allows tracing the modules with PyTorch's JIT. Only `prediction`, the main network output, is still required.

Example:

```python
def forward(self, x):
    # run the network in normalized space
    normalized_prediction = self.module(x)
    # rescale the output to the de-normalized space using the target scale
    prediction = self.transform_output(prediction=normalized_prediction, target_scale=x["target_scale"])
    # wrap in a dict-like named tuple so the module can be traced with JIT
    return self.to_network_output(prediction=prediction)
```

Fixed

- Fix quantile prediction for tensors on GPUs for distribution losses (#491)
- Fix hyperparameter update for RecurrentNetwork.from_dataset method (#497)

Added

- Improved validation of input parameters of TimeSeriesDataSet (#518)
