pytorch-forecasting

Latest version: v1.3.0

0.5.1

This release has only one purpose: allow usage of PyTorch Lightning 1.0. All tests have passed.

---

0.5.0

Added

- Additional checks for `TimeSeriesDataSet` inputs - now flagging if series are lost due to a high `min_encoder_length` and ensuring that parameters are integers
- Enable classification - simply change the target in the `TimeSeriesDataSet` to a non-float variable, use the `CrossEntropy` metric to optimize, and output as many classes as you want to predict (see the sketch below)
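
A minimal sketch of such a classification setup, assuming a DataFrame `data` with a categorical target column `"status"` and a group column `"series"` (both names, as well as the lengths, are illustrative):

```python
from pytorch_forecasting import TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.metrics import CrossEntropy

# a non-float (categorical) target switches the dataset to classification
training = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="status",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_unknown_categoricals=["status"],
)

# optimize cross-entropy over the predicted classes
model = TemporalFusionTransformer.from_dataset(training, loss=CrossEntropy())
```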

Changed

- Ensured PyTorch Lightning 0.10 compatibility
- Using `LearningRateMonitor` instead of `LearningRateLogger`
- Use `EarlyStopping` callback in trainer `callbacks` instead of the `early_stopping` argument (see the sketch after this list)
- Update metric system `update()` and `compute()` methods
- Use `Tuner(trainer).lr_find()` instead of `trainer.lr_find()` in tutorials and examples
- Update poetry to 1.1.0
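
A minimal sketch of the corresponding PyTorch Lightning setup (the monitored metric, patience, and epoch count are illustrative):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor

early_stop = EarlyStopping(monitor="val_loss", patience=10, mode="min")  # replaces the early_stopping argument
lr_monitor = LearningRateMonitor()  # replaces LearningRateLogger

trainer = Trainer(max_epochs=30, callbacks=[early_stop, lr_monitor])

# For learning-rate search, recent Lightning versions provide a tuner instead of trainer.lr_find():
#   from pytorch_lightning.tuner import Tuner
#   Tuner(trainer).lr_find(model, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)
```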

---

0.4.1

Fixes

Model

- Removed attention to current datapoint in TFT decoder to generalise better over various sequence lengths
- Allow resuming an Optuna hyperparameter tuning study

Data

- Fixed inconsistent naming and calculation of `encoder_length` in `TimeSeriesDataSet` when added as a feature

Contributors

- jdb78

---

0.4.0

Added

Models

- Backcast loss for the N-BEATS network for better regularisation
- `logging_metrics` as an explicit argument to models (see the sketch below)
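
A minimal sketch combining both additions, assuming an existing `TimeSeriesDataSet` named `training` (parameter values are illustrative):

```python
from torch import nn

from pytorch_forecasting import NBeats
from pytorch_forecasting.metrics import MAE, SMAPE

net = NBeats.from_dataset(
    training,
    learning_rate=3e-2,
    backcast_loss_ratio=0.1,  # weight of the backcast loss relative to the forecast loss
    logging_metrics=nn.ModuleList([SMAPE(), MAE()]),  # metrics reported during training and validation
)
```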

Metrics

- MASE (Mean absolute scaled error) metric for training and reporting
- Metrics can be composed, e.g. `0.3 * metric1 + 0.7 * metric2` (see the example after this list)
- Aggregation metric that is computed on the mean prediction over all samples to reduce mean bias
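
A minimal example of composing metrics (the weights are illustrative):

```python
from pytorch_forecasting.metrics import MAE, SMAPE

# weighted combination of two metrics, usable wherever a single metric is expected
composite_metric = 0.3 * MAE() + 0.7 * SMAPE()
```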

Data

- Increased speed of parsing data with missing datapoints - about 2s for 1M data points, or 0.2s if `numba` is installed
- Time-synchronize samples in batches: ensure that all samples in each batch share the same time index in the decoder (see the sketch below)
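
A minimal sketch of requesting time-synchronized batches, assuming an existing `TimeSeriesDataSet` named `training` (the batch size is illustrative):

```python
# samples within each batch share the same decoder time index
train_dataloader = training.to_dataloader(
    train=True,
    batch_size=64,
    batch_sampler="synchronized",
)
```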

Breaking changes

- Improved subsequence detection in `TimeSeriesDataSet` ensures that there exists a subsequence starting and ending on each point in time.
- Fix `min_encoder_length = 0` being ignored and processed as `min_encoder_length = max_encoder_length`

Contributors

- jdb78
- dehoyosb

---

0.3.1

- More tests driving coverage to ~90%
- Performance tweaks for temporal fusion transformer
- Reformatting with `isort`
- Improve documentation - particularly expand on hyperparameter tuning

Fixed

- Fix PoissonLoss quantiles calculation
- Fix N-BEATS visualisations

---

0.3.0

Added

- Calculate partial dependence for a variable (see the sketch after this list)
- Improved documentation - in particular, added an FAQ section and improved the tutorials
- Data for examples and tutorials can now be downloaded - cloning the repository is no longer required
- Added the Ranger optimizer from the `pytorch_ranger` package and fixed its warnings (part of preparations for the conda package release)
- Use a GPU for tests if available, as part of the preparation for GPU tests in CI
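
A minimal sketch of the partial dependence calculation, assuming a trained `model` and a `TimeSeriesDataSet` named `validation`; the variable name and the value grid are illustrative:

```python
import numpy as np

# partial dependence of the prediction on a single input variable
dependency = model.predict_dependency(
    validation,
    variable="discount",
    values=np.linspace(0.0, 1.0, 30),
)
```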

Changes

- **BREAKING**: Fixed typo - renamed `add_decoder_length` to `add_encoder_length` in `TimeSeriesDataSet`

Bugfixes

- Fix plotting of predictions vs. actuals by slicing variables

---
