pytorch-forecasting

Latest version: v1.1.1


0.10.0

Added

- Added new `N-HiTS` network that has consistently beaten `N-BEATS` (890)
- Allow using [torchmetrics](https://torchmetrics.readthedocs.io/) as loss metrics (776)
- Enable fitting `EncoderNormalizer()` with limited data history using `max_length` argument (782)
- More flexible `MultiEmbedding()` with convenience `output_size` and `input_size` properties (829)
- Fix concatenation of attention (902)
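The `max_length` argument above caps how much history is used when fitting the normalizer. A minimal stdlib sketch of that idea, assuming a hypothetical `fit_stats` helper (this is not the `EncoderNormalizer` implementation, just an illustration of fitting on a limited window):

```python
import statistics

def fit_stats(history, max_length=None):
    """Return (mean, std) fitted on at most the last max_length points.

    Hypothetical sketch of the idea behind EncoderNormalizer(max_length=...),
    not the pytorch-forecasting implementation.
    """
    # restrict fitting to the most recent max_length observations
    window = history[-max_length:] if max_length is not None else history
    mean = statistics.fmean(window)
    std = statistics.pstdev(window) or 1.0  # guard against a zero scale
    return mean, std

mean, std = fit_stats([1.0, 1.0, 1.0, 5.0, 7.0], max_length=2)
print(mean, std)  # fitted on [5.0, 7.0] only -> 6.0 1.0
```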

Fixed

- Fix pip install via github (798)

Contributors

- jdb78
- christy
- lukemerrick
- Seon82

0.9.2

Added

- Added support for running `lightning.trainer.test` (759)

Fixed

- Fix inadvertent mutation of `x_cont` (732)
- Compatibility with pytorch-lightning 1.5 (758)

Contributors

- eavae
- danielgafni
- jdb78

0.9.1

Added

- Use target name instead of target number for logging metrics (588)
- Optimizer can be initialized by passing string, class or function (602)
- Add support for multiple outputs in Baseline model (603)
- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (619)
- Dropped support for Python 3.6 and added support for Python 3.9 (639)
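The optimizer entry above accepts a string, a class, or a factory function. A torch-free sketch of that dispatch pattern, using toy optimizer classes and a hypothetical `resolve_optimizer` name (illustrative only, not the library's API):

```python
class SGD:
    def __init__(self, params, lr=0.01):
        self.params, self.lr = params, lr

class Adam:
    def __init__(self, params, lr=0.001):
        self.params, self.lr = params, lr

# string names map to optimizer classes
_REGISTRY = {"sgd": SGD, "adam": Adam}

def resolve_optimizer(spec, params, **kwargs):
    """Instantiate an optimizer from a string, class, or factory callable."""
    if isinstance(spec, str):
        return _REGISTRY[spec.lower()](params, **kwargs)
    if isinstance(spec, type):
        return spec(params, **kwargs)
    if callable(spec):
        return spec(params, **kwargs)
    raise TypeError(f"Cannot resolve optimizer from {spec!r}")

opt = resolve_optimizer("adam", params=[1, 2, 3], lr=0.1)
print(type(opt).__name__, opt.lr)  # Adam 0.1
```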

Fixed

- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (550)
- Added missing transformation of prediction for MLP (602)
- Fixed logging hyperparameters (688)
- Ensure MultiNormalizer fit state is detected (681)
- Fix infinite loop in TimeDistributedEmbeddingBag (672)

Contributors

- jdb78
- TKlerx
- chefPony
- eavae
- L0Z1K

0.9.0

Breaking changes

- Removed `dropout_categoricals` parameter from `TimeSeriesDataSet`.
Use `categorical_encoders=dict(<variable_name>=NaNLabelEncoder(add_nan=True))` instead (518)
- Rename parameter `allow_missings` for `TimeSeriesDataSet` to `allow_missing_timesteps` (518)
- Transparent handling of transformations. Forward methods should now call two new methods (518):

- `transform_output` to explicitly rescale the network outputs into the de-normalized space
- `to_network_output` to create a dict-like named tuple. This allows tracing the modules with PyTorch's JIT. Only `prediction` is still required which is the main network output.

Example:

```python
def forward(self, x):
    normalized_prediction = self.module(x)
    prediction = self.transform_output(prediction=normalized_prediction, target_scale=x["target_scale"])
    return self.to_network_output(prediction=prediction)
```


Fixed

- Fix quantile prediction for tensors on GPUs for distribution losses (491)
- Fix hyperparameter update for RecurrentNetwork.from_dataset method (497)

Added

- Improved validation of input parameters of TimeSeriesDataSet (518)

0.8.5

Added

- Allow lists for multiple losses and normalizers (405)
- Warn if normalization uses a scale `< 1e-7` (429)
- Allow usage of distribution losses in all settings (434)
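The first entry above allows passing a list of losses, one per target. A minimal stdlib sketch of how per-target losses can be combined into one value, using a hypothetical `MultiLoss` stand-in (not the library's class):

```python
class MultiLoss:
    """Toy stand-in: apply one loss per target and sum the weighted results."""
    def __init__(self, losses, weights=None):
        self.losses = losses
        self.weights = weights or [1.0] * len(losses)

    def __call__(self, predictions, targets):
        # predictions/targets: one sequence per target variable
        return sum(
            w * loss(p, t)
            for w, loss, p, t in zip(self.weights, self.losses, predictions, targets)
        )

def mae(pred, target):
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

combined = MultiLoss([mae, mse])
# MAE on target 1 is 0.5, MSE on target 2 is 2.0 -> 2.5
print(combined([[1.0, 2.0], [1.0, 2.0]], [[1.0, 3.0], [1.0, 4.0]]))
```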

Fixed

- Fix issue when predicting and data is on different devices (402)
- Fix non-iterable output (404)
- Fix problem with moving data to CPU for multiple targets (434)

Contributors

- jdb78
- domplexity

0.8.4

Added

- Added filter functionality to the timeseries dataset (329)
- Added simple models such as LSTM, GRU and an MLP on the decoder (380)
- Allow usage of any torch optimizer such as SGD (380)

Fixed

- Moving predictions to CPU to avoid running out of memory (329)
- Correct determination of `output_size` for multi-target forecasting with the TemporalFusionTransformer (328)
- Tqdm autonotebook fix to work outside of Jupyter (338)
- Fix issue with yaml serialization for TensorboardLogger (379)

Contributors

- jdb78
- JakeForsey
- vakker
