GluonTS

Latest version: v0.15.1

0.7.4

Backporting fixes:
- Fix Settings._inject to check if it can provide the value. (1501)
- Fix indentation (1500)
- Fix anomaly detection example (1515)
- Add constant dummy time features to TFT for yearly data (1518)

0.7.3

Backporting fixes:
- Fix get_lags_for_frequency for minute data in DeepVAR (1455)

0.7.2

Backporting fixes:
- Fixes for MXNet 1.8 (1403)
- Fix train-test split data leakage for m4_yearly and wiki-rolling_nips. (1445)
- Lock the version for mxnet theme to 0.3.15 (1451)
- Fix missing import in gluonts.mx.model.GluonEstimator (1450)

0.7.1

Backporting fixes:
- fix compatibility for pandas < 1.1 in `time_feature/_base.py` (1437)

0.7.0

GluonTS adds improved support for PyTorch-based models, new options for existing models, and general improvements to components and tooling.

Breaking changes

This release comes with a few breaking changes (but for good reasons). In particular, models trained and serialized prior to 0.7.0 may not be de-serializable using 0.7.0.

* Changes in model components and abstractions:
  * 1256 and 1206 contain significant changes to the `GluonEstimator` abstract class, as well as to the `InstanceSplitter` and `InstanceSampler` implementations; you are affected only if you implemented custom models based on `GluonEstimator`. The change makes it easier to define (and to understand, if you are reading the code) how fixed-length instances are sampled from the original dataset for training or validation. It also splits data transformation into explicit "pre-processing" steps (deterministic, e.g. feature engineering) and "iteration" steps (possibly random, e.g. random training instance sampling), so that the `train` method now accepts a `cache_data` option that keeps the pre-processed data in memory, whenever it fits, for faster iteration; see the training sketch after this list.
  * 1233 splits normalized and unnormalized time features from `gluonts.time_feature` into distinct types.
  * 1223 updates the interface of `ISSM` types, making it easier to define custom ones, e.g. with a custom set of seasonality patterns. Related changes to `DeepStateEstimator` enable these customizations when defining a DeepState model.
* Changes in `Trainer`:
  * 1178 removes the `input_names` argument from the `__call__` method. The provided data loaders are now expected to produce batches containing only the fields that the network being trained consumes; this is easily obtained by transforming the dataset with `SelectFields`.
* Package structure reorganization:
  * 1183 puts all MXNet-dependent modules under `gluonts.mx`, with some exceptions (`gluonts.model` and `gluonts.nursery`). With the new structure, MXNet does not need to be installed unless modules that depend on it are required.
  * 1402 makes the `Evaluator` class lighter by moving the evaluation metrics to `gluonts.evaluation.metrics` instead of having them as static methods of the class; this and the `gluonts.mx` move are illustrated in the import sketch after this list.
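
As noted above, 1256 and 1206 add a `cache_data` option to the `train` method. A minimal sketch of a training call that uses it, assuming a DeepAR model and the `m4_hourly` repository dataset (both chosen here only for illustration):

```python
from gluonts.dataset.repository.datasets import get_dataset
from gluonts.model.deepar import DeepAREstimator
from gluonts.mx.trainer import Trainer

# A small benchmark dataset from the GluonTS repository (illustrative choice).
dataset = get_dataset("m4_hourly")

estimator = DeepAREstimator(
    freq=dataset.metadata.freq,
    prediction_length=dataset.metadata.prediction_length,
    trainer=Trainer(epochs=5),
)

# cache_data=True keeps the deterministically pre-processed instances in
# memory, so that only the random instance sampling is redone each epoch.
predictor = estimator.train(dataset.train, cache_data=True)
```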
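
The package reorganization from 1183 and the metrics move from 1402 mostly show up as changed import paths. A minimal before/after sketch, assuming `mse` is among the functions now exposed by `gluonts.evaluation.metrics` (the module move is from 1402; the specific function name is an assumption):

```python
import numpy as np

# Before 0.7.0, MXNet-specific modules sat at the top level, e.g.
#   from gluonts.trainer import Trainer
#   from gluonts.distribution import StudentTOutput
# From 0.7.0 on they live under gluonts.mx:
from gluonts.mx.trainer import Trainer
from gluonts.mx.distribution import StudentTOutput

# Evaluation metrics are now plain functions instead of Evaluator static methods.
from gluonts.evaluation.metrics import mse

print(mse(np.array([1.0, 2.0, 3.0]), np.array([1.5, 2.0, 2.5])))  # ~0.167
```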

New features

PyTorch support (see the serialization sketch after this list):
* PyTorchPredictor serde (1086)
* Add equality operator for PytorchPredictor (1190)
* Allow Pytorch predictor to be trained and loaded on different devices (1244)
* Add distribution-based forecast types for torch, output layers, tests (1266)
* Add more distribution output classes for PyTorch, add tests (1272)
* Add pytorch tutorial notebook (1289)
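
The serde and device-handling changes above (1086, 1244) mean the usual `serialize`/`deserialize` round trip also works for PyTorch-based predictors. A minimal sketch of that round trip, written as a helper so it stays agnostic to how the predictor was trained:

```python
from pathlib import Path

from gluonts.model.predictor import Predictor


def save_and_reload(predictor: Predictor, path: Path) -> Predictor:
    """Round-trip a trained predictor through disk (sketch)."""
    path.mkdir(parents=True, exist_ok=True)
    predictor.serialize(path)
    # Predictor.deserialize dispatches on the serialized predictor type,
    # so a PyTorchPredictor comes back as a PyTorchPredictor.
    return Predictor.deserialize(path)
```

Per 1244, a PyTorch predictor restored this way can also be loaded onto a different device than the one it was trained on.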

Distributions (see the cdf/quantile sketch after this list):
* Zero Inflated Poisson Distribution (1130)
* GenPareto cdf and quantile functions (1142)
* Added quantile function based on cdf bisection (1145)
* Add AffineTransformedDistribution (1161)
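
1142 and 1145 above are about `cdf` and `quantile` methods on distribution objects. A minimal sketch of that interface using the Gaussian distribution from `gluonts.mx.distribution` (chosen here only for illustration; per 1142, GenPareto gained the same two methods):

```python
import mxnet as mx

from gluonts.mx.distribution import Gaussian

# A batch of two Gaussian distributions.
dist = Gaussian(mu=mx.nd.array([0.0, 1.0]), sigma=mx.nd.array([1.0, 2.0]))

# cdf evaluates P(X <= x) element-wise for each distribution in the batch.
print(dist.cdf(mx.nd.array([0.0, 1.0])))  # -> [0.5, 0.5]

# quantile maps levels in (0, 1) back to values; the output has shape
# (num_levels, *batch_shape).
print(dist.quantile(mx.nd.array([0.1, 0.5, 0.9])))
```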

Models (see the R predictor sketch after this list):
* add estimator/predictor types for autogluon tabular (1105)
* Added thetaf method to the R predictor (1281)
* Adding neural ode code for lotka volterra and corresponding notebook (1023)
* Added lightgbm support for QRX/Rotbaum (1365)
* Deepar imputation model (1380)
* Initial commit for GMM-TPP (1397)
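
Most entries above are self-contained model additions; the `thetaf` entry (1281) adds another forecasting method to the R-based predictor. A hedged sketch of selecting it, assuming the predictor is `RForecastPredictor` with a `method_name` argument, and that R, rpy2, and the R `forecast` package are installed:

```python
from gluonts.model.r_forecast import RForecastPredictor  # requires R + rpy2

# "thetaf" is the Theta method added in 1281; argument names are assumed here.
predictor = RForecastPredictor(
    freq="M",
    prediction_length=12,
    method_name="thetaf",
)

# forecasts = list(predictor.predict(some_monthly_dataset))
```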

Datasets & tooling (see the dataset repository sketch after this list):
* Implemented generate_rolling_datasets (844)
* Add a MinMax scaler (1134)
* introduce functional api for data generation recipes (1153)
* include m3 dataset (1169)
* Improvements for data generation (1195)
* Add most forecasters as entry points. (1351)
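
On the dataset side, 1169 registers the M3 data in the dataset repository. A minimal sketch of how registered datasets are discovered and loaded, assuming `dataset_recipes` still holds the name-to-recipe mapping (the M3 files themselves may require a manual download step because of the dataset's license):

```python
from gluonts.dataset.repository.datasets import dataset_recipes, get_dataset

# dataset_recipes maps every registered dataset name to its generation recipe;
# after 1169 the M3 variants should show up among these keys.
print(sorted(name for name in dataset_recipes if name.startswith("m3")))

# Any registered name can then be loaded the same way (key name assumed):
# m3 = get_dataset("m3_monthly")
```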

0.6.9

Backporting fixes:
- Fix train-test split data leakage for m4_yearly and wiki-rolling_nips. (1445)
- Lock the version for mxnet theme to 0.3.15 (1451)
