NeuralHydrology

Latest version: v1.10.0


1.10.0

Added
---------
- `HybridModel`: a wrapper class that combines deep learning models with conceptual hydrology models. The deep learning model parameterizes the conceptual model, which must also be implemented in PyTorch. In the current implementation, the deep learning model is always a standard LSTM, as commonly used in the literature.
- `BaseConceptualModel`: a parent class that facilitates adding new conceptual models to the NeuralHydrology framework.
- `SHM`: an implementation of `BaseConceptualModel` that adds a modified version of the SHM to the modelzoo. See the documentation for details and references for this model.
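The hybrid idea can be sketched with a toy conceptual model whose parameters would come from the deep learning model. The bucket model below is purely illustrative (it is not the SHM, and `k`/`smax` are hypothetical stand-ins for whatever parameters the LSTM would predict):

```python
import numpy as np

def simple_bucket(precip, et, k, smax):
    """Toy linear-reservoir conceptual model (NOT the actual SHM):
    storage fills with precipitation, loses evapotranspiration, and
    drains at rate k; overflow above smax becomes direct runoff."""
    storage, runoff = 0.0, []
    for p, e in zip(precip, et):
        storage = max(storage + p - e, 0.0)
        overflow = max(storage - smax, 0.0)  # saturation excess
        storage -= overflow
        q = k * storage                      # linear drainage
        storage -= q
        runoff.append(q + overflow)
    return np.array(runoff)

# In the hybrid setup, the LSTM would predict the conceptual parameters
# (here k and smax) from meteorological inputs and basin attributes;
# below we just plug in fixed stand-in values.
precip = np.array([5.0, 0.0, 10.0, 2.0])
et = np.array([1.0, 1.0, 1.0, 1.0])
q = simple_bucket(precip, et, k=0.1, smax=20.0)
```

Because the conceptual model is differentiable PyTorch code in the real implementation, gradients can flow from the simulated runoff back into the LSTM that produced the parameters.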

Fixes
-------
- Removed the circular import error of #157. As a result, `load_scaler` has been moved to `datautils/utils.py`.
- Updated the documentation to resolve #138.
- Fixed some corner cases in the sampling utils related to #154.
- Made minor changes to the `Tester` class to resolve #158.

Huge thanks to eduardoAcunaEspinoza for contributing the `HybridModel`, `BaseConceptualModel`, and `SHM` implementations.

1.9.1

- Fix recursive import issue
- Fix typos
- Remove old and broken code.

v.1.9.0
Added
- Option to end an epoch early after a given number of update steps. Use `max_updates_per_epoch` to define a fixed number of update steps per epoch. This can be useful if you train a model with many samples and want to evaluate it more frequently than just after each full epoch. See also #131.
- Redesign of the config argument `experiment_name`. You can now add wildcards to the experiment name by putting any config argument in curly brackets, e.g. `my_awesome_run_{batch_size}_{hidden_size}`. When you start training, the wildcards are replaced with the respective values of the config arguments (here, batch size and hidden size), keeping the name of the config argument for easier recognition. In our experience, this helps if you, e.g., do some hyperparameter tuning and don't want to change the experiment name every time but still want expressive folder names and run names in TensorBoard. For details, check the documentation.
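The wildcard substitution can be sketched as follows; this helper only mimics the described behavior (keeping the argument name next to its value) and is not the library's actual implementation:

```python
import re

def resolve_experiment_name(name: str, cfg: dict) -> str:
    # Replace each {arg} wildcard with "arg" followed by its config value,
    # keeping the argument name in the run name for easier recognition.
    return re.sub(r"\{(\w+)\}", lambda m: f"{m.group(1)}{cfg[m.group(1)]}", name)

cfg = {"batch_size": 256, "hidden_size": 64}
run_name = resolve_experiment_name("my_awesome_run_{batch_size}_{hidden_size}", cfg)
# run_name == "my_awesome_run_batch_size256_hidden_size64"
```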

Fixes
- Some pandas FutureWarnings


v.1.8.1
Fixes
-------
- Fixed #133 and an issue where the metrics csv file would be empty for multi-frequency runs.
- Fixed a bug where uncertainty (GMM & CMAL) runs with `predict_last_n > 1` would generate incorrect predictions due to a mix-up of dimensions. This was discovered in an MTS-LSTM setting where the hourly branch has a `predict_last_n` of 24. Visually, this resulted in 24-hour steps in the predictions. UMAL and MCD are unaffected because they sample differently.
- Fixed an issue with uncertainty runs with active negative sample handling, where centering would cut off values below the normalized value of zero (i.e., usually the mean) rather than the actual zero. The fix calculates the normalized value of zero from the scaler and uses it as the cutoff value. Also includes a faster check for negative values (vectorized `torch.any` instead of Python's `any`). Relates also to #88.
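Numerically, the negative-sample fix amounts to clipping in normalized space at the normalized value of zero rather than at 0. A minimal sketch with made-up scaler statistics:

```python
import numpy as np

# With targets standardized as (x - mean) / std, the correct cutoff for
# "negative" values is the normalized value of zero, not 0.0 itself
# (0.0 in normalized space corresponds to the raw mean, not raw zero).
mean, std = 2.5, 1.5                    # hypothetical scaler statistics
normalized_zero = (0.0 - mean) / std

x = np.array([-1.0, 0.5, 3.0])          # raw target values
x_norm = (x - mean) / std

# Clipping at 0.0 would wrongly cut off everything below the mean;
# clipping at normalized_zero cuts off exactly the values below raw zero.
clipped = np.maximum(x_norm, normalized_zero)
back = clipped * std + mean             # de-normalize to check
```

After de-normalizing, the raw negative value lands exactly at zero while the non-negative values are untouched.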

v.1.8.0
New Features
------------
- Option to save all outputs from any model instead of just the target variable (`cfg.save_all_validation_output`)
- Several new forecasting models:
  - `HandoffForecastLSTM`: a forecasting model that uses a state-handoff to transition from a hindcast sequence model to a forecast sequence (LSTM) model.
  - `MultiheadForecastLSTM`: a forecasting model that runs a sequential (LSTM) model up to the forecast issue time and then directly predicts a sequence of forecast timesteps without using a recurrent rollout.
  - `SequentialForecastLSTM`: a forecasting model that uses a single sequential (LSTM) model that rolls out through both the hindcast and forecast sequences.
  - `StackedForecastLSTM`: a forecasting model that uses two stacked sequential (LSTM) models to handle hindcast vs. forecast.
- Option to add a timestep counter for forecasting (`cfg.timestep_counter`)

To use the new forecasting models, there are several new config options, most notably `cfg.forecast_inputs` and `cfg.hindcast_inputs` to specify which inputs are used for forecasting vs. for hindcasting. See the [documentation](https://neuralhydrology.readthedocs.io/en/latest/) for more details.
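A hypothetical config excerpt is shown below. Only `forecast_inputs`, `hindcast_inputs`, `timestep_counter`, and `save_all_validation_output` are named in these release notes; the input variable names and the model identifier are placeholders, so check the documentation for the authoritative spelling of all options:

```yaml
# Placeholder values -- consult the documentation before copying.
model: handoffforecastlstm    # assumed identifier for HandoffForecastLSTM
hindcast_inputs:
  - precipitation_obs         # hypothetical variable names
  - temperature_obs
forecast_inputs:
  - precipitation_fcst
  - temperature_fcst
timestep_counter: True
save_all_validation_output: True
```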

- Enable non-verbose mode in trainer and tester (#124)
- Enable predictions in basins with no observations (#121)

Fixes
-------
- Error in FDC slope signatures formula (#125)

v.1.7.0
Fixes
-------
- Handling of weekly time resolutions in the basedataset, see https://github.com/neuralhydrology/neuralhydrology/issues/111
- Fix issue with UMAL during validation mode, see https://github.com/neuralhydrology/neuralhydrology/discussions/114

Note that with this release, the `umal_extend_batch` method is moved to `utils.samplingutils`, and `training.utils` and `training.umaltrainer` are removed. The UMAL functionality remains unchanged, though.

v.1.6.0
- Fix environment files
- Fix type annotation that caused https://github.com/neuralhydrology/neuralhydrology/issues/109
- Added options for weighted regularizations
- Add regularization loss terms to tensorboard

v.1.5.1
Fix
----
- Fix in basetrainer.py to resolve problems around finetuning, see #105

v.1.5.0
Fixes
- Updated documentation on Installation and data download (of CAMELS US).

Added
- Dataset class for the [Caravan](https://github.com/kratzert/Caravan) dataset

v.1.4.0
Updates/Fixes
- Adapted CAMELS-CL data loader to new data layout of "Enero 2022" dataset version.
- Resolved np.int deprecation warnings
- In the LamaH discharge loader, replace -999 values (invalid observation marker) with np.nan when loading data.
- Run scheduler now moves processed configs to newly created sub-directory, which makes it easier to continue the scheduler in case it fails at any point.
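The -999 replacement in the LamaH loader boils down to a one-liner; a minimal numpy sketch (the array contents and variable name are made up):

```python
import numpy as np

# -999 marks invalid observations in the LamaH discharge files;
# replace it with NaN so downstream code treats it as missing data
# instead of a (strongly negative) streamflow value.
qobs = np.array([12.3, -999.0, 7.1, -999.0])
qobs[qobs == -999.0] = np.nan
```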

Additions
- Added new metric that checks if a model misses or hits a flood peak.

Internal, but worth mentioning
- Dataset objects now contain the date that corresponds to the target(s).

First time contributors
- SebaDro with #95
- shamshaw with #97

v.1.3.0
New Features:
- Added new dataset classes for the CAMELS-BR, CAMELS-AUS, and LamaH-CE datasets.
- Added the AR-LSTM (as proposed in this [paper](https://hess.copernicus.org/preprints/hess-2021-515/)). This model can be used by setting the config argument `model: arlstm`. Please refer to the [documentation](https://neuralhydrology.readthedocs.io/en/latest/usage/models.html#model-classes) about specific requirements of this model.
- Added an option `random_holdout_from_dynamic_features` to the config that applies dropout on the time series features by sampling two Bernoulli processes with different rate parameters. This is used for training the `arlstm` to simulate missing (autoregressive) inputs already during training, which are then replaced by the model outputs of previous timesteps.
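One plausible reading of the two-Bernoulli scheme, sketched below, is that one process decides whether a gap starts at a timestep and the other whether an ongoing gap continues, yielding contiguous missing stretches. This interpretation and all names are assumptions, not the library's actual code:

```python
import numpy as np

def holdout_mask(seq_len, n_features, p_start, p_cont, rng):
    """Illustrative two-Bernoulli holdout (hypothetical scheme):
    a Bernoulli(p_start) draw starts a gap in a feature, and a
    Bernoulli(p_cont) draw continues an ongoing gap, producing
    contiguous missing stretches in the time series features."""
    mask = np.ones((seq_len, n_features), dtype=bool)
    missing = np.zeros(n_features, dtype=bool)
    for t in range(seq_len):
        start = rng.random(n_features) < p_start  # open a new gap?
        cont = rng.random(n_features) < p_cont    # keep the gap going?
        missing = np.where(missing, cont, start)
        mask[t] = ~missing
    return mask

rng = np.random.default_rng(0)
mask = holdout_mask(seq_len=100, n_features=3, p_start=0.05, p_cont=0.8, rng=rng)
# During arlstm training, masked entries would be replaced by the model's
# own predictions from previous timesteps.
```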

Fixes:
- A bug when writing the metrics.csv file during validation if the first basin (in the basin list) has no validation data.

v.1.2.5
Bugfixes
- Fixed `InputLayer`, which misbehaved when using single-frequency data that was resampled from a different temporal resolution (from raw data).
- Fixed a severe bug when using `use_basin_id_encoding: True`. Essentially, during training, all samples in a batch had the one-hot vector of the last sample in the batch.

Updates
- Removed `prediction_interval_plots` as it was a duplicate of `percentile_plot`. Also updated the docstring of `percentile_plot`.

v.1.2.4
Fixes
-------
- Beta-KGE was not in the list of available metrics returned by `neuralhydrology.evaluation.metrics.get_available_metrics()`. Likewise, it was not computed in `neuralhydrology.evaluation.metrics.calculate_all_metrics()`, and `neuralhydrology.evaluation.metrics.calculate_metrics()` raised a ValueError because of an unknown metric.


Other
-------
- Added CITATION.cff for the JOSS paper

v.1.2.3
Updates
- removed broken cuda9.2 environment and added a new cuda11.3 environment
- updated docs of GenericDataset and description in Tutorial 3

v.1.2.2
New
- NeuralHydrology can now be installed from PyPI (we added automatic upload to PyPI for every tagged release). #66

Fixes
- Some typo and type annotation fixes in the Transformer model

Updates
- Updated installation guide to include info about PyPI installation options
- Added missing config argument `allow_subsequent_nan_losses` to config docs

v.1.2.1
Updates and Additions
- Added a tutorial that explains the download process for the data required to run our tutorials locally. #68
- Updated tutorials to better highlight the data requirements. #68
- Updated tutorials to be easier to run in CPU environments without changes to the config/code. #68
- Added templates for opening an issue
- Fixed typos

Fixes
- Fixed a problem with loading hourly streamflow data from csv (CAMELS US dataset) #67
- Fixed problems with logging the git hash that made the commit hash appear badly formatted


v.1.2.0
New Feature
- If you use NeuralHydrology as a git repository and have uncommitted changes in your local copy, we added an option to include the `git diff` in the run directory. Set `save_git_diff` to `True` to make use of this feature.

Fixes
- fixed a bug from v.1.1.0 that caused problems when evaluating basins with all-NaN targets

Other
- Added guide for contributing to NeuralHydrology
- Updated the tutorials to include links to the underlying Jupyter notebooks and removed full paths from some notebooks.
- Updated the quickstart guide to include a better guide for setting up NeuralHydrology

v.1.1.0
New Features

- Besides logging validation metrics, the validation loss is now also logged to TensorBoard (always), computed as the average loss across all basins, weighted by the number of batches.

Fixes
- Spelling mistakes

v.1.0.0
A long time has passed since we started this library for internal research use, and also since its publication as an open-source library. After an extended beta period, we are happy to finally bump NeuralHydrology to its first major release version.

New Features
- We removed the dependency on the `pickle` library for storing the scaler and basin-id-to-integer dictionaries. `pickle` has caused a lot of headaches in the past, because it can become troublesome to run old models after upgrading Python libraries. From now on, the scaler (used for normalization) and the basin-id-to-integer dictionaries (in case of basin one-hot encoding) are stored in YAML files. However, the old format is still supported, so old runs can still be used.

Additionally
- Updated the descriptions of the config arguments `metrics` and `clip_targets_to_zero`

1.0.0beta4

In some of the tutorials, we used the updated forcing data of the CAMELS US data set but forgot to link to the corresponding downloads. This has been fixed, and information on the data requirements was added. We also removed the requirement for the updated forcing data in the very first tutorial so that it can be run with only the original CAMELS US data set.

1.0.0beta3

- Small performance improvements while loading CAMELS US forcing data https://github.com/neuralhydrology/neuralhydrology/pull/38
- Fixed the requirement for local installation of Git in the `Logger` class https://github.com/neuralhydrology/neuralhydrology/pull/41

1.0.0beta2

- MTS-LSTM failed with uncertainty heads, due to a missing class attribute.
- Ensemble script failed due to issues with the new datetime logic (#30) and type incompatibilities.

1.0.0beta1

With this update we transition into the beta phase of v.1.0.0, as it includes the last feature we had planned for the first major release: full multi-target, multi-frequency (and the combination of both) regression and uncertainty heads. We will most likely add some tutorials showcasing the new features. We would be grateful if everyone using the library reported any bugs they encounter, so we can get rid of them.

New Features/Additions
- Support for multi-frequency and multi-target (and the combination of both) uncertainty models.
- Added tests for frequency handling

Fixes
- Issues around various frequencies and combinations of those. Closes #30
