PyDGN

Latest version: v1.5.6


1.5.6

Added

- You can now store metric trends across epochs using `Plotter`. Just pass the argument `store_on_disk=True` in the configuration file of the experiment.
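
As a sketch, the option might be enabled in the experiment configuration like this; only `store_on_disk: True` comes from the note above, while the surrounding section names and class path are illustrative assumptions, not confirmed by the release notes:

```yaml
# Hypothetical excerpt of an experiment configuration file.
# Only `store_on_disk: True` is taken from the changelog entry above;
# the surrounding keys are illustrative assumptions.
plotter:
  - class_name: pydgn.training.callback.plotter.Plotter
    args:
      store_on_disk: True  # persist metric trends across epochs to disk
```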

1.5.5

Fixed

- TOML project file to comply with the latest releases of the `macos` and `coverage` packages
- `pydgn-train` and `pydgn-dataset` entry points not being found in version `1.5.4`

1.5.4

Added

- Utilities to load model, dataset, data providers, and checkpoints from the experiments folder
- Tutorials in the README and documentation on how to use them.

1.5.3

Fixed

- Training loss and score not showing on Tensorboard

1.5.2

Added

- Implemented a convenient tqdm progress bar in debug mode to track speed of training and evaluation.
- Created a new splitter class, `SameInnerSplitSplitter`, which averages the validation scores of the same model selection configuration over multiple runs without changing the inner data split. It cannot be combined with a double/nested CV approach; for that, use the base `Splitter` class to generate different inner data splits.
- Trying out a helper mechanism that, when you are not in debug mode, prints information to the terminal about any experiment that broke.
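
As a hedged sketch, the `SameInnerSplitSplitter` mentioned above could be selected when generating the data splits; the exact module path, key names, and argument values below are assumptions for illustration, not taken from the release notes:

```yaml
# Hypothetical excerpt of a data-splitting configuration.
# Only the class name `SameInnerSplitSplitter` comes from the entry above;
# the module path and all keys/values are illustrative assumptions.
splitter:
  class_name: pydgn.data.splitter.SameInnerSplitSplitter
  args:
    n_outer_folds: 10
    n_inner_folds: 1   # the inner split stays fixed across repeated runs
    seed: 42
```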

1.5.1

Changed - PLEASE READ

From now on, the default behavior of the training engine is to display **training** loss/scores computed **during the epoch**. Previously, at the end of each epoch we always re-evaluated the trained model on the entire training set, but this is often uninteresting because early stopping typically acts on the validation set. The old behavior can be re-enabled in the config file:

    engine:
      - class_name: pydgn.training.engine.TrainingEngine
        args:
          # Whether to re-compute the epoch loss/scores after training, or to
          # reuse those obtained while the model is trained on mini-batches.
          # True re-evaluates on the training set after each epoch; it usually
          # does not change the loss/score values much and adds overhead.
          eval_training: True

The default value will be `False` from now on to save compute time.

