Torchbearer

Latest version: v0.5.5


0.3.2

Added
Changed
Deprecated
Removed
Fixed
- Fixed a bug where ``for_steps`` would sometimes not work as expected if called in the wrong order
- Fixed a bug where torchbearer installed via pip would crash on import

0.3.1

Added
- Added cyclic learning rate finder
- Added on_init callback hook to run at the end of trial init
- Added callbacks for weight initialisation in ``torchbearer.callbacks.init``
- Added ``with_closure`` trial method that allows running of custom closures
- Added ``base_closure`` function to bases that allows creation of standard training loop closures
- Added ``ImagingCallback`` class for callbacks which produce images that can be sent to tensorboard, visdom or a file
- Added ``CachingImagingCallback`` and ``MakeGrid`` callback to make a grid of images
- Added the option to give the ``only_if`` callback decorator a function of self and state rather than just state
- Added Layer-sequential unit-variance (LSUV) initialization
- Added ClassAppearanceModel callback and example page for visualising CNNs
- Added on_checkpoint callback decorator
- Added support for PyTorch 1.1.0
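The extended ``only_if`` behaviour (a predicate of self and state, not just state) can be pictured with a minimal, self-contained sketch. This is not torchbearer's implementation; the class and method names below are purely illustrative:

```python
import inspect


def only_if(predicate):
    """Run the decorated callback method only when the predicate holds.

    The predicate may take either (state) or (self, state), as in the
    changelog entry above.
    """
    def decorator(method):
        def wrapper(self, state):
            # Dispatch on the predicate's arity: one argument means a
            # function of state, two means a function of (self, state).
            n_params = len(inspect.signature(predicate).parameters)
            condition = predicate(self, state) if n_params == 2 else predicate(state)
            if condition:
                return method(self, state)
        return wrapper
    return decorator


class ExampleCallback:
    def __init__(self, interval):
        self.interval = interval

    @only_if(lambda state: state['epoch'] == 0)
    def on_start(self, state):
        return 'started'

    @only_if(lambda self, state: state['epoch'] % self.interval == 0)
    def on_end_epoch(self, state):
        return 'epoch {}'.format(state['epoch'])
```

The two-argument form lets the condition consult instance attributes (here, ``interval``) as well as the running state.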
Changed
- ``no_grad`` and ``enable_grad`` decorators are now also usable as context managers
Deprecated
Removed
- Removed the fluent decorator; just use ``return self``
- Removed the install dependency on `torchvision`; it is still required for some functionality
Fixed
- Fixed bug where replay errored when train or val steps were None
- Fixed a bug where the mock optimiser wouldn't call its closure
- Fixed a bug where the notebook check raised ModuleNotFoundError when IPython not installed
- Fixed a memory leak with metrics that causes issues with very long epochs
- Fixed a bug with the once and once_per_epoch decorators
- Fixed a bug where the test criterion wouldn't accept a function of state
- Fixed a bug where type inference would not work correctly when chaining ``Trial`` methods
- Fixed a bug where checkpointers would error when they couldn't find the old checkpoint to overwrite
- Fixed a bug where the 'test' label would sometimes not populate correctly in the default accuracy metric

0.3.0

Added
- Added torchbearer.variational, a sub-package for implementations of state-of-the-art variational auto-encoders
- Added SimpleUniform and SimpleExponential distributions
- Added a decorator which can be used to cite a research article as part of a doc string
- Added an optional dimension argument to the mean, std and running_mean metric aggregators
- Added a var metric and decorator which can be used to calculate the variance of a metric
- Added an unbiased flag to the std and var metrics to optionally not apply Bessel's correction (consistent with torch.std / torch.var)
- Added support for rounding 1D lists to the Tqdm callback
- Added SimpleWeibull distribution
- Added support for Python 2.7
- Added ``SimpleWeibullSimpleWeibullKL``
- Added ``SimpleExponentialSimpleExponentialKL``
- Added the option for checkpointers to save model parameters only
- Added documentation about serialization.
- Added support for indefinite data loading: iterators can now run until complete, independent of epochs, or be refreshed mid-epoch once exhausted
- Added support for batch intervals in interval checkpointer
- Added line magic ``%torchbearer notebook``
- Added 'accuracy' variants of 'acc' default metrics
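The indefinite data loading entry above describes iterators that can be refreshed when exhausted rather than ending the epoch. A minimal, framework-free sketch of that pattern (not torchbearer code; names are illustrative):

```python
class RefreshingIterator:
    """Wraps a factory of iterables and transparently restarts iteration
    when the underlying iterator is exhausted, so consumption can continue
    independent of epoch boundaries."""

    def __init__(self, iterable_factory):
        # e.g. iterable_factory = lambda: DataLoader(dataset, shuffle=True)
        self._factory = iterable_factory
        self._iter = iter(self._factory())

    def __iter__(self):
        return self

    def __next__(self):
        try:
            return next(self._iter)
        except StopIteration:
            # Refresh: build a fresh pass over the data and keep going
            self._iter = iter(self._factory())
            return next(self._iter)
```

Because the factory is re-invoked on each refresh, a shuffling data loader would yield a newly shuffled pass every time it wraps around.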
Changed
- Changed the default behaviour of the std metric to compute the sample std, in line with torch.std
- Tqdm precision argument now rounds to decimal places rather than significant figures
- Trial now infers whether the model takes an argument called 'state'
- Torchbearer now detects when running inside a notebook and uses the appropriate tqdm module unless one is set explicitly
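The std changes above (sample std by default, with an unbiased flag to disable Bessel's correction) mirror the semantics of ``torch.std``. A pure-Python sketch of the distinction, under the assumption that the flag simply switches the divisor between n-1 and n:

```python
import math


def std(values, unbiased=True):
    """Standard deviation of a sequence of numbers.

    unbiased=True applies Bessel's correction (divide by n - 1), giving the
    sample std; unbiased=False divides by n, giving the population std.
    """
    n = len(values)
    mean = sum(values) / n
    divisor = n - 1 if unbiased else n
    return math.sqrt(sum((v - mean) ** 2 for v in values) / divisor)
```

For ``[1.0, 2.0, 3.0]`` the sample std is 1.0, while the population std is sqrt(2/3), which is why the two settings disagree on small batches.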
Deprecated
Removed
- Removed the old Model API (deprecated since version 0.2.0)
- Removed the 'pass_state' argument from Trial; this is now inferred
- Removed the 'std' decorator from the default metrics
Fixed
- Fixed a bug in the weight decay callback which would result in potentially negative decay (now just uses torch.norm)
- Fixed a bug in the cite decorator causing the citation to not show up correctly
- Fixed a memory leak in the mse primitive metric

0.2.6.1

Added
Changed
Deprecated
Removed
Fixed
- Fixed a bug where predictions would be duplicated when predict was called more than once

0.2.6

Added
Changed
- Y_PRED, Y_TRUE and X can now equivalently be accessed as PREDICTION, TARGET and INPUT respectively
Deprecated
Removed
Fixed
- Fixed a bug where the LiveLossPlot callback would trigger an error if run and evaluate were called separately
- Fixed a bug where state key errors would report to the wrong stack level
- Fixed a bug where a state key error would be wrongly raised in some cases

0.2.5

Added
- Added flag to replay to replay only a single batch per epoch
- Added support for PyTorch 1.0.0 and Python 3.7
- MetricTree can now unpack dictionaries from the root. This is useful for taking the mean of a metric, but should be used with caution since only the first value in the dict is extracted and the rest are ignored.
- Added a callback for the livelossplot visualisation tool for notebooks
Changed
- All error / accuracy metrics can now optionally take state keys for predictions and targets as arguments
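The change above, metrics parameterised by the state keys they read, can be pictured with a minimal framework-free sketch (not torchbearer's implementation; the key names and function name are illustrative):

```python
def categorical_accuracy(state, pred_key='y_pred', target_key='y_true'):
    """Fraction of predictions matching targets, reading both from a state
    dict under configurable keys (the default keys are an assumption)."""
    preds = state[pred_key]
    targets = state[target_key]
    correct = sum(1 for p, t in zip(preds, targets) if p == t)
    return correct / len(targets)
```

Passing different keys lets the same metric score, say, a separate validation head without touching the default prediction slot:

```python
state = {'val_pred': [0, 1, 1], 'val_true': [0, 1, 0]}
categorical_accuracy(state, pred_key='val_pred', target_key='val_true')
```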
Deprecated
Removed
Fixed
- Fixed a bug with the EpochLambda metric which required y_true / y_pred to have specific forms
