Torchbearer



0.2.4

Added
- Added metric functionality to state keys so that they can be used as metrics if desired (see the sketch below)
- Added customizable precision to the printer callbacks
- Added a threshold to binary accuracy; it now correctly handles any values in [0, 1]
Changed
- Changed the default printer precision to 4 s.f.
- Tqdm on_epoch now shows metrics immediately when resuming
Deprecated
Removed
Fixed
- Fixed a bug which would incorrectly trigger version warnings when loading in models
- Fixed bugs where the Trial would not fail gracefully if required objects were not in state
- Fixed a bug where the None criterion didn't work with the add_to_loss callback
- Fixed a bug where tqdm on_epoch always started at 0
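
As a rough illustration of the 0.2.4 additions, the sketch below stores a custom value in state and lists its key as a metric. It assumes a state-key helper such as ``torchbearer.state_key`` (the 0.2.0 notes further down mention ``torchbearer.state.get_state``) and a ``precision`` keyword on the Tqdm printer; both are assumptions rather than confirmed signatures.

```python
import torch
import torchbearer
from torchbearer import Trial
from torchbearer.callbacks import Tqdm, on_step_training

# Assumed helper for creating a unique StateKey (see the 0.2.0 notes further down).
GRAD_NORM = torchbearer.state_key('grad_norm')

@on_step_training
def log_grad_norm(state):
    # Store a custom value in state; because state keys now have metric
    # functionality (0.2.4), GRAD_NORM can also be listed in `metrics`.
    model = state[torchbearer.MODEL]
    norms = [p.grad.norm() for p in model.parameters() if p.grad is not None]
    state[GRAD_NORM] = float(sum(norms)) if norms else 0.0

model = torch.nn.Linear(10, 2)
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

trial = Trial(model, optimiser, torch.nn.CrossEntropyLoss(),
              metrics=['acc', 'loss', GRAD_NORM],    # state key used as a metric
              callbacks=[log_grad_norm,
                         Tqdm(precision=6)])         # assumed kwarg; default precision is 4 s.f.

X, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
trial.with_train_data(X, y, batch_size=16).run(epochs=1, verbose=0)  # verbose=0: only the explicit Tqdm prints
```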

0.2.3

Added
- Added string representation of Trial to give summary
- Added option to log Trial summary to TensorboardText
- Added a callback point ('on_checkpoint') which can be used for model checkpointing after the history is updated (see the sketch below)
Changed
- When resuming training, checkpointers no longer delete the state file the trial was loaded from
- Changed the metric eval to include a data_key which tells us what data we are evaluating on
Deprecated
Removed
Fixed
- Fixed a bug where callbacks weren't handled correctly in the predict and evaluate methods of Trial
- Fixed a bug where the history wasn't updated when new metrics were calculated with the evaluate method of Trial
- Fixed a bug where tensorboard writers couldn't be reused
- Fixed a bug where the none criterion didn't require gradient
- Fixed a bug where tqdm wouldn't get the correct iterator length when evaluating on a test generator
- Fixed a bug where evaluating before training tried to update history before it existed
- Fixed a bug where the metrics would output 'val_acc' even if evaluating on test or train data
- Fixed a bug where roc metric didn't detach y_pred before sending to numpy
- Fixed a bug where resuming from a checkpoint saved with one of the callbacks didn't populate the epoch number correctly
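
A hedged sketch of the new 'on_checkpoint' point and the Trial summary string; the decorator name ``on_checkpoint`` is assumed to mirror the callback point named above, and ``torchbearer.MODEL`` is the standard model state key.

```python
import torch
import torchbearer
from torchbearer import Trial
from torchbearer.callbacks import on_checkpoint  # decorator assumed to match the new callback point

@on_checkpoint
def save_latest(state):
    # Fires after the history is updated, so the file reflects the epoch that just finished.
    torch.save(state[torchbearer.MODEL].state_dict(), 'model.latest.pt')

model = torch.nn.Linear(4, 1)
trial = Trial(model, torch.optim.Adam(model.parameters()), torch.nn.MSELoss(),
              metrics=['loss'], callbacks=[save_latest])
print(trial)  # new in 0.2.3: the string representation gives a summary of the trial
```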

0.2.2

Added
- The default_for_key metric decorator can now be used to pass arguments to the init of the inner metric
- The default metric for the key 'top_10_acc' is now the TopKCategoricalAccuracy metric with k set to 10
- Added a global verbose flag for the trial which can be overridden by run, evaluate and predict
- Added an LR metric which retrieves the current learning rate from the optimizer, default for key 'lr' (see the sketch below)
Changed
Deprecated
Removed
Fixed
- Fixed a bug where the DefaultAccuracy metric would not put the inner metric in eval mode if the first call to reset was after the call to eval
- Fixed a bug where trying to load a state dict in a different session to where it was saved didn't work properly
- Fixed a bug where the empty criterion would trigger an error if no Y_TRUE was put in state
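
A sketch of how the new metric keys and the verbose flag fit together; the key strings come from the notes above, while the surrounding Trial setup is illustrative.

```python
import torch
from torchbearer import Trial

model = torch.nn.Linear(128, 100)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)

trial = Trial(model, optimiser, torch.nn.CrossEntropyLoss(),
              metrics=['acc', 'loss',
                       'top_10_acc',   # TopKCategoricalAccuracy with k=10 (new default key)
                       'lr'],          # current learning rate pulled from the optimiser
              verbose=1)               # trial-wide default verbosity

X, y = torch.randn(256, 128), torch.randint(0, 100, (256,))
trial.with_train_data(X, y, batch_size=32)
trial.run(epochs=2, verbose=2)         # run/evaluate/predict can override the trial-wide flag
```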

0.2.1

Added
- Evaluation and prediction can now be done on any data using the data_key keyword argument (see the sketch below)
- Text tensorboard/visdom logger that writes epoch/batch metrics to text
Changed
- TensorboardX, Numpy, Scikit-learn and Scipy are no longer dependencies and are only required when using the tensorboard callbacks or the roc metric
Deprecated
Removed
Fixed
- Fixed the Model class setting the generator incorrectly, which led to StopIteration errors
- Made argument ordering consistent in `Trial.with_generators` and `Trial.__init__`
- Added a state dict for the early stopping callback
- Fixed visdom parameters not getting set in some cases
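
A sketch of choosing the data to evaluate on via data_key; the key constants (``torchbearer.TRAIN_DATA``, ``torchbearer.VALIDATION_DATA``) and the with_x_data method names follow the usual torchbearer conventions and should be checked against your version.

```python
import torch
import torchbearer
from torchbearer import Trial

model = torch.nn.Linear(10, 2)
trial = Trial(model, torch.optim.SGD(model.parameters(), lr=0.1),
              torch.nn.CrossEntropyLoss(), metrics=['acc', 'loss'])

X, y = torch.randn(100, 10), torch.randint(0, 2, (100,))
Xv, yv = torch.randn(20, 10), torch.randint(0, 2, (20,))
trial.with_train_data(X, y, batch_size=10).with_val_data(Xv, yv, batch_size=10)

trial.run(epochs=1)
val_metrics = trial.evaluate(data_key=torchbearer.VALIDATION_DATA)   # explicit choice of data
train_metrics = trial.evaluate(data_key=torchbearer.TRAIN_DATA)      # evaluate on the training set instead
```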

0.2.0

Added
- Added the ability to pass custom arguments to the tqdm callback
- Added an ignore_index flag to the categorical accuracy metric, similar to nn.CrossEntropyLoss. Usage: ``metrics=[CategoricalAccuracyFactory(ignore_index=0)]``
- Added TopKCategoricalAccuracy metric (default for key: top_5_acc)
- Added BinaryAccuracy metric (default for key: binary_acc)
- Added MeanSquaredError metric (default for key: mse)
- Added DefaultAccuracy metric (use with 'acc' or 'accuracy') - infers accuracy from the criterion
- New Trial API ``torchbearer.Trial`` to replace the Model API. The Trial API is more atomic and uses the fluent pattern to allow chaining of methods (see the sketch after this list).
- ``torchbearer.Trial`` has with_x_generator and with_x_data methods to add training/validation/testing generators to the trial. There is a with_generators method to allow passing of all generators in one call.
- ``torchbearer.Trial`` has for_x_steps and for_steps methods to allow running of trials without explicit generators or data tensors
- ``torchbearer.Trial`` keeps a history of run calls which tracks the number of epochs run and the final metrics at each epoch. This allows seamless resuming of trial running.
- ``torchbearer.Trial.state_dict`` now returns the trial history and callback list state allowing for full resuming of trials
- ``torchbearer.Trial`` has a replay method that can replay training (with callbacks and display) from the history. This is useful when loading trials from state.
- The backward call can now be passed args by setting ``state[torchbearer.BACKWARD_ARGS]``
- ``torchbearer.Trial`` implements the forward pass, loss calculation and backward call as an optimizer closure
- Metrics are now explicitly calculated with no gradient
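
Taken together, the new API reads roughly like the sketch below; method names follow the with_x_generator / for_x_steps pattern described above, and the details are illustrative rather than exact.

```python
import torch
from torchbearer import Trial
from torch.utils.data import DataLoader, TensorDataset

train_loader = DataLoader(TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,))), batch_size=10)
val_loader = DataLoader(TensorDataset(torch.randn(20, 10), torch.randint(0, 2, (20,))), batch_size=10)

model = torch.nn.Linear(10, 2)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)

# Fluent construction: each with_* / for_* call returns the trial.
trial = (Trial(model, optimiser, torch.nn.CrossEntropyLoss(), metrics=['acc', 'loss'])
         .with_generators(train_loader, val_loader)   # or .with_train_generator(...) / .with_val_generator(...)
         .for_val_steps(2))                           # limit validation to two steps per epoch

trial.run(epochs=10)                                  # history records the epochs run and their final metrics

torch.save(trial.state_dict(), 'trial.pt')            # includes the history and callback state

trial.load_state_dict(torch.load('trial.pt'))
trial.replay()                                        # re-display training from the stored history
trial.run(epochs=20)                                  # resumes from epoch 10 thanks to the stored history
```
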
Changed
- Callback decorators can now be chained to allow construction with multiple methods filled
- Callbacks can now implement ``state_dict`` and ``load_state_dict`` to allow callbacks to resume with state (see the sketch at the end of this entry)
- The state dictionary now accepts StateKey objects, which are unique and generated through ``torchbearer.state.get_state``
- The state dictionary now warns when accessed with strings, as string keys allow for collisions
- Checkpointer callbacks will now resume from a state dict when resume=True in Trial
Deprecated
- ``torchbearer.Model`` has been deprecated in favour of the new ``torchbearer.Trial`` API
Removed
- Removed the MetricFactory class. Decorators still work in the same way but the Factory is no longer needed.
Fixed
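
For the callback-side changes, a hedged sketch: one function bound to two callback points by chaining decorators, and a class-based callback whose internal state survives checkpointing through ``state_dict`` / ``load_state_dict`` (the 'loss' metric key in the comment is an assumption).

```python
import torchbearer
from torchbearer.callbacks import Callback, on_start_epoch, on_end_epoch

# Chained decorators fill both methods of a single callback with the same function.
@on_start_epoch
@on_end_epoch
def announce(state):
    print('epoch boundary:', state[torchbearer.EPOCH])

class BestLossTracker(Callback):
    """Tracks the best epoch loss; implementing state_dict/load_state_dict lets the
    value be saved and restored along with the rest of the trial."""

    def __init__(self):
        super().__init__()
        self.best = float('inf')

    def on_end_epoch(self, state):
        loss = state[torchbearer.METRICS].get('loss')   # 'loss' key assumed from the default loss metric
        if loss is not None:
            self.best = min(self.best, loss)

    def state_dict(self):
        return {'best': self.best}

    def load_state_dict(self, state_dict):
        self.best = state_dict['best']
        return self
```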

0.1.7

Added
- Added visdom logging support to tensorboard callbacks
- Added option to choose tqdm module (tqdm, tqdm_notebook, ...) to Tqdm callback
- Added some new decorators to simplify custom callbacks that must only run under certain conditions (or even just once); see the sketch at the end of this entry
Changed
- Instantiating Model now triggers a warning ahead of the new Trial API arriving in the next version
- The TensorboardX dependency now requires version 1.4
Deprecated
Removed
Fixed
- Mean and standard deviation calculations now work correctly for network outputs with many dimensions
- The callback list is no longer shared between fit calls; a new copy is now made for each fit
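
A hedged sketch of the conditional decorators; the names ``only_if`` and ``once_per_epoch`` are taken from later torchbearer releases and are assumptions for 0.1.7.

```python
import torchbearer
from torchbearer.callbacks import on_step_training, only_if, once_per_epoch

@on_step_training
@only_if(lambda state: state[torchbearer.BATCH] % 100 == 0)
def every_100_batches(state):
    # Runs only when the predicate on state is true.
    print('batch', state[torchbearer.BATCH], state[torchbearer.METRICS])

@on_step_training
@once_per_epoch
def first_batch_of_epoch(state):
    # Runs a single time per epoch.
    print('starting epoch', state[torchbearer.EPOCH])
```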
