- Added option to accumulate gradients over multiple batches
- Learning rate scheduler now called after every epoch
- Added scheduling functions for learning rate
- Added TOML and YAML writers
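The idea behind accumulating gradients over multiple batches is that summing (or averaging) the gradients of several small micro-batches reproduces the gradient of one large batch, so the optimizer can be stepped less often while keeping the effective batch size large. The following is a minimal pure-Python sketch of that equivalence, not the trainer's actual implementation; the 1-D linear model, `grad_mse`, and the data are illustrative assumptions:

```python
# Sketch: averaging per-micro-batch gradients of a mean-squared-error
# loss on a 1-D linear model y = w * x equals the gradient computed
# over the combined batch (for equal-sized micro-batches).

def grad_mse(w, xs, ys):
    """Gradient of mean((w*x - y)^2) with respect to w."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

w = 0.5
xs, ys = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]

# One large batch of four samples:
full = grad_mse(w, xs, ys)

# Two micro-batches of two samples each, gradients averaged:
g1 = grad_mse(w, xs[:2], ys[:2])
g2 = grad_mse(w, xs[2:], ys[2:])
accumulated = (g1 + g2) / 2

assert abs(full - accumulated) < 1e-12
```

In a training loop, this corresponds to calling `backward()` on each micro-batch (which adds into the existing gradient buffers) and only stepping and zeroing the optimizer every few batches.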
0.1.6
- Minor change to Pipfile
- Minor change to a unit test setup
- Added loss reporting to progress bar in training loop
- Improved use of strip in path normalization
0.1.5
- No longer delete checkpoint file on instantiation in OnDisk
- Use `set_to_none=True` in `optimizer.zero_grad`
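With `set_to_none=True`, PyTorch's `optimizer.zero_grad` replaces each parameter's gradient tensor with `None` instead of zeroing it in place, which skips a memset and lets the next backward pass allocate fresh buffers. A minimal pure-Python sketch of that behavioral difference, using a stand-in `Param` class rather than the real torch parameter:

```python
# Sketch of zero_grad(set_to_none=...) semantics. Param is a mock
# stand-in for a parameter with a gradient buffer, not a torch class.

class Param:
    def __init__(self):
        self.grad = [0.0, 0.0]

def zero_grad(params, set_to_none=False):
    for p in params:
        if set_to_none:
            p.grad = None          # drop the buffer; next backward recreates it
        else:
            for i in range(len(p.grad)):
                p.grad[i] = 0.0    # in-place zeroing keeps the buffer alive

params = [Param(), Param()]
zero_grad(params, set_to_none=True)
assert all(p.grad is None for p in params)
```

A downstream consequence is that code inspecting gradients must handle `None` as well as an all-zero tensor.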
0.1.4
- Added field type to resolve paths to jsonobject fields
- Added field type to lowercase and strip strings
- Refactored PyTorch model trainer
- Refactored PyTorch data base classes
- Adapted callbacks accordingly
0.1.3
- Make progress bar disappear after each epoch
- Fixed PyTorch model loader