## Core
Various improvements in the core part of the library:
- Add `epoch_bound` parameter to `RunningAverage` (488)
- Bug fixes in `ConfusionMatrix`, new implementation (572) - **BC breaking**
- Added `event_to_attr` in `register_events` (523)
- Added accumulative single-variable metrics (524)
- `should_terminate` is reset between runs (525)
- `to_onehot` returns tensor with uint8 dtype (571) - **may be BC breaking**
- Removable handle returned from `Engine.add_event_handler()` to enable single-shot events (588) - see the sketch after this list
- _New documentation style_ 🎉
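The removable handle (588) and the new `epoch_bound` flag (488) listed above can be combined roughly as in the minimal sketch below; the toy update function and dummy data are illustrative only and not taken from the release:

```python
from ignite.engine import Engine, Events
from ignite.metrics import RunningAverage

# Toy update function: pretend each batch is already a loss value.
def train_step(engine, batch):
    return batch

trainer = Engine(train_step)

# epoch_bound=False keeps the running average across epoch boundaries.
RunningAverage(output_transform=lambda x: x, epoch_bound=False).attach(trainer, "running_loss")

def log_first_iteration(engine):
    print(f"first output: {engine.state.output:.4f}, "
          f"running loss: {engine.state.metrics['running_loss']:.4f}")
    # Detach via the handle returned by add_event_handler -> single-shot handler.
    handle.remove()

handle = trainer.add_event_handler(Events.ITERATION_COMPLETED, log_first_iteration)

trainer.run([0.5, 0.4, 0.3], max_epochs=2)
```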
## Distributed
We removed the MNIST distributed example as it was misleading, and ~~provided the [distrib](https://github.com/pytorch/ignite/tree/distrib) branch~~ (XX/YY/2020: `distrib` branch merged to master) to adapt metrics for distributed computation. The code works and is under testing. Please try it in your use case and give us feedback.
## Now in Contributions module
- Added MLflow logger (558) - see the sketch after this list
- Added R-Squared metric in the regression metrics module (496)
- Added `tag` field to `OptimizerParamsHandler` (502)
- Improved `ProgressBar` with `TerminateOnNan` (506)
- Support for layer freezing with TensorBoard integration (515)
- Improved `OutputHandler` API (531)
- Improved `create_lr_scheduler_with_warmup` (556)
- Added `"all"` option to `metric_names` in contrib loggers (565)
- Added GPU usage info as metric (569)
- Other bug fixes
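As a rough illustration of a few of these additions, the sketch below wires the MLflow logger (558), the `"all"` option for `metric_names` (565) and the GPU usage metric (569) into a toy engine. It assumes `mlflow` and `pynvml` are installed and a GPU is available; it is a hedged example, not code from the release:

```python
from ignite.engine import Engine, Events
from ignite.contrib.handlers.mlflow_logger import MLflowLogger, OutputHandler
from ignite.contrib.metrics import GpuInfo

# Toy update function: pretend each batch is already a loss value.
trainer = Engine(lambda engine, batch: batch)

# Log GPU memory/utilisation as engine metrics (requires pynvml and a GPU).
GpuInfo().attach(trainer, name="gpu")

# Send every available metric to MLflow after each iteration.
mlflow_logger = MLflowLogger()
mlflow_logger.attach(
    trainer,
    log_handler=OutputHandler(tag="training", metric_names="all"),
    event_name=Events.ITERATION_COMPLETED,
)

trainer.run([0.5, 0.4, 0.3], max_epochs=2)
```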
## Notebook examples
- Added Cycle-GAN notebook (500)
- Finetune EfficientNet-B0 on CIFAR100 (544)
- Added Fashion MNIST jupyter notebook (549)
## Updated nightly builds
From pip:

```bash
pip install --pre pytorch-ignite
```

From conda (note that this installs the PyTorch nightly release as a dependency instead of the stable version):

```bash
conda install ignite -c pytorch-nightly
```
---
## Acknowledgments
🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):
ANUBHAVNATANI, Bibonaut, Evpok, Hiroshiba, JeroenDelcour, Mxbonn, anmolsjoshi, asford, bosr, johnstill, marrrcin, vfdev-5, willfrey