PyTorch-Ignite

Latest version: v0.5.1


0.4rc.0.post1

0.3.0

Core
- Added State repr and input batch as engine.state.batch (641)
- Adapted core metrics to be used in distributed configuration (635)
- Added fbeta metric as core metric (653)
- Added event filtering feature (e.g. every/once/event filter logic) (656) (see the sketch after this list)
- **BC breaking change**: Refactor ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (673)
- Added option `n_saved=None` to store all checkpoints (703)
- Improved accumulation metrics (681)
- Added early stopping `min_delta` option (685)
- Dropped Python 2.7 support (699)
- Added feature: Metric can accept a dictionary (689)
- Added Dice Coefficient metric (680)
- Added helper method to simplify the setup of class loggers (712)
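
A minimal sketch of the new filtered-event syntax from (656); the handler bodies and the `every`/`once` values are illustrative:

```python
from ignite.engine import Engine, Events

def train_step(engine, batch):
    # placeholder training step; a real step would run forward/backward passes
    return 0.0

trainer = Engine(train_step)

# run a handler every 100 iterations using the filtered-event syntax
@trainer.on(Events.ITERATION_COMPLETED(every=100))
def log_every_100_iterations(engine):
    print(f"iteration {engine.state.iteration}")

# run a handler exactly once, at iteration 500
@trainer.on(Events.ITERATION_COMPLETED(once=500))
def log_once(engine):
    print("reached iteration 500")
```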

Engine refactoring (BC breaking change)

Finally solved issue 62: training can now be resumed from a given epoch or iteration (see the sketch after the list below).

- Engine refactoring + features (640)
  - engine checkpointing
  - variable epoch length defined by `epoch_length`
  - two additional events: `GET_BATCH_STARTED` and `GET_BATCH_COMPLETED`
  - [cifar10 example](https://github.com/pytorch/ignite/tree/v0.3.0/examples/contrib/cifar10#check-resume-training) with save/resume in a distributed configuration
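
A rough sketch of save/resume built on the engine's new `state_dict`/`load_state_dict` from (640); the data, file path and epoch settings are illustrative:

```python
import torch
from ignite.engine import Engine

def train_step(engine, batch):
    # placeholder training step
    return 0.0

trainer = Engine(train_step)
data = range(100)

# run with an explicit epoch length (the new `epoch_length` argument)
trainer.run(data, max_epochs=5, epoch_length=100)

# save the engine state (epoch/iteration counters, max_epochs, ...)
torch.save(trainer.state_dict(), "engine_checkpoint.pt")  # illustrative path

# later: restore the counters and continue the run from where it stopped
resumed_trainer = Engine(train_step)
resumed_trainer.load_state_dict(torch.load("engine_checkpoint.pt"))
resumed_trainer.run(data)
```

The linked cifar10 example follows the same idea, checkpointing the trainer together with the model and optimizer.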

Contrib
- Improved `create_lr_scheduler_with_warmup` (646) (see the sketch after this list)
- Added helper method to plot param scheduler values with matplotlib (650)
- **BC breaking change**: support for multiple optimizer param groups (690)
- Added state_dict/load_state_dict (690)
- **BC breaking change**: Let the user specify tqdm parameters for log_message (695)
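
A sketch of the warmup helper; the scheduler choice, learning-rate values and durations are illustrative, and the keyword names follow the contrib API at the time of this release:

```python
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR

from ignite.engine import Engine, Events
from ignite.contrib.handlers import create_lr_scheduler_with_warmup

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
torch_scheduler = CosineAnnealingLR(optimizer, T_max=1000)

# wrap a regular PyTorch scheduler with a linear warmup phase
scheduler = create_lr_scheduler_with_warmup(
    torch_scheduler,
    warmup_start_value=0.0,
    warmup_end_value=0.1,
    warmup_duration=100,
)

trainer = Engine(lambda engine, batch: None)  # placeholder process function
# the combined scheduler is attached like any other event handler
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
```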


Examples
- Added an example of hyperparameters tuning with Ax on CIFAR10 (652)
- Added CIFAR10 distributed example

Reproducible trainings as "References"

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

- [ImageNet](https://github.com/pytorch/ignite/blob/master/examples/references/classification/imagenet)
- [Pascal VOC2012](https://github.com/pytorch/ignite/blob/master/examples/references/segmentation/pascal_voc2012)

Features:

- Distributed training with mixed precision by nvidia/apex
- Experiments tracking with MLflow or Polyaxon


---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

anubhavashok, kagrze, maxfrei750, vfdev-5

0.2.1

Core

Various improvements in the core part of the library:
- Add `epoch_bound` parameter to `RunningAverage` (488)
- Bug fixes with Confusion matrix, new implementation (572) - **BC breaking**
- Added `event_to_attr` in register_events (523)
- Added accumulative single variable metrics (524)
- `should_terminate` is reset between runs (525)
- `to_onehot` returns tensor with uint8 dtype (571) - **may be BC breaking**
- Removable handle returned from `Engine.add_event_handler()` to enable single-shot events (588) (see the sketch after this list)

- _New documentation style_ 🎉
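
A small sketch of the removable handle used to get single-shot behaviour; the handler body is illustrative:

```python
from ignite.engine import Engine, Events

trainer = Engine(lambda engine, batch: None)  # placeholder process function

def run_only_once(engine):
    print("first epoch completed")
    # detach the handler after its first call, turning it into a single-shot event
    handle.remove()

# add_event_handler() now returns a removable handle
handle = trainer.add_event_handler(Events.EPOCH_COMPLETED, run_only_once)
```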

Distributed
We removed the MNIST distributed example as it was misleading and ~~provided the [distrib](https://github.com/pytorch/ignite/tree/distrib) branch~~ (XX/YY/2020: the `distrib` branch was merged to master) to adapt metrics for distributed computation. The code is working and under testing. Please try it in your use case and leave us feedback.

Now in Contributions module
- Added mlflow logger (558) (see the sketch after this list)
- R-Squared Metric in regression metrics module (496)
- Added tag field to OptimizerParamsHandler (502)
- Improved ProgressBar with TerminateOnNan (506)
- Support for layer freezing with Tensorboard integration (515)
- Improved OutputHandler API (531)
- Improved create_lr_scheduler_with_warmup (556)
- Added "all" option to metric_names in contrib loggers (565)
- Added GPU usage info as metric (569)
- Other bug fixes
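
A sketch combining the new MLflow logger with the `"all"` option for `metric_names`; it assumes `mlflow` is installed, and the tag and event choice are illustrative:

```python
from ignite.engine import Engine, Events
from ignite.contrib.handlers.mlflow_logger import MLflowLogger, OutputHandler

trainer = Engine(lambda engine, batch: None)  # placeholder process function

mlflow_logger = MLflowLogger()

# log every metric found in engine.state.metrics ("all") at the end of each epoch
mlflow_logger.attach(
    trainer,
    log_handler=OutputHandler(tag="training", metric_names="all"),
    event_name=Events.EPOCH_COMPLETED,
)
```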

Notebook examples
- Added Cycle-GAN notebook (500)
- Finetune EfficientNet-B0 on CIFAR100 (544)
- Added Fashion MNIST jupyter notebook (549)

Updated nightly builds

From pip:

pip install --pre pytorch-ignite

From conda (note: this installs the pytorch nightly release instead of the stable version as a dependency):

conda install ignite -c pytorch-nightly


---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

ANUBHAVNATANI, Bibonaut, Evpok, Hiroshiba, JeroenDelcour, Mxbonn, anmolsjoshi, asford, bosr, johnstill, marrrcin, vfdev-5, willfrey

0.2.0

Core

- We removed the deprecated metric classes `BinaryAccuracy` and `CategoricalAccuracy`, which are replaced by [`Accuracy`](https://github.com/pytorch/ignite/blob/master/ignite/metrics/accuracy.py).

- Multilabel option for `Accuracy`, `Precision`, `Recall` metrics.

- Added other metrics:
  - [ConfusionMatrix, IoU, mean IoU](https://github.com/pytorch/ignite/blob/master/ignite/metrics/confusion_matrix.py)

- Operations on metrics: `p = Precision(average=False)` (see the sketch after this list)
  - apply PyTorch operators: `mean_precision = p.mean()`
  - indexing: `precision_no_bg = p[1:]`

- Improved our docs with more examples.
- Added FAQ section with best practices.

- Bug fixes
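
A short sketch of the metric operations listed above, composing an F1 score from `Precision` and `Recall`; the evaluator setup is omitted and the attach calls are shown only as comments:

```python
from ignite.metrics import Precision, Recall

precision = Precision(average=False)
recall = Recall(average=False)

# arithmetic on metrics yields a new, lazily computed metric
f1 = (precision * recall * 2 / (precision + recall)).mean()

# indexing drops classes, e.g. ignore class 0 (background)
precision_no_bg = precision[1:]

# composed metrics attach to an evaluator like any other metric, e.g.:
# f1.attach(evaluator, "f1")
# precision_no_bg.attach(evaluator, "precision_no_bg")
```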

Now in Contributions module
- added [`TensorboardLogger`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/tensorboard_logger.py)
- added [`VisdomLogger`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/visdom_logger.py)
- added [`PolyaxonLogger`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/polyaxon_logger.py)
- improved [`ProgressBar`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/tqdm_logger.py#L12)
- New [regression metrics](https://github.com/pytorch/ignite/tree/master/ignite/contrib/metrics/regression)
- Median Absolute Error
- Median Relative Absolute Error
- Median Absolute Percentage Error
- Geometric Mean Relative Absolute Error
- Canberra Metric
- Fractional Absolute Error
- Wave Hedges Distance
- Geometric Mean Absolute Error
- added new parameter scheduling classes and improved parameters (see the sketch after this list):
  - [PiecewiseLinear](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/param_scheduler.py#L604)
  - [LRScheduler](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/param_scheduler.py#L404)
  - other helper methods
- added custom events support: [`CustomPeriodicEvent`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/custom_events.py)
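
A sketch of `PiecewiseLinear` attached to a trainer; the milestone values are illustrative and the constructor arguments follow the current contrib API, which may differ slightly from the 0.2.0 one:

```python
import torch

from ignite.engine import Engine, Events
from ignite.contrib.handlers import PiecewiseLinear

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.0)

# ramp the learning rate to 0.1 over the first 1000 iterations,
# then decay it linearly back to 0 by iteration 3000
scheduler = PiecewiseLinear(
    optimizer,
    "lr",
    milestones_values=[(0, 0.0), (1000, 0.1), (3000, 0.0)],
)

trainer = Engine(lambda engine, batch: None)  # placeholder process function
trainer.add_event_handler(Events.ITERATION_STARTED, scheduler)
```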

[Notebook examples](https://github.com/pytorch/ignite/tree/master/examples/notebooks)
- VAE on MNIST
- CNN for text classification

Nightly builds with pytorch-nightly as dependency

We also provide `pip/conda` nightly builds with `pytorch-nightly` as a dependency:

pip install pytorch-ignite-nightly

or

conda install -c pytorch ignite-nightly


---
<details>
<summary>
Acknowledgments
</summary>

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou

vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!

</details>

0.1.2

- Improved and fixed bugs with binary accuracy, precision and recall
- Metric arithmetic
- ParamScheduler to support multiple optimizers/multiple parameter groups

Thanks to all our contributors!

0.1.1

What's new in this release:

- Contrib module with:
  - Parameter schedule
  - TQDM ProgressBar
  - ROC/AUC, AP, MaxAE metrics
  - TBPTT Engine
- New handlers:
  - Terminate on Nan
- New metrics:
  - RunningAverage
- Merged Categorical/Binary -> Accuracy
- Refactor of examples
- New examples:
  - Fast Neural Style
  - RL


Thanks to all our contributors!
