PyTorch-Ignite

Latest version: v0.5.2


Page 4 of 5

0.4.0

Core

BC breaking changes

- Simplified engine - BC breaking change (940, 939, 938)
  - no more internal patching of torch `DataLoader`.
  - the `seed` argument of `Engine.run` is deprecated.
  - the previous behaviour can be achieved with `DeterministicEngine`, introduced in 939.
- Make all `Events` be `CallableEventsWithFilter` (788).
- Make ignite compatible only with pytorch >1.0 (1016).
  - ignite is tested on the latest and nightly versions of pytorch.
  - exact compatibility with previous versions can be checked [here](https://github.com/pytorch/ignite/actions?query=workflow%3A.github%2Fworkflows%2Fpytorch-version-tests.yml).
- Remove deprecated arguments from `BaseLogger` (1051).
- Deprecated `CustomPeriodicEvent` (984).
- `RunningAverage` now computes output quantity average instead of a sum in DDP (991).
- `Checkpoint` now stores files with a `.pt` extension instead of `.pth` (873).
- The `archived` argument of `Checkpoint` and `ModelCheckpoint` is deprecated (873).
- Now `create_supervised_trainer` and `create_supervised_evaluator` do not move model to device (910).

New Features and bug fixes

Ignite Distributed [Experimental]
- Introduced the `ignite.distributed as idist` module (1045)
  - common interface for distributed applications and helper methods, e.g. `get_world_size()`, `get_rank()`, ...
  - supports native torch distributed configuration and XLA devices.
  - metrics computation works in all supported distributed configurations: GPUs and TPUs.

Engine & Events
- Add flexibility on event handlers by packing triggering events (868).
- `Engine` argument is now optional in event handlers (889, 919).
- We initialize `engine.state` before calling `engine.run` (1028).
- `Engine` can run on dataloader based on `IterableDataset` and without specifying `epoch_length` (1077).
- Added user keys into Engine's state dict (914).
- Bug fixes in `Engine` class (1048, 994).
- The `epoch_length` argument is now optional (985)
  - suitable for iterators of finite but unknown length.
- Added times in `engine.state` (958).

Metrics
- Add `Frequency` metric for ops/s calculations (760, 783, 976).
- Metrics computation can be customized with the introduced `MetricUsage` (979, 1054)
  - batch-wise/epoch-wise or custom-programmed metric `update` and `compute` methods.
- `Metric` can be detached (827).
- Fixed a bug in `RunningAverage` when the output is a torch tensor (943).
- Improved computation performance of `EpochMetric` (967).
- Fixed average recall value of `ConfusionMatrix` (846).
- Now metrics can be serialized using `dill` (930).
- Added support for nested metric values (968).

Handlers and utils
- Checkpoint: improved filename when the score value is an integer (758).
- Checkpoint: fixed returning the worst model of the saved models (745).
- Checkpoint: `load_objects` can load single object checkpoints (772).
- Checkpoint: we now save only one checkpoint per priority (847).
- Checkpoint: added kwargs to `Checkpoint.load_objects` (861).
- Checkpoint: now saves `model.module.state_dict()` for DDP and DP (1086).
- Checkpoint and related: other improvements (937).
- Support namedtuple for `convert_tensor` (740).
- Added decorator `one_rank_only` (882).
- Update `common.py` (904).

Contrib

- Added `FastaiLRFinder` (596).

Metrics
- Added Roc Curve and Precision/Recall Curve to the metrics (875).

Parameters scheduling
- Enabled multi params group for `LRScheduler` (1027).
- Parameters scheduling improvements (1072, 859).

Support of experiment tracking systems
- Add `NeptuneLogger` (730, 821, 951, 954).
- Add `TrainsLogger` (1020, 1036, 1043).
- Add `WandbLogger` (926).
- Added `visdom_logger` to common module (796).
- TensorboardX is no longer mandatory if pytorch>=1.2 (858).
- Simplified `BaseLogger` attach APIs (1006).
- Added kwargs to loggers' constructors and respective setup functions (1015).

Time profiling
- Added basic time profiler to `contrib.handlers` (729).

Bug fixes (some of PRs)
- `ProgressBar` output not in sync with epoch counts (773).
- Fixed `ProgressBar.log_message` (768).
- `ProgressBar` now accounts for the `epoch_length` argument (785).
- Fixed broken `ProgressBar` if data is iterator without epoch length (995).
- Improved `setup_logger` for multiple calls (962).
- Fixed incorrect log position (1099).
- Added missing colon to logging message (1101).

Examples
- Basic example of `FastaiLRFinder` on MNIST (838).
- CycleGAN auto-mixed precision training example with NVidia/Apex or native `torch.cuda.amp` (888).
- Added `setup_logger` to mnist examples (953).
- Added MNIST example on TPU (956).
- Benchmark amp on Cifar100 (917).
- `TrainsLogger` semantic segmentation example (1095).

Housekeeping (some of PRs)
- Documentation updates (711, 727, 734, 736, 742, 743, 759, 798, 780, 808, 817, 826, 867, 877, 908, 909, 911, 928, 942, 986, 989, 1002, 1031, 1035, 1083, 1092).
- Offerings to the CI gods (713, 761, 762, 776, 791, 801, 803, 879, 885, 890, 894, 933, 981, 982, 1010, 1026, 1046, 1084, 1093).
- Test improvements (779, 807, 854, 891, 975, 1021, 1033, 1041, 1058).
- Added `Serializable` in mixins (1000).
- Merged `EpochMetric` into `_BaseRegressionEpoch` (970).
- Adding typing to ignite (716, 751, 800, 844, 944, 1037).
- Drop Python 2 support finalized (806).
- Dynamic typing (723).
- Splits engine into multiple parts (724).
- Add Python 3.8 to Conda builds (781).
- Black formatted codebase with pre-commit files (792).
- Activate dpl v2 for Travis CI (804).
- AutoPEP8 (805).
- Fixes nightly version bug (809).
- Fixed device conversion method (887).
- Refactored deps installation (931).
- Return handler in helpers (997).
- Fixes 833 (1001).
- Disable propagation of loggers to ancestors (1013).
- Consistent PEP8-compliant imports layout (901).


---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Crissman, DhDeepLIT, GabrielePicco, InCogNiTo124, ItamarWilf, Joxis, Muhamob, Yevgnen, anmolsjoshi, bendboaz, bmartinn, cajanond, chm90, cqql, czotti, erip, fdlm, hoangmit, isolet, jakubczakon, jkhenning, kai-tub, maxfrei750, michiboo, mkartik, sdesrozis, sisp, vfdev-5, willfrey, xen0f0n, y0ast, ykumards

0.4rc.0.post1

0.3.0

Core
- Added State repr and input batch as engine.state.batch (641)
- Adapted core metrics only to be used in distributed configuration (635)
- Added fbeta metric as core metric (653)
- Added event filtering feature (e.g. every/once/event filter logic) (656)
- **BC breaking change**: Refactor ModelCheckpoint into Checkpoint + DiskSaver / ModelCheckpoint (673)
- Added option `n_saved=None` to store all checkpoints (703)
- Improved accumulation metrics (681)
- Early stopping min delta (685)
- Dropped Python 2.7 support (699)
- Added feature: Metric can accept a dictionary (689)
- Added Dice Coefficient metric (680)
- Added helper method to simplify the setup of class loggers (712)

Engine refactoring (BC breaking change)

Finally solved issue 62: training can now be resumed from an epoch or iteration.

- Engine refactoring + features (640)
  - engine checkpointing
  - variable epoch length defined by `epoch_length`
  - two additional events: `GET_BATCH_STARTED` and `GET_BATCH_COMPLETED`
  - [cifar10 example](https://github.com/pytorch/ignite/tree/v0.3.0/examples/contrib/cifar10#check-resume-training) with save/resume in a distributed configuration

Contrib
- Improved `create_lr_scheduler_with_warmup` (646)
- Added helper method to plot param scheduler values with matplotlib (650)
- **BC breaking change**: parameter schedulers now handle multiple optimizer param groups (690)
- Added state_dict/load_state_dict (690)
- **BC breaking change**: let the user specify tqdm parameters for log_message (695)


Examples
- Added an example of hyperparameters tuning with Ax on CIFAR10 (652)
- Added CIFAR10 distributed example

Reproducible trainings as "References"

Inspired by torchvision/references, we provide several reproducible baselines for vision tasks:

- [ImageNet](https://github.com/pytorch/ignite/blob/master/examples/references/classification/imagenet)
- [Pascal VOC2012](https://github.com/pytorch/ignite/blob/master/examples/references/segmentation/pascal_voc2012)

Features:

- Distributed training with mixed precision by nvidia/apex
- Experiments tracking with MLflow or Polyaxon


---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

anubhavashok, kagrze, maxfrei750, vfdev-5

0.2.1

Core

Various improvements in the core part of the library:
- Add `epoch_bound` parameter to `RunningAverage` (488)
- Bug fixes with Confusion matrix, new implementation (572) - **BC breaking**
- Added `event_to_attr` in register_events (523)
- Added accumulative single variable metrics (524)
- `should_terminate` is reset between runs (525)
- `to_onehot` returns tensor with uint8 dtype (571) - **may be BC breaking**
- Removable handle returned from `Engine.add_event_handler()` to enable single-shot events (588)

- _New documentation style_ 🎉

Distributed
We removed the mnist distrib example as being misleading and ~~provided a [distrib](https://github.com/pytorch/ignite/tree/distrib) branch~~ (XX/YY/2020: the `distrib` branch was merged to master) to adapt metrics for distributed computation. The code is working and under testing. Please try it in your use-case and leave us feedback.

Now in Contributions module
- Added mlflow logger (558)
- R-Squared Metric in regression metrics module (496)
- Add tag field to OptimizerParamsHandler (502)
- Improved ProgressBar with TerminateOnNan (506)
- Support for layer freezing with Tensorboard integration (515)
- Improved OutputHandler API (531)
- Improved create_lr_scheduler_with_warmup (556)
- Added "all" option to metric_names in contrib loggers (565)
- Added GPU usage info as metric (569)
- Other bug fixes

Notebook examples
- Added Cycle-GAN notebook (500)
- Finetune EfficientNet-B0 on CIFAR100 (544)
- Added Fashion MNIST jupyter notebook (549)

Updated nightly builds

From pip:

pip install --pre pytorch-ignite

From conda (this installs the pytorch nightly release instead of the stable version as a dependency):

conda install ignite -c pytorch-nightly


---
Acknowledgments

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

ANUBHAVNATANI, Bibonaut, Evpok, Hiroshiba, JeroenDelcour, Mxbonn, anmolsjoshi, asford, bosr, johnstill, marrrcin, vfdev-5, willfrey

0.2.0

Core

- We removed the deprecated metric classes `BinaryAccuracy` and `CategoricalAccuracy`, which are replaced by [`Accuracy`](https://github.com/pytorch/ignite/blob/master/ignite/metrics/accuracy.py).

- Multilabel option for `Accuracy`, `Precision`, `Recall` metrics.

- Added other metrics:
  - [ConfusionMatrix, IoU, mean IoU](https://github.com/pytorch/ignite/blob/master/ignite/metrics/confusion_matrix.py)

- Operations on metrics: `p = Precision(average=False)`
  - apply PyTorch operators: `mean_precision = p.mean()`
  - indexing: `precision_no_bg = p[1:]`

- Improved our docs with more examples.
- Added FAQ section with best practices.

- Bug fixes

Now in Contributions module
- added [`TensorboardLogger`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/tensorboard_logger.py)
- added [`VisdomLogger`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/visdom_logger.py)
- added [`PolyaxonLogger`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/polyaxon_logger.py)
- improved [`ProgressBar`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/tqdm_logger.py#L12)
- New [regression metrics](https://github.com/pytorch/ignite/tree/master/ignite/contrib/metrics/regression)
  - Median Absolute Error
  - Median Relative Absolute Error
  - Median Absolute Percentage Error
  - Geometric Mean Relative Absolute Error
  - Canberra Metric
  - Fractional Absolute Error
  - Wave Hedges Distance
  - Geometric Mean Absolute Error
- added new parameter scheduling classes and improved parameters:
  - [PiecewiseLinear](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/param_scheduler.py#L604)
  - [LRScheduler](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/param_scheduler.py#L404)
  - other helper methods
- added custom events support: [`CustomPeriodicEvent`](https://github.com/pytorch/ignite/blob/master/ignite/contrib/handlers/custom_events.py)

[Notebook examples](https://github.com/pytorch/ignite/tree/master/examples/notebooks)
- VAE on MNIST
- CNN for text classification

Nightly builds with pytorch-nightly as dependency

We also provide `pip/conda` nightly builds with `pytorch-nightly` as a dependency:

pip install pytorch-ignite-nightly

or

conda install -c pytorch ignite-nightly


---
<details>
<summary>
Acknowledgments
</summary>

🎉 Thanks to our community and all our contributors for the issues, PRs and 🌟 ⭐️ 🌟 !
💯 We really appreciate your involvement in the project (in alphabetical order):

Bibonaut, IlyaOvodov, TheCodez, anmolsjoshi, fabianschilling, maaario, snowyday, vfdev-5, willprice, zasdfgbnm, zippeurfou

vfdev-5 would also like to thank his wife and newborn baby girl Nina for their support while working on this release!

</details>

0.1.2

- Improved and fixed bugs with binary accuracy, precision and recall
- Metric arithmetic
- `ParamScheduler` now supports multiple optimizers and multiple parameter groups

Thanks to all our contributors !


© 2025 Safety CLI Cybersecurity Inc. All Rights Reserved.