RobustBench

Latest version: v1.1


1.1

What's Changed
* Added new model IDs by nmndeep in https://github.com/RobustBench/robustbench/pull/52
* Modas2021_PRIME models added by nmndeep in https://github.com/RobustBench/robustbench/pull/57
* readme correction by nmndeep in https://github.com/RobustBench/robustbench/pull/60
* Remove all `.cuda()` calls by dedeswim in https://github.com/RobustBench/robustbench/pull/61
* Enable using ImageNet benchmarking for new models by dedeswim in https://github.com/RobustBench/robustbench/pull/64
* add Kang2021Stable model by fra31 in https://github.com/RobustBench/robustbench/pull/66
* sort models including external evaluations by fra31 in https://github.com/RobustBench/robustbench/pull/69
* add results from Erichson2022NoisyMix by fra31 in https://github.com/RobustBench/robustbench/pull/70
* Explicit encoding in setup.py to fix 72 by dedeswim in https://github.com/RobustBench/robustbench/pull/74
* Updated results for Sehwag2021Proxy by VSehwag in https://github.com/RobustBench/robustbench/pull/76
* Pang2022Robustness models added by nmndeep in https://github.com/RobustBench/robustbench/pull/77
* additional data flags corrected by nmndeep in https://github.com/RobustBench/robustbench/pull/78
* add models from Jia2022LAS-AT by fra31 in https://github.com/RobustBench/robustbench/pull/80
* remove unnecessary file when loading custom ImageNet by CNOCycle in https://github.com/RobustBench/robustbench/pull/75
* update info of Sridhar2021Robust models by fra31 in https://github.com/RobustBench/robustbench/pull/82
* ImageNet-3DCC and corruption updates by ofkar in https://github.com/RobustBench/robustbench/pull/85
* add models from Addepalli2022Efficient by fra31 in https://github.com/RobustBench/robustbench/pull/91
* throw error when too many images are requested by fra31 in https://github.com/RobustBench/robustbench/pull/96
* Add Debenedetti2022Light and support for timm by dedeswim in https://github.com/RobustBench/robustbench/pull/100

New Contributors
* CNOCycle made their first contribution in https://github.com/RobustBench/robustbench/pull/75
* ofkar made their first contribution in https://github.com/RobustBench/robustbench/pull/85

**Full Changelog**: https://github.com/RobustBench/robustbench/compare/v1.0...v1.1

1.0

Updates:
- New ImageNet leaderboards (Linf and common corruptions) and the corresponding benchmarking functions, including ImageNet evaluation on a fixed subset of 10% of the test set (see the sketch after this list).
- New models and leaderboard entries (now in total: 120+ evaluations, 80+ models).
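A minimal sketch of how the new ImageNet benchmarking might be invoked, assuming the README's `load_model`/`benchmark` API; the model ID `Salman2020Do_R18` and `eps=4/255` are illustrative values taken from the public Linf leaderboard, not from this release note:

```python
from robustbench.eval import benchmark
from robustbench.utils import load_model

# Load an ImageNet Linf entry from the Model Zoo (ID from the leaderboard).
model = load_model(model_name='Salman2020Do_R18',
                   dataset='imagenet', threat_model='Linf')
model.eval()

# Evaluate clean and robust accuracy on the fixed 10% subset (5000 images).
clean_acc, robust_acc = benchmark(model,
                                  dataset='imagenet',
                                  threat_model='Linf',
                                  n_examples=5000,
                                  eps=4 / 255)
```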

0.2.1

This minor release improves some internals and fixes some bugs in the model zoo.

Internals improvements:

- When no normalization is applied, the models in the model zoo are now loaded with anonymous `lambda` functions instead of full-fledged classes, keeping the code cleaner and more concise (see the sketch after this list).
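A minimal sketch of the pattern; the identifiers here are hypothetical, not the actual model-zoo code:

```python
import torch

# A lambda replaces a dedicated normalization class: one for models trained
# on normalized inputs, an identity one for models that take raw [0, 1] inputs.
mean = torch.tensor([0.4914, 0.4822, 0.4465]).view(3, 1, 1)
std = torch.tensor([0.2471, 0.2435, 0.2616]).view(3, 1, 1)

normalize = lambda x: (x - mean) / std  # model expects normalized inputs
identity = lambda x: x                  # model expects raw [0, 1] inputs

x = torch.rand(8, 3, 32, 32)
assert normalize(x).shape == identity(x).shape
```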

Bug fixes:

- The CIFAR-100 version of Hendrycks2019Using was missing the rescaling of inputs to `[-1, 1]`, leading to poor accuracy (see the sketch after this list).
- The CIFAR-100 version of Rice2020Overfitting uses a `forward` method in its PreActBlock that differs slightly from the other PreActResNets; this difference was missing.
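For the Hendrycks2019Using fix, the missing rescaling amounts to mapping `[0, 1]` inputs to `[-1, 1]`; a standalone sketch of the transformation, not the actual model-zoo code:

```python
import torch

# Map inputs from [0, 1] to [-1, 1] before the forward pass.
rescale = lambda x: 2.0 * x - 1.0

x = torch.rand(4, 3, 32, 32)  # a CIFAR batch in [0, 1]
y = rescale(x)
assert y.min() >= -1.0 and y.max() <= 1.0
```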

0.2

This release adds the following new leaderboards:

- CIFAR-100 Linf
- CIFAR-100-C (common corruptions)

Moreover, it adds to the model zoo:

- 7 new models for CIFAR-10 Linf (28 in total)
- 2 new models for CIFAR-10 L2 (11 in total)
- 10 new models for CIFAR-100 Linf (10 in total)
- 2 new models for CIFAR-100-C (2 in total)

It also fixes some bugs and improves some internals:

- The common-corruptions datasets are now downloaded from the original Zenodo repositories instead of Google Drive.
- The `benchmark` function now raises a warning when it is run on a model that is not in `.eval()` mode (see the sketch after this list).
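A sketch of what such an eval-mode check looks like as a standalone function; the real implementation inside `benchmark` may differ:

```python
import warnings

import torch.nn as nn

def warn_if_training(model: nn.Module) -> None:
    # Hypothetical standalone version of the check: warn when a model is
    # benchmarked while still in training mode (dropout and batch-norm
    # statistics would otherwise distort the evaluation).
    if model.training:
        warnings.warn("Model is in training mode; call model.eval() "
                      "before benchmarking.")

model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU())
warn_if_training(model)  # warns: nn.Module defaults to training mode
model.eval()
warn_if_training(model)  # silent
```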

0.1

This is the first stable release of RobustBench. It includes the following features:

- A Model Zoo and a leaderboard containing up-to-date, state-of-the-art models trained on CIFAR-10 for robustness to L2 and Linf adversarial perturbations and to common corruptions (a loading sketch follows this list).
- An updated API that makes it easier to add new datasets and threat models.
- A function to benchmark the robustness of new models on a given dataset and threat model.
- Functions to automatically generate the leaderboards of [our website](https://robustbench.github.io), and to generate LaTeX tables.
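For example, a Model Zoo entry can be loaded by its leaderboard ID; a minimal sketch using the model ID from the project README (weights are downloaded on first use):

```python
from robustbench.utils import load_model

# Load a CIFAR-10 Linf entry from the Model Zoo by its leaderboard ID.
model = load_model(model_name='Carmon2019Unlabeled',
                   dataset='cifar10', threat_model='Linf')
model.eval()
```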
