Adversarial-robustness-toolbox

Latest version: v1.19.1

0.1

Not secure
This is the initial release of ART. The following features are currently supported (see the usage sketches after the list):
- `Classifier` interface, supporting a few predefined architectures (CNN, ResNet, MLP) for standard datasets (MNIST, CIFAR-10), as well as custom models from users
- `Attack` interface, supporting a few evasion attacks:
  - FGM & FGSM
  - Jacobian saliency map attack
  - Carlini & Wagner L_2 attack
  - DeepFool
  - NewtonFool
  - Virtual adversarial method (to be used for virtual adversarial training)
  - Universal perturbation
- Defences:
  - Preprocessing interface, currently implemented by feature squeezing, label smoothing and spatial smoothing
  - Adversarial training
- Metrics for measuring robustness: empirical robustness (minimal perturbation), loss sensitivity and the CLEVER score
- Utilities for loading datasets, some preprocessing and common maths manipulations
- Scripts for launching some basic pipelines for training, testing and attacking
- Unit tests
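
As a sketch of how the `Classifier` and `Attack` interfaces fit together, the snippet below wraps a small PyTorch model and crafts FGSM examples on MNIST. It assumes the modern ART 1.x API (`art.estimators`, `art.attacks.evasion`, `art.utils.load_mnist`); the module paths in the 0.1 release differed, and the model and hyperparameters here are illustrative, not taken from the release.

```python
import numpy as np
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier
from art.utils import load_mnist

# Load MNIST through ART's dataset utility; pixel values are scaled to [0, 1].
(x_train, y_train), (x_test, y_test), min_val, max_val = load_mnist()
x_train = x_train.astype(np.float32)
x_test = x_test.astype(np.float32)

# A small MLP stands in for the predefined architectures (CNN, ResNet, MLP).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))

# Wrap the model in ART's classifier interface.
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(28, 28, 1),
    nb_classes=10,
    clip_values=(min_val, max_val),
)
classifier.fit(x_train, y_train, batch_size=128, nb_epochs=3)

# FGM with the L-infinity norm is the classic FGSM.
attack = FastGradientMethod(estimator=classifier, norm=np.inf, eps=0.1)
x_test_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == np.argmax(y_test, axis=1))
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == np.argmax(y_test, axis=1))
print(f"clean accuracy: {clean_acc:.3f}  adversarial accuracy: {adv_acc:.3f}")
```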
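
The defences can be sketched in the same setting, continuing from the snippet above. Feature squeezing, label smoothing and spatial smoothing are preprocessors, while adversarial training wraps the classifier and an attack; the 1.x module layout (`art.defences.preprocessor`, `art.defences.trainer`) is again an assumption, not the 0.1 paths.

```python
from art.defences.preprocessor import FeatureSqueezing, LabelSmoothing, SpatialSmoothing
from art.defences.trainer import AdversarialTrainer

# Feature squeezing reduces the bit depth of the inputs; spatial smoothing
# applies a local median filter. Both are callable preprocessors.
squeeze = FeatureSqueezing(clip_values=(0.0, 1.0), bit_depth=4)
smooth = SpatialSmoothing(window_size=3)
x_adv_squeezed, _ = squeeze(x_test_adv)
x_adv_smoothed, _ = smooth(x_test_adv)

# Label smoothing softens the one-hot training labels.
_, y_train_smooth = LabelSmoothing(max_value=0.9)(x_train, y_train)

# Adversarial training: `ratio` is the fraction of each training batch that
# is replaced by adversarial examples crafted with the given attack.
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, batch_size=128, nb_epochs=3)
```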
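
Finally, the three robustness metrics, assuming the 1.x `art.metrics` module exposes them under these names (continuing from the sketches above; sample counts and CLEVER parameters are illustrative):

```python
from art.metrics import clever_u, empirical_robustness, loss_sensitivity

# Empirical robustness: the average minimal perturbation a given attack
# (here FGSM) needs to flip predictions, normalised by input magnitude.
emp_rob = empirical_robustness(classifier, x_test[:100], attack_name="fgsm",
                               attack_params={"eps_step": 0.01})

# Loss sensitivity: the average norm of the loss gradient at the given points.
sensitivity = loss_sensitivity(classifier, x_test[:100], y_test[:100])

# CLEVER: an attack-independent estimate of a lower bound on the minimal
# perturbation needed to change the prediction for a single (untargeted) sample.
clever_score = clever_u(classifier, x_test[0], nb_batches=10, batch_size=5,
                        radius=0.3, norm=2)

print(emp_rob, sensitivity, clever_score)
```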
