Adversarial Robustness Toolbox (ART)

Latest version: v1.19.1

0.7.0

Not secure
This release contains a **new poison removal method**, as well as some restructuring of features recently added to the library.

Added
- Poison removal method that retrains the classifier, added to the `ActivationDefence` class (see the sketch after this list)
- Example script showing how to use the poison removal method
- New module `wrappers` containing features that alter the behaviour of a `Classifier`. These are meant to wrap classifiers and be passed directly to evasion attack instances.
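
For orientation, here is a minimal sketch of the new poison removal flow. It assumes `classifier`, `x_train`, and `y_train` are a fitted ART classifier and its (possibly poisoned) training data; the clustering arguments and the retraining call are assumptions based on this changelog, not verified signatures.

```python
# Minimal sketch; `classifier`, `x_train`, `y_train` are assumed to exist,
# and the retraining call below is illustrative, not a verified signature.
from art.poison_detection import ActivationDefence

defence = ActivationDefence(classifier, x_train, y_train)

# Cluster the model's activations to flag suspicious training points.
report, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10, reduce="PCA")

# New in 0.7.0: repair the classifier by relabelling flagged points and
# retraining (method name and return values assumed for illustration).
improvement, fixed_classifier = ActivationDefence.relabel_poison_ground_truth(
    classifier, x_train, y_train
)
```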

Changed
- `ExpectationOverTransformations` has been moved to the `wrappers` module
- `QueryEfficientBBGradientEstimation` has been moved to the `wrappers` module

Removed
- Attacks no longer take an `expectation` parameter (breaking change). Instead, wrap the classifier in an `ExpectationOverTransformations` instance and pass the wrapper directly to the attack, as sketched below.
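
To illustrate the migration, a hedged before/after sketch, assuming `classifier` and `x_test` exist and `FastGradientMethod` is the attack; the wrapper's constructor arguments and the transformation sampler are assumptions.

```python
# Migration sketch; wrapper constructor arguments are assumptions.
import numpy as np

from art.attacks import FastGradientMethod
from art.wrappers import ExpectationOverTransformations

def transformation_sampler(x):
    # Assumed user-supplied sampler: yields randomly perturbed copies of x.
    while True:
        yield x + np.random.normal(0, 0.05, size=x.shape).astype(np.float32)

# Before 0.7.0 (no longer supported): the attack took an `expectation` parameter.
# attack = FastGradientMethod(classifier, expectation=eot)

# From 0.7.0 on: wrap the classifier first, then hand the wrapper to the attack.
eot_classifier = ExpectationOverTransformations(
    classifier, sample_size=10, transformation=transformation_sampler
)
attack = FastGradientMethod(eot_classifier, eps=0.1)
x_adv = attack.generate(x_test)
```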

Fixed
- Bug in spatial transformations attack: when the attack does not succeed, the original samples are now returned (issue 40, fixed in 42, 43)
- Bug in Keras with loss functions that do not take labels in one-hot encoding (issue 41)
- Bug fix in activation defence against poisoning: incorrect test condition
- Bug fix in DeepFool: inverted stop condition when working with batches
- Import problem in `utils.py`: top level imports were forcing users to install all supported ML frameworks

0.6.0

Not secure
Added
- PixelDefend defense
- Query-efficient black-box gradient estimates (NES)
- A general wrapper for classifiers that allows changing their behaviour (see `art/classifiers/wrapper.py`)
- 3D plot in visualization
- Saver for `PyTorchClassifier`
- Pickling for `KerasClassifier` (both sketched after this list)
- Representation for all classifiers
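
A hedged sketch of the new persistence features, assuming `keras_classifier` and `pytorch_classifier` are fitted ART classifiers; the `save` argument names are assumptions.

```python
# Sketch; the classifiers are assumed to exist, `save` arguments are assumptions.
import pickle

# New in 0.6.0: KerasClassifier instances can be pickled and restored.
with open("keras_classifier.pkl", "wb") as f:
    pickle.dump(keras_classifier, f)
with open("keras_classifier.pkl", "rb") as f:
    restored = pickle.load(f)

# New in 0.6.0: PyTorchClassifier gains a saver.
pytorch_classifier.save(filename="model.pt", path="./checkpoints")
```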

Changed
- We now use pretrained models for unit tests (see `art/utils.py`, functions `get_classifier_pt`, `get_classifier_kr`, `get_classifier_tf`)
- Keras models now accept any loss function

Removed
- `Detector` abstract class. Detectors now directly extend `Classifier`

Thanks also to our external contributors!
AkashGanesan

0.5.0

Not secure
This release of ART adds two new evasion attacks and several bug fixes, along with new features such as access to the learning phase (training/test) through the `Classifier` API, batching in evasion attacks, and expectation over transformations.

Added
- Spatial transformations evasion attack (class `art.attacks.SpatialTransformations`)
- Elastic net (EAD) evasion attack (class `art.attacks.ElasticNet`)
- Data generator support for multiple types of TensorFlow iterators
- New function and property in the `Classifier` API allowing explicit control of the learning phase (train/test)
- Reports for poisoning module
- Most evasion attacks now support batching, controlled by the new `batch_size` parameter (see the sketch after this list)
- `ExpectationOverTransformations` class, to be used with evasion attacks
- New `expectation` parameter of evasion attacks allows specifying the use of expectation over transformations
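
A hedged sketch of two of these additions, assuming `classifier` is a fitted ART classifier and `x_test` a NumPy batch; `batch_size` comes from this changelog, while the learning-phase setter name and other arguments are assumptions.

```python
# Sketch; `classifier` and `x_test` are assumed to exist. `batch_size` comes
# from this changelog; the setter name is an assumption.
from art.attacks import ElasticNet

# Explicitly put the model into test mode via the new learning-phase API.
classifier.set_learning_phase(False)

# The new EAD attack, using the new batching support.
attack = ElasticNet(classifier, batch_size=64)
x_adv = attack.generate(x_test)
```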

Changed
- Update list of attacks supported by universal perturbation
- PyLint and Travis configs

Fixed
- Indexing error in C&W L_2 attack (issue 29)
- Universal perturbation stop condition: the attack was always stopping after one iteration
- Error with data subsampling in `AdversarialTrainer` when the ratio of adversarial samples is 1

0.4.0

Not secure
Added
- Class `art.classifiers.EnsembleClassifier`: support for ensembles under `Classifier` interface
- Module `art.data_generators`: data feeders for dynamic loading and augmentation for all frameworks
- New function `fit_generator` for classifiers and the adversarial trainer
- C&W L_inf attack
- Class `art.defences.JpegCompression`: JPEG compression as preprocessing defence
- Class `art.defences.ThermometerEncoding`: thermometer encoding as preprocessing defence
- Class `art.defences.TotalVarMin`: total variance minimization as preprocessing defence
- Function `art.utils.master_seed`: sets the master seed for random number generators (see the sketch after this list)
- `pylint` for Travis
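
A hedged sketch combining two of these additions; `master_seed` comes straight from this changelog, while the `JpegCompression` arguments and call convention are assumptions.

```python
# Sketch; the defence's `quality` argument and call convention are assumptions.
import numpy as np

from art.defences import JpegCompression
from art.utils import master_seed

# Seed all supported random number generators at once for reproducible runs.
master_seed(42)

# Apply JPEG compression as a preprocessing defence before prediction.
x = np.random.rand(8, 32, 32, 3).astype(np.float32)
defence = JpegCompression(quality=75)
x_compressed, _ = defence(x)
```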

Changed
- Restructure analyzers from poisoning module

Fixed
- PyTorch classifier support on GPU

0.3.0

Not secure
This release brings many new features to ART, including a poisoning module, an adversarial sample detection module and support for MXNet models.

Added
- Access to layers and model activations through the `Classifier` API (sketched after this list)
- MXNet support
- Poison detection module, containing a poisoning detection method based on clustering activations
- Jupyter notebook with a poisoning attack and detection example on MNIST
- Adversarial sample detection module, containing two detectors: one based on inputs and one based on activations
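
A hedged sketch of the new layer access, assuming `classifier` is a fitted ART classifier and `x_test` a NumPy batch; the property and method names are assumptions consistent with the `Classifier` API described here.

```python
# Sketch; `classifier` and `x_test` are assumed to exist, and the
# `layer_names` / `get_activations` names are assumptions.
print(classifier.layer_names)  # list the layers exposed by the model

# Fetch the activations of an intermediate layer for a small batch.
activations = classifier.get_activations(x_test[:16], layer=2)
print(activations.shape)
```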

Changed
- Optimized JSMA attack (`art.attacks.SaliencyMapMethod`) - can now run on ImageNet data
- Optimized C&W attack (`art.attacks.CarliniL2Method`)
- Improved adversarial trainer, now covering a wide range of setups

Removed
- Hard-coded `config` folder. The config is now created on the fly when running ART for the first time and stored in the home folder under `~/.art`

0.2.0

Not secure
This release makes ART framework-independent. The following backends are now supported: TensorFlow, Keras and PyTorch.

Added
- New framework-independent `Classifier` interface
- Backend support for TensorFlow, Keras and PyTorch
- Basic interface for detecting adversarial samples (no concrete method implemented yet)
- Gaussian augmentation (sketched after this list)
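
A hedged sketch of the new Gaussian augmentation; the `sigma` and `ratio` arguments and the call convention are assumptions based on later ART versions.

```python
# Sketch; constructor arguments and call convention are assumptions.
import numpy as np

from art.defences import GaussianAugmentation

x_train = np.random.rand(100, 28, 28, 1).astype(np.float32)
y_train = np.eye(10)[np.random.randint(0, 10, size=100)]

# Append noisy copies of the training data to make models more robust to noise.
augmenter = GaussianAugmentation(sigma=1.0, ratio=1.0)
x_aug, y_aug = augmenter(x_train, y_train)
print(x_aug.shape)  # original plus augmented samples
```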

Changed
- All attacks now fit the new `Classifier` interface

Fixed
- `to_categorical` utility function for unsqueezed labels
- Norms in CLEVER score
- Source code folder name, fixing the PyPI install

Removed
- Hard-coded architectures for datasets / model types: CNN, ResNet, MLP
