Adversarial-robustness-toolbox

Latest version: v1.18.2

0.5.0

Not secure
This release of ART adds two new evasion attacks and provides several bug fixes, along with new features such as access to the learning phase (training/test) through the `Classifier` API, batching in evasion attacks, and expectation over transformations.

Added
- Spatial transformations evasion attack (class `art.attacks.SpatialTransformations`)
- Elastic net (EAD) evasion attack (class `art.attacks.ElasticNet`)
- Data generator support for multiple types of TensorFlow iterators
- New function and property in the `Classifier` API that allow explicit control of the learning phase (train/test)
- Reports for poisoning module
- Most evasion attacks now support batching, specified via the new `batch_size` parameter (see the sketch after this list)
- `ExpectationOverTransformations` class, to be used with evasion attacks
- New `expectation` parameter of evasion attacks for specifying the use of expectation over transformations
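
A minimal sketch of the new batching support, assuming `classifier` is a model already wrapped in ART's `Classifier` API and `x_test` is a NumPy array of inputs; the exact constructor signature of `ElasticNet` shown here is an assumption and may differ between ART versions.

```python
# Hypothetical usage of the new `batch_size` parameter on evasion attacks.
# `classifier` is assumed to be an ART-wrapped model, `x_test` a NumPy batch.
from art.attacks import ElasticNet

attack = ElasticNet(classifier, batch_size=64)  # craft adversarial examples in batches of 64
x_adv = attack.generate(x_test)

# The new `expectation` parameter would analogously take an
# ExpectationOverTransformations instance so that gradients are averaged
# over random transformations of the input.
```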

Changed
- Update list of attacks supported by universal perturbation
- PyLint and Travis configs

Fixed
- Indexing error in C&W L_2 attack (issue 29)
- Universal perturbation stop condition: attack was always stopping after one iteration
- Error with data subsampling in `AdversarialTrainer` when the ratio of adversarial samples is 1

0.4.0

Not secure
Added
- Class `art.classifiers.EnsembleClassifier`: support for ensembles under `Classifier` interface
- Module `art.data_generators`: data feeders for dynamic loading and augmentation for all frameworks
- New function `fit_generator` for classifiers and the adversarial trainer
- C&W L_inf attack
- Class `art.defences.JpegCompression`: JPEG compression as preprocessing defence
- Class `art.defences.ThermometerEncoding`: thermometer encoding as preprocessing defence
- Class `art.defences.TotalVarMin`: total variance minimization as preprocessing defence
- Function `art.utils.master_seed` for setting the master seed of random number generators (see the sketch after this list)
- `pylint` for Travis
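
A minimal sketch of the seeding utility and one of the new preprocessing defences; the `quality` argument and the callable defence interface shown here are assumptions and may not match every ART version.

```python
# Hypothetical usage of master_seed and the JpegCompression preprocessing
# defence; argument names are assumptions and may differ between versions.
import numpy as np

from art.defences import JpegCompression
from art.utils import master_seed

master_seed(1234)  # fix the master seed for reproducible experiments

x = np.random.rand(16, 32, 32, 3).astype(np.float32)  # batch of images in [0, 1]
defence = JpegCompression(quality=75)                  # assumed constructor argument
x_compressed, _ = defence(x)                           # preprocessing defences return (x, y)
```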

Changed
- Restructure analyzers from poisoning module

Fixed
- PyTorch classifier support on GPU

0.3.0

Not secure
This release brings many new features to ART, including a poisoning module, an adversarial sample detection module and support for MXNet models.

Added
- Access to layers and model activations through the `Classifier` API (see the sketch after this list)
- MXNet support
- Poisoning detection module, containing a detection method based on clustering activations
- Jupyter notebook with poisoning attack and detection example on MNIST
- Adversarial samples detection module, containing two detectors: one based on inputs and one based on activations
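
A minimal sketch of the new layer and activation access, assuming `classifier` is an ART-wrapped model and `x_test` a NumPy batch; the property and method names follow the `Classifier` API, but their exact signatures are assumptions.

```python
# Hypothetical usage of layer/activation access through the Classifier API.
# `classifier` is assumed to be an ART-wrapped model, `x_test` a NumPy batch.
print(classifier.layer_names)                              # names of the wrapped model's layers
activations = classifier.get_activations(x_test, layer=0)  # activations at the first layer
```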

Changed
- Optimized JSMA attack (`art.attacks.SaliencyMapMethod`) - can now run on ImageNet data
- Optimized C&W attack (`art.attacks.CarliniL2Method`)
- Improved adversarial trainer, now covering a wide range of setups

Removed
- Hard-coded `config` folder. The configuration is now created on the fly the first time ART runs and is stored in the home folder under `~/.art`

0.2.0

Not secure
This release makes ART framework-independent. The following backends are now supported: TensorFlow, Keras and PyTorch.

Added
- New framework-independent `Classifier` interface (see the sketch after this list)
- Backend support for TensorFlow, Keras and PyTorch
- Basic interface for detecting adversarial samples (no concrete method implemented for now)
- Gaussian augmentation
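
A minimal end-to-end sketch of the framework-independent interface: wrap a small Keras model once, then run an ART attack against it; the same attack code would work with a TensorFlow or PyTorch wrapper. Module paths and keyword names follow the 0.x API named in these notes and are assumptions that changed in later releases.

```python
# Hypothetical sketch: wrap a Keras model in the framework-independent
# Classifier interface and attack it with FGSM. Module paths and keyword
# names are assumptions for the 0.x API (ART 1.x moved classifiers to
# art.estimators.classification).
from keras.layers import Dense, Flatten
from keras.models import Sequential

from art.attacks import FastGradientMethod
from art.classifiers import KerasClassifier
from art.utils import load_mnist

(x_train, y_train), (x_test, y_test), min_, max_ = load_mnist()

model = Sequential([
    Flatten(input_shape=x_train.shape[1:]),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

classifier = KerasClassifier(model=model, clip_values=(min_, max_))  # assumed keyword names
classifier.fit(x_train, y_train, nb_epochs=1, batch_size=128)

attack = FastGradientMethod(classifier, eps=0.1)  # backend-agnostic: identical code for TF/PyTorch wrappers
x_adv = attack.generate(x_test)
```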

Changed
- All attacks now fit the new `Classifier` interface

Fixed
- `to_categorical` utility function for unsqueezed labels
- Norms in CLEVER score
- Source code folder name, fixing installation from PyPI

Removed
- Hard-coded architectures for datasets / model types: CNN, ResNet, MLP

0.1

Not secure
This is the initial release of ART. The following features are currently supported:
- `Classifier` interface, supporting a few predefined architectures (CNN, ResNet, MLP) for standard datasets (MNIST, CIFAR10), as well as custom models from users
- `Attack` interface, supporting a few evasion attacks:
  - FGM & FGSM
  - Jacobian saliency map attack
  - Carlini & Wagner L_2 attack
  - DeepFool
  - NewtonFool
  - Virtual adversarial method (to be used for virtual adversarial training)
  - Universal perturbation
- Defences:
  - Preprocessing interface, currently implemented by feature squeezing, label smoothing, and spatial smoothing
  - Adversarial training
- Metrics for measuring robustness: empirical robustness (minimal perturbation), loss sensitivity, and CLEVER score (see the sketch after this list)
- Utilities for loading datasets, some preprocessing, and common maths manipulations
- Scripts for launching basic pipelines for training, testing, and attacking
- Unit tests
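
A minimal sketch of one of the robustness metrics, assuming `classifier` is an ART-wrapped model and `x_test` a NumPy batch; the module path and argument names are assumptions and may differ between ART versions.

```python
# Hypothetical usage of the empirical robustness metric (average minimal
# perturbation found by a given attack). `classifier` is assumed to be an
# ART-wrapped model and `x_test` a NumPy batch of inputs.
from art.metrics import empirical_robustness

score = empirical_robustness(classifier, x_test, attack_name="fgsm",
                             attack_params={"eps_step": 0.1})
print("Empirical robustness under FGSM:", score)
```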
