Adversarial-robustness-toolbox

Latest version: v1.18.2

1.0.0

This is the first major release of the Adversarial Robustness 360 Toolbox (ART v1.0)!

This release generalises ART to support all possible classifier models, in addition to its existing support for neural networks. Furthermore, it generalises the label format to accept index labels as well as one-hot encoded labels, and the input shape to accept, for example, tabular data as input features. This release also adds new model-specific white-box and poisoning attacks and provides new methods to certify and verify the adversarial robustness of neural networks and decision tree ensembles.

Added

- Add support for all classifiers and pipelines of scikit-learn including but not limited to `LogisticRegression`, `SVC`, `LinearSVC`, `DecisionTreeClassifier`, `AdaBoostClassifier`, `BaggingClassifier`, `ExtraTreesClassifier`, `GradientBoostingClassifier`, `RandomForestClassifier`, and `Pipeline`. (47)

- Add support for gradient boosted tree classifier models of `XGBoost`, `LightGBM` and `CatBoost`.

- Add support for TensorFlow v2 (rc0) by introducing a new classifier `TensorFlowV2Classifier` providing support for eager execution and accepting callable models. `KerasClassifier` has been extended to provide support for TensorFlow v2 `tensorflow.keras` Models without eager execution. (66)

- Add support for models of the Gaussian Process framework GPy. (116)

- Add the High-Confidence-Low-Uncertainty (HCLU) adversarial example formulation as an attack on Gaussian Processes. (116)

- Add the Decision Tree attack as a white-box attack for decision tree classifiers. (115)

- Add support for white-box attacks on scikit-learn's `LogisticRegression`, `SVC`, `LinearSVC`, and `DecisionTreeClassifier`, as well as `GPy`, and black-box attacks on all scikit-learn classifiers and XGBoost, LightGBM and CatBoost models.

- Add Randomized Smoothing as wrapper class for neural network classifiers to provide certified adversarial robustness under the L2 norm. (114)

- Add the Clique Method Robustness Verification method for decision-tree-ensemble classifiers and extend it for models of XGBoost, LightGBM, and scikit-learn's `ExtraTreesClassifier`, `GradientBoostingClassifier`, `RandomForestClassifier`. (124)

- Add `BlackBoxClassifier`, which expects only a single Python function as the interface to the classifier's predictions; this is the most general and versatile classifier of ART (see the usage sketch after this list). New tutorial notebooks demonstrate `BlackBoxClassifier` testing the adversarial robustness of remote, deployed classifier models and of the Optical Character Recognition (OCR) engine Tesseract. (123, 152)

- Add the Poisoning Attack for Support Vector Machines with linear, polynomial or radial basis function kernels. (155)
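
The `BlackBoxClassifier` item above only needs a prediction function. A rough usage sketch follows; the import paths (`art.classifiers`, `art.attacks`) reflect the ART 1.0 layout (current releases use `art.estimators.classification` and `art.attacks.evasion`), and `remote_predict` together with the concrete shapes are hypothetical placeholders, so check the release notebooks for the exact signatures.

```python
import numpy as np

from art.classifiers import BlackBoxClassifier
from art.attacks import HopSkipJump


def remote_predict(x):
    """Hypothetical stand-in for a deployed model or OCR engine: send `x` to the
    remote service and return predictions of shape (n_samples, nb_classes)."""
    return np.eye(10)[np.random.randint(0, 10, size=len(x))]


# The single Python function above is the only interface BlackBoxClassifier needs.
classifier = BlackBoxClassifier(
    remote_predict,
    input_shape=(28, 28, 1),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Any black-box attack in ART can now be run against the wrapped model.
attack = HopSkipJump(classifier=classifier, targeted=False, max_iter=10, max_eval=1000)
x_adv = attack.generate(x=np.random.rand(5, 28, 28, 1).astype(np.float32))
```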

Changed

- Introduce a new flexible API for all classifiers with an abstract base class for basic classifiers (minimal functionality to support black-box attacks), and mixins for neural networks, gradient-providing classifiers (to support white-box attacks), and decision-tree-based classifiers.

- Update, extend and introduce new get-started examples and notebook tutorials for all supported frameworks. (47, 140)

- Extend the label format to accept index labels in addition to the already supported one-hot-encoded labels; internally, ART continues to treat labels as one-hot-encoded. This feature allows users of ART to use the label format preferred by their machine learning framework and datasets (see the illustration after this list). (126)

- Change the order of the preprocessing steps of applying defences and standardisation/normalisation in classifiers. Previously, classifiers applied standardisation first, followed by defences. With this release, defences are applied first, followed by standardisation, to enable comparable defence parameters across classifiers with different standardisation/normalisation parameters. (84)

- Use the `batch_size` of an attack as the argument to the `predict` method of its classifier to reduce out-of-memory errors for large models. (105)

- Generalize the classifiers of TensorFlow, Keras, PyTorch, and MXNet by removing assumptions on their output (logits or probabilities). The Boolean parameter `logits` has been removed from Classifier API in methods `predict` and `class_gradient`. The predictions and gradients are now computed at the output of the model without any modifications. (50, 75, 106, 150)

- Rename `TFClassifier` to `TensorFlowClassifier` and keep `TFClassifier` for backward compatibility.
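
A small illustration of the relaxed label format mentioned above: both index labels and one-hot encoded labels can now be passed wherever ART expects labels, for example to a classifier's `fit` or an attack's `generate`; ART converts index labels to one-hot internally. The data below is synthetic, and the calls in the comments are placeholders for any ART classifier or attack.

```python
import numpy as np

# Ten tabular samples with four features each (non-image inputs are now supported too).
x = np.random.rand(10, 4).astype(np.float32)

# The same labels in both accepted formats:
y_index = np.array([0, 2, 1, 2, 0, 1, 1, 2, 0, 2])   # index labels, shape (10,)
y_onehot = np.eye(3)[y_index]                          # one-hot labels, shape (10, 3)

# Either format can be handed to ART; internally it keeps working with one-hot labels:
#   classifier.fit(x, y_index, nb_epochs=10)
#   classifier.fit(x, y_onehot, nb_epochs=10)
#   x_adv = attack.generate(x, y=y_index)
```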

Removed

- Sunset support for Python 2 in preparation for its retirement on Jan 1, 2020. We have stopped running unit tests with Python 2 and do not require new contributions to run with Python 2. We keep existing compatibility code for Python 2 and 3 where possible. (83)

Fixed

- Improve `VirtualAdversarialMethod` by making the computation of the L2 data normalisation more reliable and by raising an exception if it is used with a model providing logits as output; `VirtualAdversarialMethod` currently expects probabilities as output. (120, 157)

0.10.0

This release contains new black-box attacks, detectors, updated attacks and several bug fixes.

Added
* Added HopSkipJump attack, a powerful new black-box attack (80); see the usage sketch after this list
* Added new example script demonstrating the perturbation of a neural network layer between input and output (92)
* Added a notebook demonstrating `BoundaryAttack`
* Added a detector based on Fast Generalized Subset Scanning (100)
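
A minimal sketch of running the new `HopSkipJump` attack. The tiny Keras model below is only a stand-in for any ART classifier; the import paths (`art.classifiers`, `art.attacks`) and the attack arguments shown (`targeted`, `max_iter`, `max_eval`) follow the pre-1.0 layout and should be checked against the 0.10.0 API.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

from art.classifiers import KerasClassifier
from art.attacks import HopSkipJump

# Tiny stand-in model on 4-dimensional tabular inputs; any ART classifier works here.
model = Sequential([Dense(3, activation="softmax", input_shape=(4,))])
model.compile(loss="categorical_crossentropy", optimizer="adam")
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# HopSkipJump only needs prediction access, no gradients: a true black-box attack.
attack = HopSkipJump(
    classifier=classifier,
    targeted=False,   # untargeted: move each sample across the nearest decision boundary
    max_iter=10,      # boundary-walk iterations
    max_eval=1000,    # model queries per gradient estimate
)
x_adv = attack.generate(x=np.random.rand(5, 4).astype(np.float32))
```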

Changed
* Changed Basic Iterative Method (BIM) attack to be a special case of the Projected Gradient Descent attack with `norm=np.inf` and without random initialisation (90); see the sketch after this list
* Reduced calls to method predict in attacks `FastGradientMethod` and `BasicIterativeMethod` to improve performance (70)
* Updated notebooks to download the pretrained models on demand (63, 88)
* Added batch processing to `AdversarialPatch` attack (96)
* Updated the TensorFlow versions used in unit testing on Travis CI to 1.12.3, 1.13.1, and 1.14.0 (94)
* Attacks now accept the argument `batch_size`, which is used in calls to `classifier.predict` within the attack, replacing the default `batch_size=128` of `classifier.predict` (105)
* Changed the order of preprocessing defences and standardisation in classifiers: defences are now applied to the provided input data, and standardisation (the `preprocessing` argument of the classifier) is applied after the defences (84)
* Updated all defences to account for `clip_values` (84)
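
To make the BIM/PGD relationship above concrete, the sketch below constructs both attacks with matching parameters, reusing the `classifier` from the HopSkipJump sketch earlier in these notes; the parameter names (`eps`, `eps_step`, `max_iter`, `num_random_init`, `batch_size`) follow the pre-1.0 attack API and are best confirmed against the 0.10.0 source.

```python
import numpy as np

from art.attacks import BasicIterativeMethod, ProjectedGradientDescent

# `classifier` is any ART classifier, e.g. the KerasClassifier built in the
# HopSkipJump sketch above. With the settings below the two attacks coincide:
# BIM is PGD with an L-infinity norm and no random initialisation.
bim = BasicIterativeMethod(classifier, eps=0.1, eps_step=0.01, max_iter=40, batch_size=64)
pgd = ProjectedGradientDescent(
    classifier,
    norm=np.inf,
    eps=0.1,
    eps_step=0.01,
    max_iter=40,
    num_random_init=0,   # no random restart, matching BIM
    batch_size=64,       # forwarded to classifier.predict inside the attack
)
```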

Removed
* Removed pretrained models in directory `models` used in notebooks and replaced them with on-demand downloads (63, 88)
* Removed argument `patch_shape` from attack `AdversarialPatch` (77)
* Stopped unit testing for Python 2 on Travis CI (83)

Fixed
* Fixed all Pylint and LGTM alerts and warnings (110)
* Fixed broken links in notebooks (63, 88)
* Fixed broken links to ImageNet data in notebook `attack_defense_imagenet` (109)
* Fixed calculation of the attack budget `eps` by accounting for the initial benign sample in the projection onto the eps-ball for random initialisation in `FastGradientMethod` and `BasicIterativeMethod` (85)
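
The eps-ball fix above is easiest to see in the L-infinity case: with random initialisation, both the start point and every subsequent step must stay within `eps` of the original benign sample, not of the perturbed point. A small numpy illustration with synthetic data (no ART calls):

```python
import numpy as np

eps = 0.1
x0 = np.random.rand(5, 4)                                 # benign samples
x_adv = x0 + np.random.uniform(-eps, eps, size=x0.shape)  # random start inside the ball

# ... an attack iteration perturbs x_adv (a random step stands in for the gradient step) ...
x_adv = x_adv + 0.05 * np.sign(np.random.randn(*x0.shape))

# Project back onto the eps-ball centred at the *benign* sample x0,
# so random initialisation and iterative steps share one budget:
x_adv = np.clip(x_adv, x0 - eps, x0 + eps)
assert np.max(np.abs(x_adv - x0)) <= eps + 1e-9
```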

0.9.0

This release contains breaking changes to attacks and defences with regard to setting attributes, removes restrictions on input shapes, which enables the use of feature vectors, and includes several bug fixes.


Added

- implemented pickling for the TensorFlow and PyTorch classifiers (39)
- added example `data_augmentation.py` demonstrating the use of data generators

Changed

- renamed and moved tests (58)
- changed input shape restrictions: classifiers now accept any input shape, for example feature vectors; attacks requiring spatial inputs raise exceptions (49)
- clipping of data ranges is now optional in classifiers, which allows attacks to accept unbounded data ranges (49)
- [Breaking change] class attributes in attacks can no longer be changed with method `generate`; changing attributes is only possible with methods `__init__` and `set_params` (see the sketch after this list)
- [Breaking change] class attributes in defences can no longer be changed with method `__call__`; changing attributes is only possible with methods `__init__` and `set_params`
- resolved an inconsistency in PGD `random_init` with respect to Madry's version
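
A brief sketch of the new contract around attack attributes, using `FastGradientMethod` as an example; `classifier` and `x_test` stand for any ART classifier and input batch, and the attribute names shown follow the pre-1.0 attack API.

```python
from art.attacks import FastGradientMethod

# `classifier` is any ART classifier; `x_test` is a batch of inputs.
# Attack parameters are fixed at construction time ...
attack = FastGradientMethod(classifier, eps=0.1)

# ... and may only be changed afterwards through set_params();
# attributes such as eps can no longer be changed via generate().
attack.set_params(eps=0.3)
x_adv = attack.generate(x_test)
```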

Removed

- deprecated static adversarial trainer `StaticAdversarialTrainer`


Fixed

- Fixed bug in attack ZOO (60)

0.8.0

This release includes **new evasion attacks**, like ZOO, boundary attack and the adversarial patch, as well as the capacity to break non-differentiable defences.

Added
* ZOO black-box attack (class `ZooAttack`)
* Decision boundary black-box attack (class `BoundaryAttack`)
* Adversarial patch (class `AdversarialPatch`)
* Function to estimate gradients in the `Preprocessor` API, along with its implementation for all concrete instances. This makes it possible to break non-differentiable defences (see the sketch after this list)
* Attributes `apply_fit` and `apply_predict` in `Preprocessor` API that indicate if a defence should be used at training and/or test time
* Classifiers are now capable of running a full backward pass through defences
* `save` function for TensorFlow models
* New notebook with usage example for the adversarial patch
* New notebook showing how to synthesize an adversarially robust architecture (see ICLR SafeML Workshop 2019: **Evolutionary Search for Adversarially Robust Neural Network** by M. Sinn, M. Wistuba, B. Buesser, M.-I. Nicolae, M.N. Tran)
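
A rough sketch of what a preprocessing defence looks like under this API. A real defence would extend ART's abstract `Preprocessor` class (its exact location and set of abstract members should be checked against the 0.8.0 source); the class below only mirrors the interface described above, and the bit-depth defence itself is a hypothetical example.

```python
import numpy as np


class BitDepthReduction:
    """Hypothetical non-differentiable defence: quantise inputs to a few levels.
    A real implementation would subclass ART's abstract Preprocessor class."""

    def __init__(self, levels=8):
        self.levels = levels

    @property
    def apply_fit(self):
        return False   # do not apply the defence during training

    @property
    def apply_predict(self):
        return True    # apply the defence at prediction time

    def __call__(self, x, y=None):
        # The defence itself: round inputs in [0, 1] to `levels` discrete values.
        x_def = np.round(x * (self.levels - 1)) / (self.levels - 1)
        return x_def, y

    def estimate_gradient(self, x, grad):
        # Straight-through estimate: treat the quantisation as the identity on the
        # backward pass. This is the hook that lets classifiers run a full backward
        # pass through (i.e. break) a non-differentiable defence.
        return grad
```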

Changed
* [Breaking change] Defences in classifiers are now to be specified as `Preprocessor` instances instead of strings
* [Breaking change] Parameter `random_init` in `FastGradientMethod`, `ProjectedGradientDescent` and `BasicIterativeMethod` has been renamed to `num_random_init` and now allows specifying the number of random initialisations to run before choosing the best attack (see the sketch after this list)
* Possibility to specify batch size when calling `get_activations` from `Classifier` API
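
A minimal sketch of the renamed parameter; `classifier` and `x_test` again stand for any ART classifier and input batch, and the keyword names follow the pre-1.0 attack API.

```python
from art.attacks import FastGradientMethod

# Previously: FastGradientMethod(classifier, eps=0.1, random_init=True)
# Now the number of random restarts inside the eps-ball is explicit; the attack
# keeps the most successful restart per sample before returning.
attack = FastGradientMethod(classifier, eps=0.1, num_random_init=5)
x_adv = attack.generate(x_test)
```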

0.7.0

This release contains a **new poison removal method**, as well as some restructuring of features recently added to the library.

Added
- Poison removal method performing retraining as part of the `ActivationDefence` class (see the usage sketch after this list)
- Example script of how to use the poison removal method
- New module `wrappers` containing features that alter the behaviour of a `Classifier`. These are to be used as wrappers for classifiers and to be passed directly to evasion attack instances.
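
A rough usage sketch of the `ActivationDefence` workflow; `classifier`, `x_train` and `y_train` stand for a trained ART classifier and its (potentially poisoned) training data, the keyword arguments of `detect_poison` are assumptions, and the exact name of the new retraining-based repair method should be taken from the example script mentioned above.

```python
from art.defences import ActivationDefence  # module path as of ART 0.7.0 (assumption)

# `classifier`, `x_train`, `y_train`: a trained ART classifier and its training data.
defence = ActivationDefence(classifier, x_train, y_train)

# Cluster the network activations to flag suspicious (potentially poisoned) samples.
report, is_clean = defence.detect_poison(nb_clusters=2, nb_dims=10, reduce="PCA")

# The new poison removal step then retrains the classifier on the cleaned data;
# see the example script shipped with this release for the exact method and arguments.
```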

Changed
- `ExpectationOverTransformations` has been moved to the `wrappers` module
- `QueryEfficientBBGradientEstimation` has been moved to the `wrappers` module

Removed
- Attacks no longer take an `expectation` parameter (breaking). This has been replaced by a direct call to the attack with an `ExpectationOverTransformations` instance.

Fixed
- Bug in spatial transformations attack: when the attack does not succeed, the original samples are now returned (issue 40, fixed in 42, 43)
- Bug in Keras with loss functions that do not take labels in one-hot encoding (issue 41)
- Bug fix in activation defence against poisoning: incorrect test condition
- Bug fix in DeepFool: inverted stop condition when working with batches
- Import problem in `utils.py`: top-level imports were forcing users to install all supported ML frameworks

0.6.0

Not secure
Added
- PixelDefend defense
- Query-efficient black-box gradient estimates (NES)
- A general wrapper for classifiers that allows changing their behaviour (see `art/classifiers/wrapper.py`)
- 3D plot in visualization
- Saver for `PyTorchClassifier`
- Pickling for `KerasClassifier`
- Representation for all classifiers

Changed
- We now use pretrained models for unit tests (see `art/utils.py`, functions `get_classifier_pt`, `get_classifier_kr`, `get_classifier_tf`)
- Keras models now accept any loss function

Removed
- `Detector` abstract class. Detectors now directly extend `Classifier`

Thanks also to our external contributors!
AkashGanesan
