Xplique


1.2.0

New features

Semantic Segmentation

See related [documentation](https://deel-ai.github.io/xplique/latest/api/attributions/semantic_segmentation/) and [tutorial](https://colab.research.google.com/drive/1AHg7KO1fCOX5nZLGZfxkZ2-DLPPdSfbX).

```python
explainer = Method(model, operator=xplique.Tasks.SEMANTIC_SEGMENTATION)
```


A new operator was designed for the semantic segmentation task, along with the corresponding documentation and tutorial. It is used similarly to classification and regression, as shown in the example above, but the `model` specification changes and the definition of the `targets` parameter differs (to build them, the user should rely on the `xplique.utils_functions.segmentation` set of functions).
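
As a purely conceptual sketch of what such an operator boils down to (a simplification, not the library's actual code), with `targets` acting as a mask that selects the pixels and the class to explain:

```python
import tensorflow as tf

def segmentation_operator_sketch(model, inputs, targets):
    # Conceptual sketch: `targets` is assumed to have the same shape as the model
    # output (n, h, w, c) and to be non-zero only on the zone/class to explain.
    logits = model(inputs)
    return tf.reduce_sum(logits * targets, axis=(1, 2, 3))  # one score per input
```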


Object Detection

See related [documentation](https://deel-ai.github.io/xplique/latest/api/attributions/object_detection/) and [tutorial](https://colab.research.google.com/drive/1X3Yq7BduMKqTA0XEheoVIpOo3IvOrzWL).

```python
explainer = Method(model, operator=xplique.Tasks.OBJECT_DETECTION)
```


The object detection API was adapted to the operator API: an object detection operator was designed to enable white-box methods for object detection, and the corresponding documentation and tutorial were introduced. Here too, the `targets` and `model` specifications differ from the classification ones.

Therefore, the `BoundingBoxExplainer` is now deprecated.



Documentation

Merge model, operator, and API description page into one

As fel-thomas highlighted in the remarks of 132, the documentation was too fragmented; furthermore, a lot of information was redundant between those pages, and they were interdependent. Hence, the choice was made to merge the model, operator, and API description pages into one. We believe this will simplify the use of the library.

Create task-related pages

As mentioned above, two tasks (Object Detection and Semantic Segmentation) were introduced in the documentation, and their complexity warranted dedicated documentation pages. However, it was not consistent to have such pages only for those two tasks. Therefore, the information about Classification and Regression was extracted from the common API page to create two more task-specific pages. In total, four task-specific pages were added to the documentation.


Bug fixes

Regression

The regression operator was set to the MAE function in the previous release to allow the explanation of multi-output regression. However, such a function is not differentiable at zero, so gradient-based methods were not working.

Hence, the operator was reverted to its previous behavior (a sum of the targeted outputs). Nonetheless, this operator is limited to single-output explanations, so for multi-output regression each output should be explained individually.
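
Conceptually, the restored behavior amounts to something like the sketch below (a simplification, not the exact library code), where a one-hot `targets` selects the single output to explain:

```python
import tensorflow as tf

def regression_operator_sketch(model, inputs, targets):
    # Sum of the targeted outputs: with a one-hot `targets`, this is just the value
    # of the selected output, which is differentiable (unlike the MAE at zero).
    return tf.reduce_sum(model(inputs) * targets, axis=-1)
```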

1.1.0

New Features

MaCo

Introduction of a recent method for scaling up feature visualization on state-of-the-art deep models: **MaCo**. This method is described in the following arXiv paper: [https://arxiv.org/pdf/2306.06805.pdf](https://arxiv.org/pdf/2306.06805.pdf). It involves fixing the amplitude in the Fourier spectrum and only optimizing the phase during the optimization of a neuron/channel/layer.

It comes with the associated documentation, tests, and notebook.
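
As a rough sketch of the underlying idea (a simplification under assumed shapes, not the library's implementation): the image is parameterized by its Fourier phase only, the magnitude spectrum being kept fixed.

```python
import tensorflow as tf

# Fixed magnitude spectrum (the paper constrains it to a natural-image-like spectrum;
# ones are used here only to keep the sketch short) and a trainable phase.
magnitude = tf.ones((224, 224 // 2 + 1))
phase = tf.Variable(tf.random.uniform((224, 224 // 2 + 1), -3.14, 3.14))

def image_from_phase():
    # Only `phase` receives gradients during the feature-visualization optimization.
    spectrum = tf.complex(magnitude * tf.cos(phase), magnitude * tf.sin(phase))
    return tf.signal.irfft2d(spectrum)  # (224, 224) image
```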

FORGrad

Introduction of FORGrad (paper here: [https://arxiv.org/pdf/2307.09591.pdf](https://arxiv.org/pdf/2307.09591.pdf)). In a nutshell, this method consists of filtering the noise in the explanations to make them more interpretable.

It comes with the associated documentation, tests, and notebook.
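
The general flavor, as a hedged sketch (the actual method also tunes the filtering per attribution method, which is not shown here): attenuate the high spatial frequencies of an attribution map.

```python
import numpy as np

def low_pass_sketch(explanation, cutoff=20):
    # `explanation` is a (h, w) attribution map; keep only the low spatial frequencies.
    h, w = explanation.shape
    fy = np.minimum(np.arange(h), h - np.arange(h))
    fx = np.minimum(np.arange(w), w - np.arange(w))
    mask = (fy[:, None] ** 2 + fx[None, :] ** 2) <= cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.fft2(explanation) * mask))
```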

PyTorch Wrapper

Xplique now provides a convenient wrapper for PyTorch models that works with most attribution methods and is compatible with the metrics.

It comes with the associated documentation, tests, and notebook. It also introduces its own CI pipeline to test cross-version compatibility between TensorFlow and PyTorch.
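
A minimal usage sketch (the import path and signature below are assumptions; refer to the wrapper documentation for the exact API):

```python
import torch
from xplique.attributions import Saliency
from xplique.wrappers import TorchWrapper      # assumed import path

torch_model = ...                              # any trained torch.nn.Module, in eval mode
wrapped_model = TorchWrapper(torch_model, device="cuda" if torch.cuda.is_available() else "cpu")

# the wrapped model can then be used like a TF model by the attribution methods
explainer = Saliency(wrapped_model)
explanations = explainer(inputs, labels)       # NumPy arrays or TF tensors
```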

Introduce a Tasks Enum

Add the `Tasks` enum, which includes the operators for the classification and regression tasks. It is now also possible to select an existing `operator` by its name.
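
For instance (with `Saliency` used purely as an example method):

```python
import xplique
from xplique.attributions import Saliency

# equivalent ways of selecting the regression operator
explainer = Saliency(model, operator=xplique.Tasks.REGRESSION)
explainer = Saliency(model, operator="regression")
```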

Add an activation parameter for metrics

While we recommend using the logits to generate explanations, it might be more relevant to look at the probability of a prediction (after a softmax or sigmoid layer) when computing metrics, for instance when the metric measures 'a drop in probability for the classification of an input occluded in its most relevant parts'. Thus, we introduce this option when building a metric: `activation` can be either `None`, `'softmax'`, or `'sigmoid'`.
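
For example (using the `Deletion` metric as an illustration):

```python
from xplique.metrics import Deletion

# evaluate explanations against a drop in probability rather than in logits
metric = Deletion(model, inputs, targets, activation="softmax")
score = metric(explanations)
```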

Bug fixes

`regression_operator`

The operator computed a sum instead of a mean (for the MAE). This has been fixed.

HSIC attribution

- Fix the documentation of the `__call__` of `HSICEstimator`
- Add `tf.function`

Documentation

- Enhance the documentation overall
- Add documentation for the `operator`
- Add explanations concerning the model and the API
- Add a CI pipeline for the documentation

1.0.0

This release introduces operators for attribution methods, which allow attribution methods to be applied to a larger variety of use cases. For more detail, one can refer to PRs 124 and 125.

What changes in the use of Xplique?

For regular users, this release should be transparent. However, if you want to apply attribution methods to non-classification tasks, it will now be easier.

What is an operator?

The idea is as follows: to define an attribution method, we need any function that takes in the model (`f`), a series of inputs (`x`), and labels (`y`) and returns a scalar in `R`.

**`g(f, x, y) -> R`**

This function, called an operator, can be defined by the user (or by us) and provides a common interface for all attribution methods, which will call it (or compute its gradient). As you can see, the goal is for attribution methods to have this function as an attribute (in more detail, this gives `self.inference_function = operator` at some point).
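
Concretely, once such a function is defined, it can be passed to any attribution method through its `operator` argument, along the lines of the sketch below (with `Saliency` as an arbitrary example):

```python
import tensorflow as tf
from xplique.attributions import Saliency

def my_operator(model, inputs, targets):
    # any function returning one scalar score per input will do
    return tf.reduce_sum(model(inputs) * targets, axis=-1)

explainer = Saliency(model, operator=my_operator)
explanations = explainer(inputs, targets)
```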

Some Examples of Operators

1) The most trivial operator is perhaps the **classification** one: it consists of taking a particular logit to explain a class. In the case of a model `f: R^n -> R^c`, with `c` the number of classes and `y` one-hot vectors, our operator simply boils down to:

```python
def g(f, x, y):
    return tf.reduce_sum(f(x) * y, -1)
```


2) Regarding **regression**, with a model `f: R^n -> R^m`, `m` being the number of outputs and `y` being the initial predictions of the model, the operator will be:

```python
def g(f, x, y):
    return tf.reduce_sum(tf.abs(f(x) - y), axis=-1)
```


3) Regarding **bounding boxes**, an operator has already been defined in the literature with the D-RISE article. It consists of combining three scores (IoU, objectness, and box classification) to form... *a scalar*!
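
A very simplified sketch of such a score, for a single target box per image (not D-RISE's or Xplique's exact formulation), could look like this:

```python
import tensorflow as tf

def iou(boxes, target_box):
    # boxes: (n_detections, 4) as (x_min, y_min, x_max, y_max); target_box: (4,)
    x1 = tf.maximum(boxes[:, 0], target_box[0])
    y1 = tf.maximum(boxes[:, 1], target_box[1])
    x2 = tf.minimum(boxes[:, 2], target_box[2])
    y2 = tf.minimum(boxes[:, 3], target_box[3])
    inter = tf.maximum(x2 - x1, 0.0) * tf.maximum(y2 - y1, 0.0)
    area_boxes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    area_target = (target_box[2] - target_box[0]) * (target_box[3] - target_box[1])
    return inter / (area_boxes + area_target - inter)

def detection_score_sketch(boxes, objectness, class_probs, target_box, target_class):
    # localization * objectness * classification, keeping the best-matching detection
    scores = iou(boxes, target_box) * objectness * tf.reduce_sum(class_probs * target_class, -1)
    return tf.reduce_max(scores)
```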

4) To explain **concepts**, for example with a model `f = c ⚬ g`, where `a = g(x)` and a factorizer allows interpreting `a` in a reduced-dimension space `u = factorizer(a)`, we can very well define the following operator:

```python
def g(c, u, y):
    a = factorizer.inverse(u)
    y_pred = c(a)
    return tf.reduce_sum(y_pred * y, -1)
```


As you can see, many cases can be handled in this manner!

Implementation

Regarding implementation, a series of operators is available in `commons/operators`, and the most important part, the operator plug, is located in the `attributions/base.py` file. As discussed with fel-thomas, AntoninPoche, and lucashervier, the PyTorch implementation is not far off and would be located here!

Once this was done, we simply added the argument to all the attribution methods defined in the library, and some related metrics naturally inherited the parameter.

Finally, the two metrics `InsertionTS` and `DeletionTS` were deleted as they are now redundant: with the new implementation, metrics are not limited to classification.

0.4.3

0.2alpha

0.1alpha
