This release introduces operators for attribution methods, allowing attribution methods to be applied to a wider variety of use cases. For more detail, see PRs 124 and 125.
## What changes in the use of Xplique?
For regular users, this release should be transparent. However, if you want to apply attribution methods to non-classification tasks, it will now be easier.
## What is an operator?
The idea is as follows: to define an attribution method, we need any function that takes in the model (`f`), a series of inputs (`x`), and labels (`y`) and returns a scalar in `R`.
**`g(f, x, y) -> R`**
This function, called an operator, can be defined by the user (or provided by us) and then serves as a common interface for all attribution methods, which will call it (or compute its gradient). In other words, the goal is for attribution methods to hold this function as an attribute (concretely, this amounts to `self.inference_function = operator` at some point).
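To make the interface concrete, here is a small self-contained sketch. The linear model, the weights, and the finite-difference helper are all assumptions for illustration (numpy instead of TensorFlow, and numerical differentiation instead of autodiff); it shows an operator `g(f, x, y) -> R` and how a gradient-based method could differentiate it with respect to the input:

```python
import numpy as np

def g(f, x, y):
    # Operator: one scalar score per sample, g(f, x, y) -> R.
    return np.sum(f(x) * y, axis=-1)

# Toy linear "model" (an assumption for this sketch, not Xplique code).
W = np.array([[2.0, -1.0],
              [0.0, 3.0]])

def f(x):
    return x @ W

def finite_diff_grad(operator, f, x, y, eps=1e-5):
    # A gradient-based attribution method only needs d operator / d x;
    # here we approximate it numerically instead of using autodiff.
    grad = np.zeros_like(x)
    for i in range(x.shape[1]):
        xp, xm = x.copy(), x.copy()
        xp[:, i] += eps
        xm[:, i] -= eps
        grad[:, i] = (operator(f, xp, y) - operator(f, xm, y)) / (2 * eps)
    return grad

x = np.array([[1.0, 1.0]])
y = np.array([[1.0, 0.0]])  # one-hot target: explain class 0
print(finite_diff_grad(g, f, x, y))  # ~ W[:, 0] for a linear model
```

For a linear model this gradient is exactly the weight column of the explained class, which is a handy sanity check for any operator you write.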
## Some Examples of Operators
1) The most trivial operator is perhaps the **classification** one, it consists of taking a particular logit to explain a class. In the case where the model `f: R^n -> R^c` with `c` being the number of classes and `y` being one-hot vectors, then our operator simply boils down to:
```python
def g(f, x, y):
    return tf.reduce_sum(f(x) * y, -1)
```
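As a quick sanity check, with a one-hot `y` this operator simply returns the logit of the target class. A minimal numpy analogue (the classifier weights are hypothetical, used only to illustrate the selection):

```python
import numpy as np

# Toy stand-in for a classifier f: R^n -> R^c with c = 3 classes
# (illustrative weights, not from any real model).
W = np.array([[1.0, 0.0, -1.0],
              [0.5, 2.0, 0.0]])

def f(x):
    return x @ W  # logits, shape (batch, 3)

def g(f, x, y):
    # Classification operator: keep only the target-class logit.
    return np.sum(f(x) * y, axis=-1)

x = np.array([[1.0, 1.0]])
y = np.array([[0.0, 1.0, 0.0]])  # one-hot target: class 1
print(g(f, x, y))                # equals f(x)[:, 1]
```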
2) Regarding **regression**, with a model `f: R^n -> R^m` with `m` outputs and `y` being the model's initial predictions (the targets to explain), the operator will be:
```python
def g(f, x, y):
    return tf.reduce_sum(tf.abs(f(x) - y), axis=-1)
```
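For instance, with a toy regressor (a numpy sketch; the model and values are illustrative assumptions), the score is zero on the unperturbed input and grows as the input is perturbed, which is exactly what a perturbation-based attribution method needs:

```python
import numpy as np

def g(f, x, y):
    # Regression operator: L1 distance to the original predictions.
    return np.sum(np.abs(f(x) - y), axis=-1)

# Hypothetical regressor f: R^2 -> R^2.
def f(x):
    return x * 2.0

x = np.array([[1.0, 2.0]])
y = f(x)                  # the model's initial predictions
print(g(f, x, y))         # 0 on the unperturbed input
print(g(f, x * 0.9, y))   # grows as the input is perturbed
```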
3) Regarding **bounding-box** explanations, an operator has already been defined in the literature with the D-RISE article. It consists of combining three scores, the IoU, the objectness, and the box classification score, to form... *a scalar*!
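A sketch of such a score (the box format, the helper names, and the plain product are assumptions for illustration, not D-RISE's exact formulation): for a reference box and a predicted box, multiply the IoU, the objectness, and the class score into a single scalar:

```python
import numpy as np

def iou(box_a, box_b):
    # Boxes as [x1, y1, x2, y2]; standard intersection-over-union.
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def detection_score(ref_box, pred_box, objectness, class_score):
    # The product of the three scores is the scalar the operator returns.
    return iou(ref_box, pred_box) * objectness * class_score

ref = [0.0, 0.0, 2.0, 2.0]
pred = [1.0, 0.0, 3.0, 2.0]  # half-overlapping box
print(detection_score(ref, pred, objectness=0.9, class_score=0.8))
```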
4) To explain **concepts**, for example with a model `f = c ∘ g`, where `a = g(x)` are intermediate activations and a factorizer allows interpreting `a` in a reduced-dimension space `u = factorizer(a)`, we can very well define the following operator:
```python
def g(c, u, y):
    a = factorizer.inverse(u)
    y_pred = c(a)
    return tf.reduce_sum(y_pred * y, -1)
```
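To make this runnable, here is a numpy sketch with a hypothetical linear factorizer (projection plus pseudo-inverse reconstruction) and a toy classifier head `c`; none of these names or weights come from Xplique:

```python
import numpy as np

class Factorizer:
    # Hypothetical linear factorizer: u = a @ P, inverse via pseudo-inverse.
    def __init__(self, P):
        self.P = P
        self.P_inv = np.linalg.pinv(P)
    def __call__(self, a):
        return a @ self.P
    def inverse(self, u):
        return u @ self.P_inv

# Toy classifier head c: R^4 -> R^2 (illustrative weights).
Wc = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [1.0, -1.0],
               [0.5, 0.5]])
def c(a):
    return a @ Wc

# Project onto the first two activation dimensions.
factorizer = Factorizer(np.array([[1.0, 0.0],
                                  [0.0, 1.0],
                                  [0.0, 0.0],
                                  [0.0, 0.0]]))

def g(c, u, y):
    a = factorizer.inverse(u)  # back to activation space
    y_pred = c(a)
    return np.sum(y_pred * y, axis=-1)

a = np.array([[1.0, 2.0, 0.0, 0.0]])
u = factorizer(a)              # reduced representation
y = np.array([[1.0, 0.0]])     # explain class 0
print(g(c, u, y))
```

The operator only sees the reduced representation `u`, so attributions computed through it live in concept space rather than input space.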
As you can see, many cases can be handled in this manner!
## Implementation
Regarding implementation, a series of operators is available in `commons/operators`, and the most important part -- the operator plug -- is located in the `attributions/base.py` file. As discussed with fel-thomas, AntoninPoche, and lucashervier, the PyTorch implementation is not far off and would be located here!
Once this was done, we simply added the `operator` argument to all the attribution methods defined in the library, and the related metrics naturally inherited the parameter.
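A minimal sketch of what such a plug can look like (the class and function names here are hypothetical, not Xplique's actual code): the method stores the operator as `self.inference_function` and every score goes through it, so swapping the operator changes the task being explained without touching the method:

```python
import numpy as np

def classification_operator(f, x, y):
    # Default operator: score of the target class.
    return np.sum(f(x) * y, axis=-1)

class AttributionMethod:
    # Hypothetical base class illustrating the "operator plug".
    def __init__(self, model, operator=None):
        self.model = model
        self.inference_function = operator or classification_operator

    def score(self, x, y):
        # Every attribution computation funnels through the operator.
        return self.inference_function(self.model, x, y)

# Swap in a regression-style operator: same method, different task.
def regression_operator(f, x, y):
    return np.sum(np.abs(f(x) - y), axis=-1)

model = lambda x: x * 2.0
method = AttributionMethod(model, operator=regression_operator)
print(method.score(np.array([[1.0, 2.0]]), np.array([[2.0, 5.0]])))
```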
Finally, the two metrics InsertionTS and DeletionTS were removed, as they are now redundant: with the new implementation, metrics are no longer limited to classification.