Data type coverage
The first part of this release extends Xplique's data type coverage. Several methods were modified to broaden the range of supported inputs.
Non-square images
`SobolAttributionMethod` and `HsicAttributionMethod` now support non-square images.
Image explanation shape harmonization
For image explanations, depending on the method, the explanation shape could be either $(n, h, w)$, $(n, h, w, 1)$, or $(n, h, w, 3)$. It was decided to harmonize it to $(n, h, w, 1)$.
Reducer for gradient-based methods
For images, most gradient-based methods produce a value for each channel; however, for consistency, image explanations now have the shape $(n, h, w, 1)$. Gradient-based methods therefore reduce the channel dimension of their image explanations, and the `reducer` parameter selects how, among {`"mean"`, `"min"`, `"max"`, `"sum"`, `None`}. If `None` is given, the channel dimension is left unreduced. The default value is `"mean"` for all methods except `Saliency`, which uses `"max"` to comply with the original paper, and `GradCAM` and `GradCAMPP`, which are not concerned.
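A minimal NumPy sketch of what the different `reducer` options compute on a per-channel explanation (the standalone function below is illustrative only, not Xplique's internal implementation):

```python
import numpy as np

def reduce_channels(explanations, reducer="mean"):
    """Reduce (n, h, w, c) explanations to (n, h, w, 1), or keep them as-is."""
    if reducer is None:
        return explanations  # channel dimension is not reduced
    ops = {"mean": np.mean, "min": np.min, "max": np.max, "sum": np.sum}
    return ops[reducer](explanations, axis=-1, keepdims=True)

# a (n, h, w, 3) per-channel explanation, e.g. from a gradient-based method
expl = np.random.rand(8, 32, 32, 3)
assert reduce_channels(expl, "mean").shape == (8, 32, 32, 1)
assert reduce_channels(expl, "max").shape == (8, 32, 32, 1)   # Saliency default
assert reduce_channels(expl, None).shape == (8, 32, 32, 3)    # no reduction
```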
Time series
Xplique was initially designed for images, but it also supports attribution methods for tabular data and, now, time series data.
Xplique considers data with:
- 4 dimensions as images.
- 3 dimensions as time series.
- 2 dimensions as tabular data.
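The dimension-based rule above can be sketched as a simple dispatch on the number of dimensions (the function below is illustrative, not Xplique's actual code):

```python
import numpy as np

def infer_data_type(inputs):
    """Illustrative dispatch mirroring Xplique's dimension-based rule."""
    ndim = np.asarray(inputs).ndim
    return {4: "images", 3: "time series", 2: "tabular"}.get(ndim, "unsupported")

assert infer_data_type(np.zeros((16, 224, 224, 3))) == "images"   # (n, h, w, c)
assert infer_data_type(np.zeros((16, 48, 7))) == "time series"    # (n, t, features)
assert infer_data_type(np.zeros((16, 13))) == "tabular"           # (n, features)
```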
Tutorial
To show how to use Xplique on time series a new tutorial was designed: [Attributions: Time Series and Regression](https://colab.research.google.com/drive/1h0lThbcP5d2VKtRxwLG8z7KC8PExcVIA).
Plot
The function `xplique.plots.plot_timeseries_attributions` was modified to match `xplique.plots.plot_attributions` API. Here is an example from the tutorial on temperature forecasting for the next 24 hours based on weather data from the last 48 hours:
<img width="521" alt="image" src="https://github.com/deel-ai/xplique/assets/90199266/f4ad2060-378a-4138-8609-33ce95078215">
Methods
`Rise`, `Lime`, and `KernelShap` now support time series natively.
Overview of covered data types and tasks
| **Attribution Method** | Type of Model | Images | Time Series and Tabular Data |
| :--------------------- | :----------------------- | :------------: | :--------------------------: |
| Deconvolution | TF | C✔️ OD❌ SS❌ | C✔️ R✔️ |
| Grad-CAM | TF | C✔️ OD❌ SS❌ | ❌ |
| Grad-CAM++ | TF | C✔️ OD❌ SS❌ | ❌ |
| Gradient Input | TF, PyTorch** | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| Guided Backprop | TF | C✔️ OD❌ SS❌ | C✔️ R✔️ |
| Integrated Gradients | TF, PyTorch** | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| Kernel SHAP | TF, PyTorch**, Callable* | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| Lime | TF, PyTorch**, Callable* | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| Occlusion | TF, PyTorch**, Callable* | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| Rise | TF, PyTorch**, Callable* | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| Saliency | TF, PyTorch** | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| SmoothGrad | TF, PyTorch** | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| SquareGrad | TF, PyTorch** | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| VarGrad | TF, PyTorch** | C✔️ OD✔️ SS✔️ | C✔️ R✔️ |
| Sobol Attribution | TF, PyTorch** | C✔️ OD✔️ SS✔️ | 🔵 |
| Hsic Attribution | TF, PyTorch** | C✔️ OD✔️ SS✔️ | 🔵 |
| FORGrad enhancement | TF, PyTorch** | C✔️ OD✔️ SS✔️ | ❌ |
TF : TensorFlow compatible
C : [Classification](https://deel-ai.github.io/xplique/latest/api/attributions/classification/) | R : [Regression](https://deel-ai.github.io/xplique/latest/api/attributions/regression/)
OD : [Object Detection](https://deel-ai.github.io/xplique/latest/api/attributions/object_detection/) | SS : [Semantic Segmentation](https://deel-ai.github.io/xplique/latest/api/attributions/semantic_segmentation/)
\* : See the [Callable documentation](https://deel-ai.github.io/xplique/latest/api/attributions/callable/)
** : See the [Xplique for PyTorch documentation](https://deel-ai.github.io/xplique/latest/api/attributions/pytorch/), and the [**PyTorch models**: Getting started](https://colab.research.google.com/drive/1bMlO29_0K3YnTQBbbyKQyRfo8YjvDbhe) notebook.
✔️ : Supported by Xplique | ❌ : Not applicable | 🔵 : Work in Progress
Metrics
Naturally, metrics now support time series too.
---
Bugs correction
The second part of this release resolves pending issues: 102, 123, 127, 128, 131, and 137.
Memory problems
Several of the reported issues concerned memory management.
SmoothGrad, VarGrad, and SquareGrad (issue 137)
`SmoothGrad`, `VarGrad`, and `SquareGrad` now use online statistics to compute explanations, which allows batched inference. Furthermore, their implementation was refactored around a `GradientStatistic` abstraction. Usage is unchanged.
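The idea behind online statistics is to accumulate running moments batch by batch instead of holding all gradient samples in memory at once. A minimal sketch (the class name and structure are illustrative, not Xplique's `GradientStatistic` API): with the identity this yields a SmoothGrad-style mean, with squared gradients a SquareGrad-style mean, and a VarGrad-style variance follows from the two running moments.

```python
import numpy as np

class OnlineStatistic:
    """Running mean and mean-of-squares, accumulated batch by batch."""

    def __init__(self):
        self.count = 0
        self.mean = None      # running E[x]
        self.mean_sq = None   # running E[x^2]

    def update(self, batch):
        batch = np.asarray(batch, dtype=np.float64)
        n = len(batch)
        batch_mean = batch.mean(axis=0)
        batch_mean_sq = np.square(batch).mean(axis=0)
        if self.mean is None:
            self.mean, self.mean_sq = batch_mean, batch_mean_sq
        else:
            w = n / (self.count + n)  # weight of the incoming batch
            self.mean = (1 - w) * self.mean + w * batch_mean
            self.mean_sq = (1 - w) * self.mean_sq + w * batch_mean_sq
        self.count += n

    def variance(self):
        # Var[x] = E[x^2] - E[x]^2
        return self.mean_sq - np.square(self.mean)

rng = np.random.default_rng(0)
grads = rng.normal(size=(100, 4))   # 100 noisy gradient samples
stat = OnlineStatistic()
for i in range(0, 100, 25):         # feed in batches of 25
    stat.update(grads[i:i + 25])
assert np.allclose(stat.mean, grads.mean(axis=0))
assert np.allclose(stat.variance(), grads.var(axis=0))
```

Only the running moments are kept in memory, so the batch size no longer bounds the number of perturbed samples.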
MuFidelity (issue 137)
The `MuFidelity` metric suffered from the same memory problem as the three previous methods; it was also solved.
HsicAttributionMethod
This method had a different memory problem: the `batch_size` for the model was used correctly, but computing the estimator created a tensor of size `grid_size**2 * nb_design**2`. For big images and/or small objects in images, `grid_size` needs to be increased, and for the estimator to converge, `nb_design` should be increased accordingly, which leads to out-of-memory errors.
Thus an `estimator_batch_size` parameter (distinct from the initial `batch_size`) was introduced to batch over the `grid_size**2` dimension. Its default value is `None`, which preserves the method's previous behavior, but when an out-of-memory error occurs, setting `estimator_batch_size` smaller than `grid_size**2` reduces the memory cost of the method.
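The principle can be sketched as follows: rather than evaluating all `grid_size**2` cells at once, the estimator processes them in chunks of `estimator_batch_size`, so peak memory scales with the chunk size instead of the full grid. This is a toy sketch, with a simple mean standing in for the actual HSIC estimator:

```python
import numpy as np

def batched_estimator(design_outputs, grid_size, estimator_batch_size=None):
    """Compute a per-cell score in chunks over the grid_size**2 dimension.

    `design_outputs` has shape (grid_size**2, nb_design**2); the real method
    would build each chunk on the fly instead of slicing a full tensor.
    """
    n_cells = grid_size ** 2
    batch = estimator_batch_size or n_cells  # None -> previous, unbatched behavior
    scores = np.empty(n_cells)
    for start in range(0, n_cells, batch):
        chunk = design_outputs[start:start + batch]  # (<=batch, nb_design**2)
        scores[start:start + batch] = chunk.mean(axis=1)
    return scores

rng = np.random.default_rng(1)
outputs = rng.normal(size=(6 ** 2, 8 ** 2))  # grid_size=6, nb_design=8
full = batched_estimator(outputs, grid_size=6)                         # one shot
chunked = batched_estimator(outputs, grid_size=6, estimator_batch_size=5)
assert np.allclose(full, chunked)  # batching does not change the result
```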
Other issues
Metrics input types (issues 102 and 128)
Inputs and targets are now sanitized to NumPy arrays.
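A minimal sketch of such sanitization, accepting lists, NumPy arrays, or framework tensors that expose a `.numpy()` method (the helper below is illustrative, not the exact Xplique function):

```python
import numpy as np

def sanitize(inputs, targets):
    """Coerce inputs and targets to NumPy arrays."""
    def to_numpy(x):
        # TF/PyTorch tensors expose .numpy(); everything else goes through asarray
        return x.numpy() if hasattr(x, "numpy") else np.asarray(x)
    return to_numpy(inputs), to_numpy(targets)

x, y = sanitize([[0.1, 0.2], [0.3, 0.4]], [0, 1])
assert isinstance(x, np.ndarray) and isinstance(y, np.ndarray)
assert x.shape == (2, 2) and y.shape == (2,)
```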
Feature visualization latent dtype (issue 131)
In issue 131, there was a conflict between the model's internal dtype and Xplique's dtype. We made sure that the conflicting computation uses the model's internal dtype.
Other corrections
Naturally, other problems were reported to us outside of the issue tracker or discovered by the team; we addressed these as well.
Some refactoring
`Lime` was refactored; usage is unchanged.
Small fixes
In `HsicAttributionMethod` and `SobolAttributionMethod`, the documentation of the `perturbation_function` did not match the actual code; it was corrected.
Craft still contained some leftover prints. Since they may be useful, the Craft methods that print now take a `verbose` parameter.