4. **Continued Fraction Nets (CoFrNets):** The CoFrNet explainer is a directly interpretable model inspired by continued fractions, particularly suitable for tabular and text data; a conceptual sketch of the continued-fraction structure follows the links below.
[Docs](https://aix360.readthedocs.io/en/latest/die.html#aix360.algorithms.cofrnet.CoFrNet.CoFrNet_Explainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/cofrnet/cofrnet_example.ipynb) [Paper](https://proceedings.neurips.cc/paper_files/paper/2021/hash/b538f279cb2ca36268b23f557a831508-Abstract.html)
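As a rough illustration of the idea (not the library's implementation), a single CoFrNet "ladder" evaluates a continued fraction whose terms are linear functions of the input. The NumPy sketch below uses illustrative weights and an assumed epsilon guard against division by zero:

```python
import numpy as np

def cofrnet_ladder(x, weights, biases, eps=1e-6):
    # One continued-fraction ladder: f(x) = a_0 + 1/(a_1 + 1/(a_2 + ...)),
    # where each term a_i = w_i . x + b_i is linear in the input.
    terms = [w @ x + b for w, b in zip(weights, biases)]
    value = terms[-1]
    for a in reversed(terms[:-1]):
        sign = 1.0 if value >= 0 else -1.0
        value = a + 1.0 / (sign * max(abs(value), eps))  # eps guard (an assumption of this sketch)
    return value

rng = np.random.default_rng(0)
x = rng.normal(size=5)             # one tabular instance with 5 features
weights = rng.normal(size=(3, 5))  # one linear term per ladder level
biases = rng.normal(size=3)
print(cofrnet_ladder(x, weights, biases))
```

Because every term is linear in the input, the learned weights can be read off directly, which is what makes the model interpretable by design.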
5. **Nearest Neighbor Contrastive Explainer:** Nearest Neighbor Contrastive Explainer is a model-agnostic explanation method that provides exemplar-based, feasible (i.e., realizable) contrastive instances for tabular data. Given a model, an exemplar/representative dataset, and a query point, it finds the closest point within the representative dataset whose model prediction differs from that of the query point. The closeness metric is defined using an autoencoder and yields a robust and faithful neighbourhood even for high-dimensional feature spaces or noisy datasets. The method can also be used in a model-free setting, where model predictions are replaced by (user-provided) ground truth. The core search is sketched below the links.
[Docs](https://aix360.readthedocs.io/en/latest/lbbe.html#aix360.algorithms.nncontrastive.nncontrastive.NearestNeighborContrastiveExplainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/nncontrastive/nncontrastive_demo.ipynb)
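The essence of the search can be sketched in a few lines of NumPy. Here `predict` and `encode` are stand-ins for the user's model and the autoencoder's encoder (both assumptions of this sketch; the actual explainer API is linked above):

```python
import numpy as np

def nearest_contrastive(query, dataset, predict, encode):
    # Closest exemplar (in autoencoder latent space) whose model
    # prediction differs from the query's prediction.
    q_label = predict(query[None, :])[0]
    labels = predict(dataset)
    candidates = dataset[labels != q_label]           # keep differently-predicted exemplars only
    z_query = encode(query[None, :])                  # latent embedding of the query
    z_cand = encode(candidates)
    dists = np.linalg.norm(z_cand - z_query, axis=1)  # closeness measured in latent space
    return candidates[np.argmin(dists)]
```

Measuring distance in the latent space rather than the raw feature space is what keeps the returned contrastive instance realistic rather than an arbitrary nearby point.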
6. **Grouped CE Explainer:** GroupedCE is a local, model-agnostic explainer that generates grouped Conditional Expectation (CE) plots for a given instance and a set of features (an extension of classical Individual Conditional Expectation to higher dimensions). The feature set can be either a user-defined subset of the input covariates or the top K features ranked by the importance scores of a global explainer. The explainer produces 3D plots showing the model output as pairs of features vary simultaneously. If a single feature is provided, the explainer produces standard 2D ICE plots, where only one feature is perturbed at a time. The underlying grid evaluation is sketched below the links.
[Docs](https://aix360.readthedocs.io/en/latest/lbbe.html#aix360.algorithms.gce.gce.GroupedCEExplainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/gce/gce_demo.ipynb)
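A minimal sketch of the grid evaluation behind one grouped CE surface (function and parameter names here are illustrative, not the explainer's API; a batch predict function is assumed):

```python
import numpy as np

def grouped_ce_surface(model_predict, instance, i, j, grid_i, grid_j):
    # Vary features i and j jointly over a grid while holding the rest of
    # `instance` fixed; returns the model-output surface for a 3D plot.
    surface = np.empty((len(grid_i), len(grid_j)))
    for a, vi in enumerate(grid_i):
        for b, vj in enumerate(grid_j):
            x = instance.copy()
            x[i], x[j] = vi, vj
            surface[a, b] = model_predict(x[None, :])[0]
    return surface
```

The resulting `surface` can then be rendered as a 3D plot (e.g., with matplotlib's `plot_surface`), which is the grouped counterpart of a standard 2D ICE curve.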
7. **Time Series Explainability Algorithms**
The current version of the toolkit has been expanded to support time series data, which occurs in numerous application domains such as asset management and monitoring, supply chain, finance, and IoT. The toolkit includes the following new time series explainability algorithms: TSSaliencyExplainer, TSLimeExplainer, and TSICEExplainer.
- **Time Series Saliency (TSSaliency) Explainer:** TSSaliency implements a model-agnostic integrated gradient method for time series prediction models. An integrated gradient map is an axiomatic saliency measure obtained by integrating model sensitivity (gradient) over a path from a base signal to the target signal. In the time series context, the base signal is a constant signal with the average strength of each variate. The sample paths are generated as convex (affine) combinations of the base signal and the target signal, and the gradients are estimated using zeroth-order Monte Carlo sampling. A conceptual sketch follows the links below.
[Docs](https://aix360.readthedocs.io/en/latest/tslbbe.html#aix360.algorithms.tssaliency.tssaliency.TSSaliencyExplainer) [Notebook (univariate & multivariate)](https://github.com/Trusted-AI/AIX360/blob/master/examples/tssaliency/)
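Conceptually, the computation looks like the following NumPy sketch; the parameter names and the central-difference smoothing scheme are assumptions of this illustration, not the explainer's API:

```python
import numpy as np

def ts_integrated_gradient(f, x, n_alphas=20, n_mc=50, sigma=0.01, rng=None):
    # Integrated gradient of a scalar model f over the path from a constant
    # base signal (per-variate mean of x) to x itself. x has shape
    # (time, variates); gradients are estimated without model internals,
    # via zeroth-order Monte Carlo (central differences on random directions).
    rng = rng or np.random.default_rng(0)
    base = np.broadcast_to(x.mean(axis=0), x.shape)   # constant signal at average strength
    ig = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, n_alphas):
        point = base + alpha * (x - base)             # convex combination along the path
        grad = np.zeros_like(x)
        for _ in range(n_mc):
            u = rng.normal(size=x.shape)
            delta = (f(point + sigma * u) - f(point - sigma * u)) / (2 * sigma)
            grad += delta * u                         # zeroth-order gradient estimate
        ig += grad / n_mc
    return (x - base) * ig / n_alphas                 # Riemann sum approximating the path integral
```

The returned map has the same shape as the input, assigning a saliency score to every timestep of every variate.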
- **TSICEExplainer:** TSICE generalises the ICE (Individual Conditional Expectation) algorithm to time series data. The traditional ICE algorithm uses independent feature variations (varying one feature while fixing the others) to analyze the effect of a feature on the model's predictions; this independence assumption does not hold for time series data. TSICE instead uses derived features computed from a group of observations over a contiguous time range. Rather than exploring features independently, TSICE explores the feature space via structured time series perturbations that do not violate the correlational structure within the data. These perturbations result in multiple instances of the time series on which forecasts are produced. TSICE produces two explanations: (1) an explanation based on the perturbations around the selected time window and the variation in the forecast, and (2) an explanation based on the derived features and the variation of the model response from the base response. The perturbation step is sketched below the links.
[Docs](https://aix360.readthedocs.io/en/latest/tslbbe.html#aix360.algorithms.tsice.tsice.TSICEExplainer) [Notebook](https://github.com/Trusted-AI/AIX360/blob/master/examples/tsice/tsice_demo.ipynb)
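A rough sketch of the perturb-and-forecast step; the windowing and noise scheme here are simplifications assumed for illustration (the actual explainer offers structured perturbation mechanisms, see the docs):

```python
import numpy as np

def tsice_forecasts(forecast, series, window=24, n_perturbations=25, rng=None):
    # Perturb only the most recent `window` observations, keeping the rest
    # of the series intact, and collect the forecast for each perturbed copy.
    rng = rng or np.random.default_rng(0)
    scale = series[-window:].std()
    outs = []
    for _ in range(n_perturbations):
        perturbed = series.copy()
        perturbed[-window:] += rng.normal(scale=scale, size=window)  # local additive noise
        outs.append(forecast(perturbed))
    return np.asarray(outs)  # the spread of these forecasts drives the explanation
```

Because only a contiguous window is perturbed, the temporal structure of the rest of the series is preserved, unlike classical ICE's one-feature-at-a-time variation.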
- **Time Series Local Interpretable Model-agnostic (TSLime) Explainer:** TSLime is a generalisation of the popular LIME explainability algorithm and computes local, model-agnostic explanations for predictions on time series data. TSLime uses time series perturbation techniques and explains the behaviour of a model around a given time series sample by fitting a linear surrogate model on those perturbations, as sketched below the links.
[Docs](https://aix360.readthedocs.io/en/latest/tslbbe.html#aix360.algorithms.tslime.tslime.TSLimeExplainer) [Notebook (univariate and multivariate)](https://github.com/Trusted-AI/AIX360/tree/master/examples/tslime)
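The surrogate fit can be sketched as follows: a simplified univariate illustration with plain Gaussian perturbations (the perturbation schemes in the actual explainer are more structured):

```python
import numpy as np

def tslime_weights(f, x, n_perturbations=200, sigma=0.1, rng=None):
    # Fit a linear surrogate to model f around the time series x (shape: (time,));
    # the surrogate's coefficients act as per-timestep importance weights.
    rng = rng or np.random.default_rng(0)
    X = x[None, :] + sigma * rng.normal(size=(n_perturbations, x.size))  # local perturbations
    y = np.array([f(row) for row in X])
    A = np.column_stack([X, np.ones(n_perturbations)])  # design matrix with intercept
    coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coefs[:-1]  # per-timestep surrogate weights
```

As in LIME, the surrogate is only trusted locally: the weights describe the model's behaviour in the neighbourhood of `x`, not globally.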
**Selective Installation of Explainability Algorithms and Upgrade to Python 3.10**
To expedite the addition of new algorithms and avoid conflicts among package dependencies across algorithms, the toolkit now supports selective installation of algorithms. Installation instructions are available [here](https://github.com/Trusted-AI/AIX360/tree/master#installation). For example, after cloning the repository, one can install a subset of algorithms with `pip install -e .[rbm,dipvae,tssaliency]`. Most algorithms are now compatible with Python 3.10.