Captum

Latest version: v0.7.0

0.3.0

* LayerActivation and LayerGradientXActivation now support computing attributions for multiple layers simultaneously. (PR 456).
* Neuron attribution methods now support providing a callable to select or aggregate multiple neurons for attribution, as well as slices to select a range of neurons. (PR 490, 495). The parameter name neuron_index has been deprecated and is replaced by neuron_selector, which supports either indices or a callable (see the sketch after this list).
* Feature ablation and feature permutation now allow attribution with respect to multiple batch-aggregate scalars (e.g., loss) simultaneously (PR 425).
* Most attribution methods now support a multiply_by_inputs argument. For attribution methods which include a multiplier of inputs or inputs - baselines, this argument selects whether these multipliers should be incorporated or left out to obtain marginal attributions. (PR 432)
* Methods accepting internal batch size were updated to generate batches lazily rather than splitting an expanded input tensor, eliminating memory constraints when experimenting with a large number of steps. (PR 333).
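
A minimal sketch of the new `neuron_selector` and `multiply_by_inputs` arguments is shown below. The toy model, layer choice, and selected neuron range are placeholders for illustration, not values prescribed by Captum.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, NeuronGradient

# Toy model used only for illustration.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()
inputs = torch.randn(4, 8, requires_grad=True)

# neuron_selector as a callable: aggregate a slice of neurons in the first
# linear layer instead of selecting a single neuron index.
neuron_attr = NeuronGradient(model, model[0]).attribute(
    inputs,
    neuron_selector=lambda layer_out: layer_out[:, 2:6].sum(dim=1),
)

# multiply_by_inputs=False leaves out the (inputs - baselines) multiplier,
# yielding marginal attributions.
ig = IntegratedGradients(model, multiply_by_inputs=False)
marginal_attr = ig.attribute(inputs, target=0)
```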

**Captum Insights**

* New attribution methods in Captum Insights:
  * Feature Ablation (PR 319)
  * Occlusion (PR 369)

Bug Fixes

* Providing target as a list with inputs on CUDA devices now works appropriately. (Issue 316, PR 317)
* DeepLift issues with DataParallel models, particularly when providing additional forward args or multiple targets, have been fixed. (PR 335)
* Hooks added within an attribution method were previously not being removed if the attribution method encountered an exception before removing the hook. All hooks are now removed even if an exception is raised during attribution. (PR 340)
* LayerDeepLift was fixed to avoid applying hooks on the target layer when attributing layer output; the previous behavior caused incorrect results or errors with some non-linearities. (Issue 382, PR 390, 415)
* Non-leaf tensor gradient warning when using NoiseTunnel with Saliency has been fixed. (Issue 421, PR 426)
* Text visualization helpers now have an option to display a legend. (Issue 401, PR 403)
* Image visualization helpers were fixed to normalize correctly even when the outlier threshold is close to 0 (Issue 393, PR 458); a sketch of the relevant helper follows this list.
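
For reference, a minimal sketch of the image visualization helper mentioned above; the attribution and image arrays are random placeholders, and keyword defaults may differ slightly between releases.

```python
import numpy as np
from captum.attr import visualization as viz

# Placeholder attribution map and image in HWC format.
attr = np.random.randn(224, 224, 3)
img = np.random.rand(224, 224, 3)

# outlier_perc controls the normalization threshold discussed in the fix above.
fig, ax = viz.visualize_image_attr(
    attr, img,
    method="blended_heat_map", sign="absolute_value",
    outlier_perc=2, show_colorbar=True,
)
```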

0.2.0

The second release, v0.2.0, of Captum adds a variety of new attribution algorithms as well as additional tutorials, type hints, and Google Colab support for Captum Insights.

New Attribution Algorithms

The following new attribution algorithms are provided, which can be applied to any type of PyTorch model, including DataParallel models. While the first release focused primarily on gradient-based attribution methods such as Integrated Gradients, the new algorithms include perturbation-based methods, marked by ^ below. We also add new attribution methods designed primarily for convolutional networks, denoted by * below. All attribution methods share a consistent API, making it easy to switch between them; a short sketch of this shared pattern follows the lists below.

Attribution of model output with respect to the input features

1. Guided Backprop *
2. Deconvolution *
3. Guided GradCAM *
4. Feature Ablation ^
5. Feature Permutation ^
6. Occlusion ^
7. Shapley Value Sampling ^

Attribution of model output with respect to the layers of the model

1. Layer GradCAM
2. Layer Integrated Gradients
3. Layer DeepLIFT
4. Layer DeepLIFT SHAP
5. Layer Gradient SHAP
6. Layer Feature Ablation ^

Attribution of neurons with respect to the input features

1. Neuron DeepLIFT
2. Neuron DeepLIFT SHAP
3. Neuron Gradient SHAP
4. Neuron Guided Backprop *
5. Neuron Deconvolution *
6. Neuron Feature Ablation ^

^ Denotes Perturbation-Based Algorithm. These methods compute attribution by evaluating the model on perturbed versions of the input as opposed to using gradient information.
\* Denotes attribution method designed primarily for convolutional networks.
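
The shared API can be sketched as follows. The small convolutional model, window shapes, and target index are illustrative placeholders, not values prescribed by Captum.

```python
import torch
import torch.nn as nn
from captum.attr import FeatureAblation, LayerGradCam, Occlusion

# Small convolutional model used only for illustration.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()
inputs = torch.randn(2, 3, 32, 32)

# Perturbation-based methods evaluate the model on modified inputs.
ablation_attr = FeatureAblation(model).attribute(inputs, target=1)
occlusion_attr = Occlusion(model).attribute(
    inputs, sliding_window_shapes=(3, 8, 8), strides=(3, 4, 4), target=1
)

# Layer attribution (GradCAM-style) for the first convolution.
gradcam_attr = LayerGradCam(model, model[0]).attribute(inputs, target=1)
```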


New Tutorials

We have added new tutorials demonstrating Captum on BERT models, on regression problems, and with perturbation-based methods. These tutorials include:

* Interpreting question answering with BERT
* Interpreting regression models using Boston House Prices Dataset
* Feature Ablation on Images

![Screen Shot 2020-03-04 at 3 35 25 PM](https://user-images.githubusercontent.com/11067177/75950339-b6333480-5e5d-11ea-8161-ee05c743331c.png)

Type Hints

The Captum code base is now fully typed with Python type hints and type checked using mypy. Users can now accurately type check code that uses Captum.

Bug Fixes and Minor Features

* All Captum methods now support in-place modules and operations. (Issue 156)
* Computing convergence delta was fixed to work appropriately on CUDA devices. (Issue 163)
* A ReLU flag was added to Layer GradCAM to optionally apply a ReLU operation to the returned attributions. (Issue 179)
* All layer and neuron attribution methods now support attribution with respect to either the input or output of a module, based on the `attribute_to_layer_input` and `attribute_to_neuron_input` flags (see the sketch after this list).
* All layer attribution methods now support modules with multiple outputs.
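
A minimal sketch of the `attribute_to_layer_input` flag; the model and layer choice are placeholders.

```python
import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()
inputs = torch.randn(4, 8)

lig = LayerIntegratedGradients(model, model[2])

# Attribution with respect to the layer's output (the default) ...
out_attr = lig.attribute(inputs, target=0)
# ... versus the layer's input, selected via attribute_to_layer_input.
in_attr = lig.attribute(inputs, target=0, attribute_to_layer_input=True)
```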

**Captum Insights**

* Captum Insights now works on Google Colab. (Issue 116)
* Captum Insights can also be launched as a Jupyter Notebook widget.
* New attribution methods in Captum Insights:
  * Deconvolution
  * Deep Lift
  * Guided Backprop
  * Input X Gradient
  * Saliency

0.1.0

We just released our first version of the PyTorch Captum library for model interpretability!

Highlights

This first release, v0.1.0, supports a number of gradient-based attribution algorithms as well as Captum Insights, a visualization tool for model debugging and understanding.

Attribution Algorithms

The following general purpose gradient-based attribution algorithms are provided. These can be applied to any type of PyTorch model and input features, including image, text, and multimodal.

1. Attribution of model output with respect to the input features
   1. **Saliency**
   2. **InputXGradient**
   3. **IntegratedGradients**
   4. **DeepLift**
   5. **DeepLiftShap**
   6. **GradientShap**
2. Attribution of model output with respect to the layers of the model
   1. **LayerActivation**
   2. **LayerGradientXActivation**
   3. **LayerConductance**
   4. **InternalInfluence**
3. Attribution of neurons with respect to the input features
   1. **NeuronGradient**
   2. **NeuronIntegratedGradients**
   3. **NeuronConductance**
4. Attribution algorithm + noisy sampling
   1. **NoiseTunnel**

   NoiseTunnel helps reduce noise in the attributions produced by other attribution algorithms, using techniques such as smoothgrad, smoothgrad_sq, and vargrad.
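
A minimal sketch of wrapping a gradient-based method in NoiseTunnel; the model is a placeholder, and the keyword names for the noise type and sample count have varied across Captum versions, so treat them as assumptions.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, NoiseTunnel

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()
inputs = torch.randn(4, 8)

# Smoothgrad: average attributions over several noisy copies of the input.
nt = NoiseTunnel(IntegratedGradients(model))
attr = nt.attribute(inputs, nt_type="smoothgrad", nt_samples=10, target=0)
```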

**Batch and Data Parallel Optimizations**

Some algorithms, such as Integrated Gradients, expand input tensors internally, so forward and backward computations must scale to those expanded tensors. To support this, an `internal_batch_size` argument can be passed to the `attribute` method: the library chunks the expanded tensors into pieces of that size, runs forward and backward passes for each chunk separately, and combines the results after computing gradients; a brief sketch follows the list below.

The algorithms that support batched optimization are:

1. IntegratedGradients
2. LayerConductance
3. InternalInfluence
4. NeuronConductance
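
A brief sketch of `internal_batch_size`; the step count and chunk size below are arbitrary illustrative values.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()
inputs = torch.randn(16, 8)

# With n_steps=200 the expanded tensor would hold 16 * 200 rows at once;
# internal_batch_size caps how many rows go through each forward/backward pass.
ig = IntegratedGradients(model)
attr = ig.attribute(inputs, target=0, n_steps=200, internal_batch_size=64)
```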


PyTorch data parallel models are also supported across all Captum algorithms, allowing users to take advantage of multiple GPUs when applying interpretability algorithms.
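
Wrapping a model in `torch.nn.DataParallel` before attribution is a drop-in change; the sketch below assumes at least one CUDA device is available.

```python
import torch
import torch.nn as nn
from captum.attr import Saliency

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3)).cuda()
dp_model = nn.DataParallel(model)  # replicates the model across available GPUs

inputs = torch.randn(4, 8, device="cuda", requires_grad=True)
attr = Saliency(dp_model).attribute(inputs, target=0)
```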

More details on these algorithms can be found on our website at [captum.ai/docs/algorithms](https://www.captum.ai/docs/algorithms)

Captum Insights

Captum Insights provides these algorithms in an interactive Jupyter notebook-based tool for model debugging and understanding. It can be used embedded within a notebook or run as a standalone application.

**Features:**

1. Visualize attribution across sampled data for classification models
2. Multimodal support for text, image, and general features in a single model
3. Filtering and debugging specific sets of classes and misclassified examples
4. Jupyter notebook support for easy model and dataset modification (see the sketch below)

Insights is built with standard web technologies including JavaScript, CSS, React, Yarn and Flask.
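
A rough sketch of embedding Captum Insights in a notebook. The class and module paths follow later Captum releases (`captum.insights.attr_vis.features` in particular), and the model, classes, and transforms are placeholders, so treat the exact paths and arguments as assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from captum.insights import AttributionVisualizer, Batch
from captum.insights.attr_vis.features import ImageFeature  # path may differ by release

# Placeholder two-class image classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

def batches():
    # Tiny iterable of Batch objects standing in for a real dataset.
    yield Batch(inputs=torch.randn(4, 3, 32, 32), labels=torch.randint(0, 2, (4,)))

visualizer = AttributionVisualizer(
    models=[model],
    score_func=lambda out: F.softmax(out, dim=1),
    classes=["cat", "dog"],
    features=[ImageFeature("Photo", baseline_transforms=[lambda x: x * 0],
                           input_transforms=[])],
    dataset=batches(),
)
visualizer.render()  # renders the interactive widget inside a Jupyter notebook
```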
