Captum

Latest version: v0.7.0

0.7.0

- Multi-task attribution for Shapley Values and Shapley Value Sampling is now supported, allowing users to get attributions for multiple target outputs simultaneously (PR 1173)
- LayerGradCam now supports returning attributions for each channel independently without summing across channels (PR 1086, thanks to dzenanz for this contribution)
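
A minimal sketch of per-channel attributions with LayerGradCam; the flag name `attr_dim_summation=False` and the toy convolutional model below are assumptions used for illustration, not guaranteed API details:

```py
import torch
import torch.nn as nn
from captum.attr import LayerGradCam

# toy convolutional classifier standing in for a real model
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
inputs = torch.randn(2, 3, 32, 32)

layer_gc = LayerGradCam(model, model[0])
# assumed flag: disable summation to keep one attribution map per channel
per_channel_attrs = layer_gc.attribute(inputs, target=0, attr_dim_summation=False)
print(per_channel_attrs.shape)  # expected (2, 8, 32, 32) rather than (2, 1, 32, 32)
```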

Bug Fixes

- Visualization utilities now use the keyword argument `visible` to ensure compatibility with Matplotlib 3.7 (PR 1118)
- The default visualization mode in `visualize_timeseries_attr` has been fixed to appropriately utilize `overlay_individual` (PR 1152, thanks to teddykoker for this contribution)

0.6.0

The Captum v0.6.0 release introduces a new feature `StochasticGates`. This release also enhances Influential Examples and includes a series of other improvements & bug fixes.

Stochastic Gates
Stochastic Gates is a technique to enforce sparsity by approximating L0 regularization. It can be used for network pruning and feature selection. Because directly optimizing the L0 norm is a non-differentiable combinatorial problem, Stochastic Gates approximates it by using continuous probability distributions (e.g., Concrete, Gaussian) as smoothed Bernoulli distributions, so the optimization can be reparameterized in terms of the distributions' parameters. See the following papers for more details:

- [Learning Sparse Neural Networks through L0 Regularization](https://arxiv.org/abs/1712.01312)
- [Feature Selection using Stochastic Gates](https://arxiv.org/abs/1810.04247)

Captum provides two Stochastic Gates implementations using different distributions as smoothed Bernoulli, `BinaryConcreteStochasticGates` and `GaussianStochasticGates`. They are available under `captum.module`, a new subpackage collecting neural network building blocks that are useful for model understanding. A usage example:

```py
import torch
from captum.module import GaussianStochasticGates

n_gates = 5  # number of gates
stg = GaussianStochasticGates(n_gates, reg_weight=0.01)

inputs = torch.randn(3, n_gates)  # mock inputs with a batch size of 3

gated_inputs, reg = stg(inputs)  # gate the inputs
loss = model(gated_inputs)  # use the gated inputs in the downstream network

# optimize the sparsity regularization together with the model loss
loss += reg

...

# inspect the learned gate values to see how the model is using the inputs
print(stg.get_gate_values())
```


Influential Examples
Influential Examples is a new function pillar introduced in the previous release. This release continues to focus on it and brings many improvements to the existing `TracInCP` family. Some of the changes are incompatible with the previous version. The details are listed below, followed by a short usage sketch:

- Support loss functions with `mean` reduction in `TracInCPFast` and `TracInCPFastRandProj` (https://github.com/pytorch/captum/pull/913)
- `TracInCP` classes add a new argument `show_progress` to optionally display progress bars for the computation (https://github.com/pytorch/captum/pull/898, https://github.com/pytorch/captum/pull/1046)
- `TracInCP` provides a new public method `self_influence` which computes the self influence scores among the examples in the given data. `influence` can no longer compute self_influence scores and the argument `inputs` cannot be `None` (https://github.com/pytorch/captum/pull/994, https://github.com/pytorch/captum/pull/1069, https://github.com/pytorch/captum/pull/1087, https://github.com/pytorch/captum/pull/1072)
- Previous constructor argument `influence_src_dataset` in `TracInCP` is renamed to `train_dataset` (https://github.com/pytorch/captum/pull/994)
- Add GPU support to `TracInCPFast` and `TracInCPFastRandProj` (https://github.com/pytorch/captum/pull/969)
- `TracInCP` and `TracInCPFastRandProj` provide a new public method `compute_intermediate_quantities` which computes “embedding” vectors for examples in the given data (https://github.com/pytorch/captum/pull/1068)
- `TracInCP` classes support a new optional argument `test_loss_fn` for use cases where different losses are used for training and testing examples (https://github.com/pytorch/captum/pull/1073)
- Revised the interface of the method `influence`. Removed the arguments `unpack_inputs` and `target`. Now, the `inputs` argument must be a `tuple` where the last element is the label (https://github.com/pytorch/captum/pull/1072)
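
To illustrate the revised interface, here is a minimal, hedged sketch. The toy model, dataset, checkpoint paths, and `load_checkpoint` helper below are placeholders, and the exact signatures (e.g., the loss reduction requirements and the `self_influence` arguments) may differ slightly from this sketch:

```py
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset
from captum.influence import TracInCP

# toy model and training data standing in for a real setup
model = nn.Linear(10, 2)
train_dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))

# placeholder checkpoint paths saved during training
checkpoint_paths = ["checkpoints/ckpt_1.pt", "checkpoints/ckpt_2.pt"]

def load_checkpoint(model, path):
    # hypothetical loader: restores the weights and returns the learning rate
    model.load_state_dict(torch.load(path))
    return 1.0

tracin = TracInCP(
    model,
    train_dataset,  # formerly `influence_src_dataset`
    checkpoint_paths,
    checkpoints_load_func=load_checkpoint,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),  # reduction choice is an assumption
    batch_size=16,
)

# `inputs` is now a tuple whose last element is the label
test_batch = (torch.randn(5, 10), torch.randint(0, 2, (5,)))
scores = tracin.influence(test_batch, show_progress=True)

# self influence scores are now computed with the dedicated public method
self_scores = tracin.self_influence(show_progress=True)
```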

Notable Changes
- LRP now throws an error when it detects that the model reuses any modules (https://github.com/pytorch/captum/pull/911)
- Fixed a bug where the concept order changed in `TCAV`’s output (https://github.com/pytorch/captum/pull/915, https://github.com/pytorch/captum/issues/909)
- Fixed a data type issue when using Captum’s built-in SGD linear models in `Lime` (https://github.com/pytorch/captum/pull/938, https://github.com/pytorch/captum/issues/910)
- All submodules are now accessible under the top-level `captum` module, so users can `import captum` and access everything underneath it, e.g., `captum.attr` (https://github.com/pytorch/captum/pull/912, https://github.com/pytorch/captum/pull/992, https://github.com/pytorch/captum/issues/680)
- Added a new attribution visualization utility for time series data (https://github.com/pytorch/captum/pull/980)
- Improved version detection to fix some compatibility issues caused by dependencies’ versions (https://github.com/pytorch/captum/pull/940, https://github.com/pytorch/captum/pull/999)
- Fixed an index bug in the tutorial Interpret regression models using Boston House Prices Dataset (https://github.com/pytorch/captum/pull/1014, https://github.com/pytorch/captum/issues/1012)
- Refactored `FeatureAblation` and `FeaturePermutation` to verify the output type of `forward_func` and its shape when `perturbation_per_eval > 1` (https://github.com/pytorch/captum/pull/1047, https://github.com/pytorch/captum/pull/1049, https://github.com/pytorch/captum/pull/1091)
- Updated the [Housing Regression tutorial](https://captum.ai/tutorials/House_Prices_Regression_Interpret) to use the California housing dataset (https://github.com/pytorch/captum/pull/1041)
- Improved the error message of invalid input types when the required data type is `tensor` or `tuple[tensor]` (https://github.com/pytorch/captum/pull/1083)
- Switched from module `backward_hook` to tensor `forward_hook` for many attribution algorithms that need tensor gradients, such as `DeepLift` and `LayerLRP`, so these algorithms can now support models with in-place modules (https://github.com/pytorch/captum/pull/979, https://github.com/pytorch/captum/issues/914)
- Added an optional `mask` argument to `FGSM` and `PGD` adversarial attacks under `captum.robust` to specify which elements are perturbed (https://github.com/pytorch/captum/pull/1043)
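
As a rough illustration of the new `mask` argument, here is a minimal sketch using `FGSM`; the toy model, loss function, and the convention that 1 marks perturbable elements are assumptions, not guaranteed API details:

```py
import torch
import torch.nn as nn
from captum.robust import FGSM

# toy classifier standing in for a real model
model = nn.Linear(4, 3)
inputs = torch.tensor([[1.0, 2.0, 3.0, 4.0]])

# assumption: 1.0 marks elements that may be perturbed, 0.0 keeps them fixed
mask = torch.tensor([[1.0, 1.0, 0.0, 0.0]])

fgsm = FGSM(model, loss_func=nn.CrossEntropyLoss())
perturbed = fgsm.perturb(
    inputs, epsilon=0.1, target=torch.tensor([0]), mask=mask
)
print(perturbed)
```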

0.5.0

The Captum v0.5.0 release introduces a new function pillar, Influential Examples, with a few code improvements and bug fixes.

Influential Examples

Influential Examples implements the method [TracInCP](https://arxiv.org/abs/2002.08484). It calculates the influence score of a given training example on a given test example, which approximately answers the question “if the given training example were removed from the training data, how much would the model’s loss on the given test example change?”. TracInCP can be used for:
- identifying **proponents/opponents**, which are the training examples with the most positive/negative influence on a given test example
- identifying mis-labelled data

Captum currently offers the following implementations of TracInCP (a usage sketch follows the list):
* `TracInCP` - Computes influence scores using gradients at all specified layers. Can be used for identifying proponents/opponents, and identifying mis-labelled data. Both computations take time linear in training data size.
* `TracInCPFast` - Like TracInCP, but computes influence scores using only gradients in the last fully-connected layer, and is expedited using a computational trick.
* `TracInCPFastRandProj` - Version of TracInCPFast which is specialized for computing proponents/opponents. In particular, pre-processing enables computation of proponents / opponents in constant time. The tradeoff is the linear time and memory required for pre-processing. Random projections can be used to reduce memory usage. This class should not be used for identifying mis-labelled data.
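
A minimal sketch of identifying proponents with `TracInCPFastRandProj` follows; the toy model, checkpoint path, `load_checkpoint` helper, argument names such as `projection_dim`, and the return format of `influence` are assumptions based on the description above rather than exact API details:

```py
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset
from captum.influence import TracInCPFastRandProj

# toy classifier and training data standing in for a real setup
model = nn.Sequential(nn.Linear(10, 8), nn.ReLU(), nn.Linear(8, 2))
train_dataset = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))

def load_checkpoint(model, path):
    # hypothetical loader: restores the weights and returns the learning rate
    model.load_state_dict(torch.load(path))
    return 1.0

tracin_fast_rp = TracInCPFastRandProj(
    model,
    model[2],                   # the final fully-connected layer
    train_dataset,              # `influence_src_dataset` in this release
    ["checkpoints/ckpt_1.pt"],  # placeholder checkpoint path
    checkpoints_load_func=load_checkpoint,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    batch_size=16,
    projection_dim=4,           # random projections to reduce memory usage
)

# top-3 proponents (most positive influence) for a batch of test examples
test_inputs = torch.randn(5, 10)
test_targets = torch.randint(0, 2, (5,))
proponent_indices, influence_scores = tracin_fast_rp.influence(
    test_inputs, test_targets, k=3, proponents=True
)
```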

A tutorial demonstrating the usage is available at https://captum.ai/tutorials/TracInCP_Tutorial
<img width="768" alt="influential example" src="https://user-images.githubusercontent.com/5113450/156647765-7b3c72a8-ea76-4d99-b735-4e73ba44efb5.png">

Notable Changes

* Minimum required PyTorch version becomes **v1.6.0** (PR 876)
* Enabled argument `model_id` in `TCAV` and removed `AV` from public concept module (PR 811)
* Added a new configurable argument `attribute_to_layer_input` in `TCAV`, applied to both layer activation and attribution (PR 864)
* Renamed the argument `raw_input` to `raw_input_ids` in the visualization util `VisualizationDataRecord` (PR 804)
* Added support for a configurable `eps` argument in `DeepLift` (PR 835); see the sketch after this list
* Captum now leverages `register_full_backward_hook` introduced in PyTorch v1.8.0. Attribution to neuron output in `NeuronDeepLift`, `NeuronGuidedBackprop`, and `NeuronDeconvolution` is deprecated and will be removed in the next major release v0.6.0 (PR 837)
* Fixed an issue where Lime and KernelShap failed to handle empty tensor inputs like `tensor([[],[],[]])` (PR 812)
* Fixed a bug where `visualization_transform` of `ImageFeature` in Captum Insights was not applied (PR 871)
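
A minimal sketch of the configurable `eps`, assuming it is a constructor argument of `DeepLift` that controls the threshold below which input/baseline differences are treated as zero (both assumptions, since the release note only says `eps` is configurable):

```py
import torch
import torch.nn as nn
from captum.attr import DeepLift

# toy model; module-based ReLU so DeepLift can attach its hooks
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
inputs = torch.randn(3, 10)
baselines = torch.zeros(3, 10)

# assumed placement: eps passed to the constructor
dl = DeepLift(model, eps=1e-6)
attributions = dl.attribute(inputs, baselines=baselines, target=0)
print(attributions.shape)  # (3, 10)
```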

0.4.1

The Captum v0.4.1 release includes three new tutorials, a few code improvements and bug fixes.

New Tutorials

Robustness tutorial:

* Applying robustness attacks and metrics to a CIFAR model and dataset

Concept tutorials:

* TCAV for image classification with the GoogLeNet model
* TCAV for NLP sentiment analysis model

Improvements

* Reduced unnecessary reliance on `NumPy` across the codebase by replacing such usages with `PyTorch` equivalents when possible (PRs 714, 755, 760)
* Enhanced the error message for missing modules rules in LRP (PR 727)
* Switched the formatter from `black` + `isort` to `ufmt` and reformatted the code accordingly (PR 739)
* Generalized implementation of `captum._utils.av` for TCAV to use and refactored TCAV to simplify the creation of datasets used to train concept models (PR 747)

Bug Fixes

* Fixed the device error when using TCAV on CUDA (Issues 719, 720, 721, PR 725)
* Captum Insights now caches a subset of batches from the dataset for reuse, fixing the issue of no data being shown after iterating through all batches (PR 728)
* Corrected the loading of reference word embedding in tutorial “Interpreting Bert Part 1” (PR 743)
* Renamed the util `save_div`’s argument `default_value` to `default_denom` and unified its behaviors for different denominator types (Issue 654, PR 751)

0.4.0

* Neuron conductance now supports a selector function (in addition to providing a neuron index) to select the target neuron for attribution, which enables support for layers with input / output as a tuple of tensors (PR 602).
* Lime now supports a generator to be returned by the perturbation function, rather than only a single sample, to better support enumeration of perturbations for interpretable model training (PR 619).
* KernelSHAP has been improved to perform weighted sampling of vectors for interpretable model training, rather than uniformly sampling vectors and weighting only when training. This change scales better with larger numbers of features, since weights for larger numbers of features were previously leading to arithmetic underflow (PR 619).
* A new option show_progress has been added to all perturbation-based attribution methods, which shows a progress bar to help users track progress of attribution computation (Issue 630, PR 581).
* A new normalize flag has been added to the infidelity evaluation metric, which normalizes and scales the infidelity score (Issue 613, PR 639). A sketch of both options follows this list.
* All perturbation-based attribution methods now support boolean input tensors (PR 666).
* Lime’s default regularization for Lasso regression has been reduced from 1.0 to 0.01 to avoid frequent issues with attribution results being 0 (Issue 679, PR 689).
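
The following minimal sketch shows the two options mentioned above: show_progress on a perturbation-based method and normalize on the infidelity metric. The toy model, inputs, and perturbation function are placeholders, and the perturbation function's return convention (perturbations plus perturbed inputs) is stated here as an assumption:

```py
import torch
import torch.nn as nn
from captum.attr import FeatureAblation
from captum.metrics import infidelity

# toy model and inputs standing in for a real setup
model = nn.Sequential(nn.Linear(10, 5), nn.ReLU(), nn.Linear(5, 2))
inputs = torch.randn(4, 10)

# show_progress displays a progress bar while perturbations are evaluated
ablation = FeatureAblation(model)
attributions = ablation.attribute(inputs, target=0, show_progress=True)

def perturb_fn(inputs):
    # assumed convention: return the perturbations and the perturbed inputs
    noise = 0.01 * torch.randn_like(inputs)
    return noise, inputs - noise

# normalize=True scales the infidelity score as described above
score = infidelity(
    model, perturb_fn, inputs, attributions, target=0, normalize=True
)
print(score)
```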

Bug Fixes

* Gradient-based attribution methods have been fixed to not zero previously stored grads, which avoids warnings related to accessing grad of non-leaf tensors (Issue 421, 491, PR 597).
* Captum tests were previously included in Captum distributions unnecessarily; tests are no longer packaged with Captum releases (Issue 629, PR 635).
* Captum’s dependency on matplotlib in Conda environments has been changed to matplotlib-base, since pyqt is not used in Captum (Issue 644, PR 648).
* Layer attribution methods now set gradient requirements only starting at the target layer rather than at the inputs, which ensures support for models with int or boolean input tensors (PR 647, 643).
* Lime and Kernel SHAP int overflow issues (with sklearn interpretable model training) have been resolved, and all interpretable model inputs / outputs are converted to floats prior to training (PR 649).
* Parameter names that were renamed in v0.3 for NoiseTunnel, Kernel Shap, and Lime no longer lead to deprecation warnings; the deprecated names have been removed in 0.4.0 (PR 558).

0.3.1

* LayerIntegratedGradients now supports computing attributions for multiple layers simultaneously (PR 532); see the sketch after this list.
* NoiseTunnel now supports an internal batch size to split noised inputs into batches and appropriately aggregate results (PR 555).
* visualize_text now has an option return_html to export the visualization as HTML code (PR 548).
* A utility wrapper was added to allow computing attributions for intermediate layers and inputs simultaneously (PR 534).
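
A minimal sketch of multi-layer attribution with LayerIntegratedGradients follows; the toy model is a placeholder, and the assumption that passing a list of layers returns one attribution tensor per layer is based on the description above:

```py
import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(10, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = Net()
inputs = torch.randn(4, 10)

# passing a list of layers computes attributions for all of them in one call
lig = LayerIntegratedGradients(model, [model.fc1, model.fc2])
attrs = lig.attribute(inputs, target=0)
print([a.shape for a in attrs])  # one attribution tensor per layer
```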

Captum Insights

* Attributions for multiple models can be compared in Captum Insights (PR 551).
* Various improvements to reduce package size of Captum Insights (PR 556 and 562).

![image](https://user-images.githubusercontent.com/11067177/105610180-5cd93a00-5d7c-11eb-868c-1254e9436a74.png)


Bug Fixes

* Some parameter names were renamed in NoiseTunnel, Kernel Shap, and Lime to avoid conflicting names when combining Noise Tunnel or metrics with attribution methods. Deprecated arguments now raise warnings and will be removed in 0.4.0 (PR 558).
* Feature Ablation now supports cases where the output may be on a different device than the input, which may occur in model-parallel setups (PR 528).
* Lime (and KernelShap) were fixed to appropriately handle int or long input types (PR 570).
