Interpret-community

Latest version: v0.31.0

0.18.1

- includes a fix to implement the sparse case for the following methods (see the sketch after this list):
  - get_ranked_local_values
  - get_ranked_local_names
  - get_local_importance_rank

  and to compress local importance values to dense format, based on whether that gives more optimal storage, when converting an engineered explanation to a raw explanation
- remove another spurious cuML warning message on library import.
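As a rough illustration of the sparse path these methods now cover, here is a minimal sketch that builds a small sparse matrix and calls the three ranked-importance methods on a local explanation. Only the method names come from the release notes; the synthetic data, the LogisticRegression model, and the feature names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.linear_model import LogisticRegression
from interpret_community import TabularExplainer

rng = np.random.RandomState(0)
# Sparse input, e.g. the output of a bag-of-words vectorizer
x_train = csr_matrix(rng.binomial(1, 0.1, size=(100, 20)).astype(float))
y_train = rng.randint(0, 2, size=100)
feature_names = ["f{}".format(i) for i in range(20)]

model = LogisticRegression().fit(x_train, y_train)
explainer = TabularExplainer(model, x_train, features=feature_names)

# Explain a handful of rows; the input stays sparse end to end
local_explanation = explainer.explain_local(x_train[:5])

# The three methods fixed in this release for the sparse case
ranked_values = local_explanation.get_ranked_local_values()
ranked_names = local_explanation.get_ranked_local_names()
importance_rank = local_explanation.get_local_importance_rank()
```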

0.18.0

- upgrade shap to 0.39.0, which deprecated python 3.5 and added python 3.8 support
- Add cuML GPU SHAP KernelExplainer
- update abstract classes to use ABC instead of ABCMeta
- remove warning on mimic explainer relating to categorical features
- Add test case for serializing a pandas timestamp
- add sparse feature importance support for lightgbm surrogate model
- add support for the drop parameter in one hot encoder (see the sketch below)
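To make the last item concrete, below is a hedged sketch of passing a fitted scikit-learn OneHotEncoder that uses the drop parameter through the transformations argument, so importances are reported on the raw columns rather than the encoded ones. The toy DataFrame, column names, and RandomForestClassifier are assumptions for illustration; the transformations parameter and get_feature_importance_dict follow the project's documented usage.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from interpret_community import TabularExplainer

x_train = pd.DataFrame({
    "color": ["red", "green", "blue", "green", "red", "blue"],
    "size": [1.0, 2.5, 3.0, 2.0, 1.5, 2.8],
})
y_train = [0, 1, 1, 0, 0, 1]

# drop="first" removes one category per feature to avoid collinearity;
# the engineered-to-raw mapping now accounts for the dropped column
preprocess = ColumnTransformer([
    ("onehot", OneHotEncoder(drop="first"), ["color"]),
    ("scale", StandardScaler(), ["size"]),
])
pipeline = Pipeline([("prep", preprocess),
                     ("rf", RandomForestClassifier(random_state=0))])
pipeline.fit(x_train, y_train)

# Pass the fitted preprocessing as transformations so the explanation is
# expressed in terms of the raw color/size columns
explainer = TabularExplainer(pipeline.named_steps["rf"],
                             x_train,
                             features=list(x_train.columns),
                             transformations=preprocess)
global_explanation = explainer.explain_global(x_train)
print(global_explanation.get_feature_importance_dict())
```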

0.17.2

- Adds missing 'intercept' field on local explanation 'mli' data. Resolves issues microsoft/responsible-ai-widgets#367 and #260
- Fix sphinx doc warnings
- Update readme with latest package description
- Fix model type parameter being called and defined incorrectly in mimic explainer. Resolves issue #390
- Removed LIME warning; it is now raised as an exception inside the explainer if the package is missing

0.17.1

Patch release 0.17.1 of the interpret-community SDK
- fix sphinx documentation issue with constants params
- remove private methods as these are not being called from anywhere in the interpret-community SDK:
  - _transform_data
  - _unsort_2d

0.17.0

- upgrade setuptools to latest to fix pypi upload error
- add setuptools upgrade to env setup script prior to pip install command
- update to interpret-core 0.2.4
- Support y_pred as pandas DataFrame for surrogate model predictions
- add deprecation message for the old dashboard and add the new raiwidgets dashboard to the notebooks
- fix load explanation methods for numpy-based variables (see the sketch after this list)
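A minimal sketch of the two items above, assuming the save_explanation/load_explanation helpers live in interpret_community.explanation.explanation and that raiwidgets is installed; the iris data and RandomForestClassifier are placeholders, not part of the release notes.

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from interpret_community import TabularExplainer
from interpret_community.explanation.explanation import save_explanation, load_explanation
from raiwidgets import ExplanationDashboard

data = load_iris()
x = pd.DataFrame(data.data, columns=data.feature_names)
model = RandomForestClassifier(random_state=0).fit(x, data.target)

explainer = TabularExplainer(model, x, features=data.feature_names,
                             classes=list(data.target_names))
global_explanation = explainer.explain_global(x)

# Round-trip the explanation through disk; numpy-backed fields now load correctly
save_explanation(global_explanation, "explanation_dir")
loaded_explanation = load_explanation("explanation_dir")

# The old inline widget is deprecated; the raiwidgets package hosts the dashboard
ExplanationDashboard(loaded_explanation, model, dataset=x, true_y=data.target)
```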

0.16.0

- Add _serialize_json_safe to utils and add unit tests
- various memory optimizations in interpret-community, done as part of integration with another product:
  1.) where possible use _local_importance_values since local_importance_values converts from numpy to list
  2.) avoid typed_dataset in some paths since that creates a new pandas dataframe
  3.) add clear_references parameter to DatasetWrapper such that it can be cleared after use, which should reduce memory usage - note this is only for users who know what they are doing
  4.) added test for dense and wide data which can be used for future performance testing as well
- Pickling and unpickling of MimicExplainer and surrogate models (see the sketch after this list)
- fix categorical handling for scikit-learn 0.24 during one hot encoding to resolve a failing test
- Add unit test for error_handling.py
- Use ndcg from metrics in test_validate_explanations.py
- Consolidate MimicExplainer serialization tests
- Add support for inverse soft logit for binary classification scenarios
- Add replication metric computation in MimicExplainer
- various memory optimizations to the explanation-related APIs, particularly explain_global, including:
  1.) preventing the matrix multiply if the matrix is the identity in the engineered-to-raw mapping (which happens very often) - this significantly reduces memory usage
  2.) differentiating between calls from explain_global vs explain_local when entering _explain_local, which allows us to skip some duplicate computations that lead to higher memory usage
  3.) refactoring code out into functions (e.g. _explain_local_helper), which allows GC of various temporary variables (explicit del statements would also work but look much uglier)
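Following up on the pickling item in the list above, here is a hedged sketch of the round-trip it enables. The import paths for MimicExplainer and LGBMExplainableModel follow the project's documented layout, the toy data and LogisticRegression model are illustrative, and lightgbm must be installed for the surrogate.

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression
from interpret_community.mimic.mimic_explainer import MimicExplainer
from interpret_community.mimic.models.lightgbm_model import LGBMExplainableModel

rng = np.random.RandomState(0)
x_train = rng.normal(size=(200, 5))
y_train = (x_train[:, 0] + x_train[:, 1] > 0).astype(int)
model = LogisticRegression().fit(x_train, y_train)

# LightGBM surrogate trained on augmented data to mimic the original model
explainer = MimicExplainer(model, x_train, LGBMExplainableModel,
                           augment_data=True, max_num_of_augmentations=10)

# The explainer (including its surrogate) can now round-trip through pickle
payload = pickle.dumps(explainer)
restored = pickle.loads(payload)
global_explanation = restored.explain_global(x_train[:20])
print(global_explanation.global_importance_values[:5])
```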
