Interpret

Latest version: v0.6.8


0.6.2

Added
- pass optional kwargs to DecisionTreeClassifier in PR 537 by busFred (see the sketch at the end of this entry)
- support for multiclass purification
- support for higher dimensional purification
- allow higher levels of purification than the tolerance parameter alone would permit
Changed
- numpy 2.0 support for EBMs
- update documentation regarding monotonicity in PR 531 by Krzys25
- moved purification utility from "interpret/glassbox/_ebm/_research" to "interpret.utils"
Fixed
- possible fix for issue 543 where merge_ebms was creating unexpected NaN values
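
A minimal usage sketch for the kwargs pass-through noted above, assuming that extra keyword arguments given to ClassificationTree are forwarded to scikit-learn's DecisionTreeClassifier; the specific keywords below are illustrative:

    # Sketch only: assumes ClassificationTree forwards unrecognized keyword arguments
    # to sklearn's DecisionTreeClassifier, per the 0.6.2 note above.
    from sklearn.datasets import load_breast_cancer
    from interpret.glassbox import ClassificationTree

    X, y = load_breast_cancer(return_X_y=True)

    # min_samples_leaf and criterion are DecisionTreeClassifier arguments passed through
    tree = ClassificationTree(max_depth=4, min_samples_leaf=20, criterion="entropy")
    tree.fit(X, y)
    print(tree.predict(X[:5]))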

0.6.1

Fixed
- added compatibility with numpy 2.0 thanks to DerWeh in PR 525
- fixed a bug that was preventing SIMD from being used in Python
- removed approximate division in SIMD since the approximation was too inaccurate
Changed
- EBM fitting time reduced

0.6.0

Added
- Documentation on recommended hyperparameters to help users optimize their models.
- Support for monotone_constraints during model fitting, although post-processed monotonization is still recommended (see the sketch after this list).
- The EBMModel class now includes _more_tags for better integration with the scikit-learn API, thanks to contributions from DerWeh.
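
A minimal sketch of fitting with monotone_constraints, assuming one entry per feature (+1 increasing, -1 decreasing, 0 unconstrained); the dataset and constraint values are illustrative only:

    # Sketch only: one constraint per feature; +1 = increasing, -1 = decreasing, 0 = none.
    # Post-processed monotonization remains the recommended approach per the note above.
    from sklearn.datasets import make_regression
    from interpret.glassbox import ExplainableBoostingRegressor

    X, y = make_regression(n_samples=1000, n_features=3, random_state=0)

    ebm = ExplainableBoostingRegressor(monotone_constraints=[1, 0, -1])
    ebm.fit(X, y)
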
Changed
- Default max_rounds parameter increased from 5,000 to 25,000 for improved model accuracy.
- Numerous code simplifications, additional tests, and enhancements for scikit-learn compatibility, thanks to DerWeh.
- The greedy boosting algorithm has been updated to support variable-length greedy sections, offering more flexibility during model training.
- Full compatibility with Python 3.12.
- Removal of the DecisionListClassifier from our documentation, as the skope-rules package no longer appears to be actively maintained.
Fixed
- The sweep function now properly returns self, correcting an oversight identified by alvanli.
- Default exclude parameter set to None, aligning with scikit-learn's expected defaults, fixed by DerWeh.
- A potential bug when converting features from categorical to continuous values has been addressed.
- Updated to handle the new return format for TreeShap in the SHAP 0.45.0 release.
Breaking Changes
- replaced the greediness __init__ parameter with the greedy_ratio and cyclic_progress parameters for better control of the boosting process
(see the documentation for notes on greedy_ratio and cyclic_progress, and the sketch after this list)
- replaced breakpoint_iteration_ with best_iteration_, which now contains the number of boosting steps rather than the number of boosting rounds
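
A minimal sketch of the replacement boosting controls, assuming interpret 0.6.0 or later; the parameter values shown are illustrative, not recommendations:

    # Sketch only: greedy_ratio and cyclic_progress replace the old greediness parameter.
    # The values below are illustrative; see the documentation for guidance.
    from sklearn.datasets import make_classification
    from interpret.glassbox import ExplainableBoostingClassifier

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    ebm = ExplainableBoostingClassifier(
        greedy_ratio=1.5,     # amount of greedy boosting relative to cyclic boosting
        cyclic_progress=1.0,  # portion of the cyclic rounds that counts toward progress
    )
    ebm.fit(X, y)
    print(ebm.best_iteration_)  # number of boosting steps, per the note above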

0.5.1

Added
- Added new init parameter: interaction_smoothing_rounds
- Added new init parameter: min_hessian
- synthetic dataset generator (make_synthetic) for testing GAMs and for documentation
Changed
- default parameters have been modified to improve the accuracy of EBMs
- changed boosting internals to use LogitBoost to improve accuracy
- changed interaction detection to use hessians to improve interaction selection
- enabled smoothing_rounds by default to improve the smoothness of EBMs
- added the ability to specify interactions via feature names or negative indexing (see the sketch after this list)
- improved the speed of Morris sensitivity and partial dependence
- Python 3.12 support for core EBMs; some of our optional dependencies do not yet support Python 3.12
- made early stopping more consistent and changed the early_stopping_tolerance to be a percentage
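
A minimal sketch of specifying pairwise interactions by feature name or negative index; the feature names and synthetic data are illustrative only:

    # Sketch only: interaction pairs may be given as feature names or (negative) indices.
    import numpy as np
    from interpret.glassbox import ExplainableBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    y = X[:, 0] + X[:, 1] * X[:, 2]

    ebm = ExplainableBoostingRegressor(
        feature_names=["age", "income", "tenure"],
        interactions=[("age", "income"), (-1, -2)],  # second pair is (income, tenure)
    )
    ebm.fit(X, y)
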
Fixed
- avoid displaying a scroll bar by default in Jupyter notebook cells
- removed the dependency on deprecated distutils
Breaking Changes
- changed the internal representation for classifiers that have just 1 class

0.5.0

Added
- added support for AVX-512 in PyPI installations to improve fitting speed
- introduced an option to disable SIMD optimizations through the debug_mode function in Python
- exposed public utils.link_func and utils.inv_link functions
Changed
- the interpret-core package now installs the dependencies required to build and predict EBMs
by default without needing to specify the [required] pip install flag
- experimental/private support for OVR multiclass EBMs
- added bagged_intercept_ attribute to store the intercepts for the bagged models
Fixed
- resolved an issue in merge_ebms where the merge would fail if all EBMs in the
merge contained features with only one bin (issue 485)
- resolved multiple future warnings from other packages
Breaking Changes
- changed how monoclassification (degenerate classification with 1 class) is expressed
- replaced the predict_and_contrib function with a simpler eval_terms function that returns
only the per-term contribution values. If you need both the contributions and predictions use
interpret.utils.inv_link(ebm.eval_terms(X).sum(axis=1) + ebm.intercept_, ebm.link_)
(see the sketch after this list)
- separated to_json into to_jsonable (for Python objects) and to_json (for files) functions
- created a new link function string for multiclass that is separate from binary classification
- for better scikit-learn compliance, removed the decision_function from the ExplainableBoostingRegressor
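
A minimal sketch of recovering both per-term contributions and predictions after the predict_and_contrib removal, following the expression quoted above; the regression EBM and data are illustrative:

    # Sketch only: reproduce predictions from per-term contributions, per the expression above.
    from sklearn.datasets import make_regression
    from interpret.glassbox import ExplainableBoostingRegressor
    from interpret.utils import inv_link

    X, y = make_regression(n_samples=1000, n_features=4, random_state=0)
    ebm = ExplainableBoostingRegressor().fit(X, y)

    contribs = ebm.eval_terms(X)                    # shape: (n_samples, n_terms)
    scores = contribs.sum(axis=1) + ebm.intercept_  # additive score on the link scale
    preds = inv_link(scores, ebm.link_)             # for the identity link this matches ebm.predict(X)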

0.4.4

Added
- added the following model editing functions: copy, remove_terms, remove_features, sweep, scale (see the sketch below)
- added experimental support for a JSON exporter function: to_json
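
A minimal sketch of the editing helpers applied to a fitted EBM; the term index, scale factor, output path, and the (term, factor) argument order for scale are assumptions, and to_json is experimental per the note above:

    # Sketch only: model-editing helpers added in 0.4.4, applied to a copy of a fitted EBM.
    from sklearn.datasets import make_regression
    from interpret.glassbox import ExplainableBoostingRegressor

    X, y = make_regression(n_samples=1000, n_features=4, random_state=0)
    ebm = ExplainableBoostingRegressor().fit(X, y)

    edited = ebm.copy()          # leave the original model untouched
    edited.scale(0, 0.5)         # assumed (term, factor): halve the first term's contribution
    edited.remove_terms([1])     # drop a term by index (names should also work)
    edited.sweep()               # clean up unused bins/features after editing
    edited.to_json("ebm.json")   # experimental JSON export (file path assumed)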
