aplr

Latest version: v10.7.3

9.10.0

Added a method, calculate_local_contribution_from_selected_terms, to increase the interpretability of interactions (it also works for main effects). Updated the documentation.
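
The idea behind such a method can be sketched as follows. This is an illustrative toy, not the aplr implementation: it assumes the model is a sum of terms, so the local contribution at a data point from a chosen subset of terms is simply the partial sum of their outputs. The term names and functions here are invented for the example.

```python
# Toy sketch (not aplr internals): a model as a sum of terms, where the
# local contribution from a selected subset of terms at a point x is the
# partial sum of those terms' outputs.

def local_contribution(terms, selected, x):
    """Sum the outputs of the selected terms at point x."""
    return sum(f(x) for name, f in terms.items() if name in selected)

# Illustrative terms: one linear main effect and one piecewise-linear hinge.
terms = {
    "linear_x": lambda x: 2.0 * x,
    "hinge_x": lambda x: max(x - 1.0, 0.0),
}

# Contribution of the hinge term alone at x = 3.0:
print(local_contribution(terms, {"hinge_x"}, 3.0))  # 2.0
```

Inspecting such partial sums per observation is one way interpretability tools attribute a prediction to individual effects.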

9.9.0

Provided an option to increase interpretability at the expense of predictiveness by setting the max_terms hyperparameter. See the API references for more information.

9.8.0

Now it is possible to optionally provide the following for each predictor:
- Learning rate.
- Penalty for non-linearity.
- Penalty for interactions.
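
One plausible shape for such per-predictor settings is a list aligned with the predictor columns. The parameter names below are illustrative only, not necessarily the aplr API; the sketch just shows the consistency check such an interface implies.

```python
# Hypothetical sketch: per-predictor hyperparameters supplied as lists
# aligned with the predictor columns. Parameter names are illustrative.

def validate_per_predictor(n_predictors, **per_predictor_lists):
    """Check that each per-predictor list has one entry per predictor."""
    for name, values in per_predictor_lists.items():
        if values is not None and len(values) != n_predictors:
            raise ValueError(
                f"{name} has {len(values)} entries, expected {n_predictors}"
            )

# Three predictors, each with its own learning rate and penalties:
validate_per_predictor(
    3,
    learning_rates=[0.1, 0.05, 0.1],
    penalties_for_non_linearity=[0.0, 1.0, 0.5],
    penalties_for_interactions=[0.0, 0.0, 1.0],
)
print("per-predictor settings are consistent")
```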

9.7.0

- Sped up the algorithm when penalty_for_non_linearity=1.0 or penalty_for_interactions=1.0.
- penalty_for_non_linearity and penalty_for_interactions are now automatically rounded to the nearest boundary of the [0.0, 1.0] range if the user specifies a value outside it.
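
Rounding an out-of-range value to the nearest boundary of [0.0, 1.0] is equivalent to clamping, which can be sketched in a few lines (function name invented for the example):

```python
def clamp_penalty(value, lo=0.0, hi=1.0):
    """Round a penalty to the nearest boundary of [lo, hi] if it falls outside."""
    return min(max(value, lo), hi)

print(clamp_penalty(1.7))   # 1.0
print(clamp_penalty(-0.2))  # 0.0
print(clamp_penalty(0.3))   # 0.3
```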

9.6.0

Added two constructor parameters to help control the interpretability versus predictiveness trade-off:
- penalty_for_non_linearity (default = 0.0). Specifies a penalty in the range [0.0, 1.0] on terms that are not linear effects. A higher value increases model interpretability but can hurt predictiveness.
- penalty_for_interactions (default = 0.0). Specifies a penalty in the range [0.0, 1.0] on interaction terms. A higher value increases model interpretability but can hurt predictiveness.
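
One way such a penalty could act, sketched under assumptions rather than taken from the aplr internals, is to scale down the estimated gain of penalized candidate terms during term selection. A penalty of 1.0 then rules those terms out entirely, which would also explain why that boundary admits a faster code path (as noted in 9.7.0).

```python
# Hypothetical sketch of [0.0, 1.0] penalties applied during boosting
# term selection. Function and flag names are invented for illustration.

def penalized_gain(gain, is_linear, is_interaction,
                   penalty_for_non_linearity=0.0,
                   penalty_for_interactions=0.0):
    """Down-weight the selection gain of non-linear and interaction terms."""
    if not is_linear:
        gain *= 1.0 - penalty_for_non_linearity
    if is_interaction:
        gain *= 1.0 - penalty_for_interactions
    return gain

# With penalty_for_interactions=1.0, an interaction term can never win:
print(penalized_gain(5.0, is_linear=False, is_interaction=True,
                     penalty_for_interactions=1.0))  # 0.0
```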

9.5.0

Added the option to use only linear effects for a user-specified number of initial boosting steps. This can be used, for example, to increase interpretability by building models that place more weight on linear effects.
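
The restriction can be sketched as a filter on which candidate terms are eligible at each boosting step. This is an illustrative toy, not the aplr implementation, and the names are invented for the example:

```python
# Toy sketch (not aplr internals): only linear effects are eligible for
# the first `linear_only_steps` boosting steps.

def eligible_terms(step, candidates, linear_only_steps):
    """Return the candidate terms allowed at a given boosting step."""
    if step < linear_only_steps:
        return [t for t in candidates if t["is_linear"]]
    return candidates

candidates = [
    {"name": "x1_linear", "is_linear": True},
    {"name": "x1_hinge", "is_linear": False},
]

print([t["name"] for t in eligible_terms(0, candidates, linear_only_steps=5)])
# ['x1_linear']
print([t["name"] for t in eligible_terms(5, candidates, linear_only_steps=5)])
# ['x1_linear', 'x1_hinge']
```

Fitting the linear structure first and only then admitting non-linear terms is a common way to bias a boosted model toward simpler, more interpretable effects.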

