Aplr

Latest version: v10.7.3

7.3.0

Added the hyperparameter boosting_steps_before_interactions_are_allowed. It specifies how many boosting steps to wait before searching for interactions. If set to, for example, 800, the algorithm fits only main effects during the first 800 boosting steps and may search for interactions thereafter (provided that the other hyperparameters controlling interactions also allow this). The motivation for fitting main effects first is 1) to get a cleaner-looking model that puts more emphasis on main effects and 2) to speed up the algorithm, since searching for interactions is computationally more demanding. The default value of 0 gives a model fit similar to that of version 7.2.0.
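
A minimal sketch of setting this hyperparameter, assuming the package exposes a scikit-learn-style APLRRegressor estimator (the class name, import path, and toy data are assumptions; only boosting_steps_before_interactions_are_allowed comes from this changelog entry):

```python
import numpy as np
from aplr import APLRRegressor  # assumed import path

# Toy data just to make the sketch self-contained.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=500)

# Fit only main effects during the first 800 boosting steps,
# then allow the search for interaction terms.
model = APLRRegressor(boosting_steps_before_interactions_are_allowed=800)
model.fit(X, y)
predictions = model.predict(X)
```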

7.2.0

Added the possibility to pass additional data, in the form of a NumPy matrix, to custom loss, negative gradient, and validation error functions through the fit() method.
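
A hedged sketch of how the extra matrix might reach a custom loss function. The callback signature, the constructor arguments loss_function and calculate_custom_loss_function, and the fit() keyword other_data are illustrative assumptions rather than confirmed API; only the general capability, a NumPy matrix forwarded from fit() to the custom functions, comes from this changelog entry:

```python
import numpy as np
from aplr import APLRRegressor  # assumed import path

# Hypothetical custom loss that weights squared errors by an extra
# per-observation matrix forwarded from fit() (signature is an assumption).
def weighted_squared_error(y_true, y_pred, sample_weight, other_data):
    return float(np.mean(other_data[:, 0] * (y_true - y_pred) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=300)
penalties = np.abs(rng.normal(size=(300, 1)))  # extra matrix for the loss

# Both keyword names below are illustrative; consult the package
# documentation for the actual names.
model = APLRRegressor(
    loss_function="custom_function",
    calculate_custom_loss_function=weighted_squared_error,
)
model.fit(X, y, other_data=penalties)
```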

7.1.0

Changed the behaviour of boosting_steps_before_pruning_is_done. Its default value is now 0, which disables pruning. Positive values work in the same way as in the previous version. Pruning is disabled by default because it can significantly increase training time on larger datasets and when the model contains many terms, while pruning typically improves model predictiveness only slightly.
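
A minimal sketch of the new default and of re-enabling pruning, assuming the same APLRRegressor estimator as above (only the hyperparameter name and its semantics come from this changelog entry):

```python
from aplr import APLRRegressor  # assumed import path

# Default since 7.1.0: 0 disables pruning entirely.
model_without_pruning = APLRRegressor(boosting_steps_before_pruning_is_done=0)

# A positive value enables pruning and behaves as in versions before 7.1.0.
model_with_pruning = APLRRegressor(boosting_steps_before_pruning_is_done=500)
```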

7.0.1

Fixed a bug that unnecessarily reduced the computational speed of pruning.

7.0.0

- Added a pruning mechanism that prunes terms as long as doing so reduces the training error.
- Improved the possibility to set interaction constraints. They now work similarly to the implementation in, for example, LightGBM (see the sketch after this list).
- Improved the readability of interaction terms by preventing the formation of unnecessarily complex interactions.
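
A hedged sketch of LightGBM-style interaction constraints, where each inner list contains the indices of predictors that are allowed to interact with one another and no interactions are formed across groups. The parameter name interaction_constraints, where it is passed, and the estimator name are assumptions inferred from the LightGBM analogy, not confirmed API:

```python
from aplr import APLRRegressor  # assumed import path

# Predictors 0 and 1 may interact with each other, and predictors 2 and 3
# may interact with each other, but not across the two groups.
constraints = [[0, 1], [2, 3]]

# Parameter name and placement (constructor vs. fit()) are assumptions.
model = APLRRegressor(interaction_constraints=constraints)
```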

6.5.2

Fixed a bug that unnecessarily increased model training time.
