Forust

Latest version: v0.4.8


0.2.10

- Added the branch difference method for calculating contributions.
- Normalized the logloss metric by dividing it by the sum of the sample weights.
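The normalization described above can be sketched in a few lines of NumPy. This is an illustrative formula, not forust's internal implementation; the function name and signature are hypothetical.

```python
import numpy as np

def normalized_logloss(y_true, y_prob, sample_weight):
    """Weighted binary log loss divided by the sum of the sample weights,
    so the metric stays comparable across datasets of different total weight."""
    y_true = np.asarray(y_true, dtype=float)
    # Clip probabilities away from 0 and 1 to avoid log(0).
    y_prob = np.clip(np.asarray(y_prob, dtype=float), 1e-15, 1 - 1e-15)
    w = np.asarray(sample_weight, dtype=float)
    ll = -(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))
    return float(np.sum(w * ll) / np.sum(w))
```

With unit weights this reduces to the plain mean log loss; with non-unit weights, dividing by the weight sum rather than the row count keeps the scale independent of how the weights are expressed.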

0.2.9

Hot fix to add default values for the GOSS parameters to the booster.

0.2.8

This release adds the GOSS (Gradient-based One-Side Sampling) method to the package. It also includes small performance improvements, and moves the application of the learning_rate until after the monotonicity bounds are generated.
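For readers unfamiliar with GOSS, the standard algorithm keeps the rows with the largest gradient magnitudes, randomly samples from the remainder, and upweights the sampled rows to keep the gradient statistics approximately unbiased. The sketch below illustrates that idea in NumPy; the parameter names `top_rate` and `other_rate` are hypothetical and need not match forust's GOSS parameters.

```python
import numpy as np

def goss_sample(gradients, top_rate=0.2, other_rate=0.1, rng=None):
    """Gradient-based One-Side Sampling sketch: keep the top_rate fraction of
    rows with the largest |gradient|, randomly sample other_rate of the rest,
    and upweight the sampled small-gradient rows by (1 - top_rate) / other_rate."""
    rng = rng or np.random.default_rng(0)
    g = np.abs(np.asarray(gradients, dtype=float))
    n = g.size
    n_top = int(top_rate * n)
    n_other = int(other_rate * n)
    order = np.argsort(-g)          # row indices sorted by |gradient|, descending
    top = order[:n_top]
    sampled = rng.choice(order[n_top:], size=n_other, replace=False)
    idx = np.concatenate([top, sampled])
    weights = np.ones(idx.size)
    weights[n_top:] = (1.0 - top_rate) / other_rate  # amplify small-gradient rows
    return idx, weights
```

The returned indices select the training rows for the next tree, and the weights multiply those rows' gradients so the sampled subset still resembles the full gradient distribution.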

0.2.7

This release introduces the ability to record metrics on data while fitting, by passing data to the `evaluation_data` parameter. Additionally, `early_stopping_rounds` is now supported, so training will be cut short if no improvement in performance is seen for a specified number of iterations.
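The early-stopping behavior described above can be sketched as a simple loop over per-round evaluation metrics. This is an illustration of the general logic, not forust's internals, and it assumes a metric where lower is better (such as log loss).

```python
def early_stopping_loop(eval_metric_per_round, early_stopping_rounds):
    """Track the best metric seen so far and stop once it has not improved
    for early_stopping_rounds consecutive rounds; return the best round."""
    best_metric = float("inf")
    best_round = 0
    for round_idx, metric in enumerate(eval_metric_per_round):
        if metric < best_metric:
            best_metric = metric
            best_round = round_idx
        elif round_idx - best_round >= early_stopping_rounds:
            break  # no improvement for early_stopping_rounds rounds: stop training
    return best_round, best_metric

# Metric improves until round 2, then stalls; the loop stops shortly after.
print(early_stopping_loop([0.70, 0.60, 0.55, 0.56, 0.57, 0.58, 0.59], 3))
```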

0.2.6

Made the partial dependence predictions faster, and improved the Python API.

0.2.5

This release adds the `method` parameter to the `predict_contributions` method. This allows either "average" (the method xgboost uses to calculate the contribution matrix when `approx_contribs` is True) or "weight" to be specified. The "average" method averages the leaf weights across all nodes, and then uses these averages to calculate the contributions. The "weight" method instead uses the internal leaf weights to determine how a feature impacts the final score. Both methods result in a contribution matrix whose row sums are equivalent to the output of the `predict` method.
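The "weight"-style attribution can be illustrated on a single tiny tree: while walking from the root to a leaf, each split credits its feature with the change in internal node weight between parent and child, so the contributions plus the root weight sum to the leaf prediction. The dictionary-based node layout below is hypothetical and purely for illustration; forust's internal tree representation differs.

```python
# Hypothetical tree: every node carries an internal "weight"; internal nodes
# also carry a split feature index and threshold.
tree = {
    "weight": 0.0, "feature": 0, "threshold": 0.5,
    "left": {"weight": -0.4, "feature": 1, "threshold": 2.0,
             "left": {"weight": -0.7}, "right": {"weight": -0.1}},
    "right": {"weight": 0.5},
}

def weight_contributions(node, row, n_features):
    """Walk the tree for one row, crediting each split's feature with the
    change in node weight (child minus parent) along the decision path."""
    contribs = [0.0] * n_features
    while "feature" in node:  # descend until we reach a leaf
        child = node["left"] if row[node["feature"]] < node["threshold"] else node["right"]
        contribs[node["feature"]] += child["weight"] - node["weight"]
        node = child
    return contribs, node["weight"]

contribs, pred = weight_contributions(tree, [0.2, 3.0], n_features=2)
# The root weight plus the per-feature contributions equals the prediction,
# mirroring the row-sum property described for the contribution matrix.
print(contribs, pred)
```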
