pgbm

Latest version: v2.3.0

1.7.1

* Fixed a bug in `MANIFEST.in`.

1.7

* Fixed a bug in the scikit-learn wrapper where `eval_set` was not correctly passed to the underlying PGBM model (see the sketch after this list).
* Fixed a bug in the `lognormal` distribution where the empirical mean and variance were not correctly fitted to the output distribution.
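
A minimal usage sketch of the wrapper path this fix concerns. The import path, the `distribution` constructor argument, and the exact `eval_set` keyword of `fit` are assumptions based on common scikit-learn-style wrappers, not confirmed by this changelog.

```python
# Hypothetical sketch of the scikit-learn wrapper path fixed in 1.7.
# Assumed: the import path, the `distribution` constructor argument, and
# the eval_set keyword of fit(); only their existence is implied above.
import numpy as np
from pgbm import PGBMRegressor  # import path assumed for the 1.x series

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 3)), rng.lognormal(size=200)
X_valid, y_valid = rng.normal(size=(50, 3)), rng.lognormal(size=50)

# `eval_set` is the argument the 1.7 fix routes to the underlying PGBM model.
model = PGBMRegressor(distribution="lognormal")  # argument name assumed
model.fit(X_train, y_train, eval_set=(X_valid, y_valid))
```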

1.5

* Added documentation

1.4

* PyTorch version: complete code rewrite, improving speed by up to 3x on GPU.
* Replaced the Boston housing dataset with the California housing dataset as the key example, due to ethical concerns regarding its features (see here: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_boston.html).
* PyTorch: the distributed version is now separate from the vanilla version, which improves the speed of the vanilla version (hopefully a temporary solution until TorchScript supports distributed functions).
* Removed experimental TPU support for now.
* Parameters are now attributes of the learner instead of part of a dictionary `param`.
* Renamed the regularization parameter `lambda` to `reg_lambda` to avoid confusion with Python's `lambda` keyword (see the sketch after this list).
* Rewrote splitting procedure on all versions, removing bugs observed in hyperparameter tuning.
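
A hedged illustration of what the 1.4 parameter handling could look like in user code; the import path, the no-argument constructor, and every attribute name other than `reg_lambda` are assumptions shown for illustration only.

```python
# Hypothetical sketch of the 1.4 parameter style: parameters are attributes
# of the learner rather than entries in a `param` dictionary, and the
# regularization parameter is now `reg_lambda` instead of `lambda`.
from pgbm import PGBM  # import path assumed

model = PGBM()            # no-argument construction assumed
model.reg_lambda = 1.0    # post-1.4 attribute access (formerly param['lambda'])
model.max_leaves = 32     # attribute name assumed, for illustration
# Pre-1.4 style this replaces (sketch):
#   param = {'lambda': 1.0, 'max_leaves': 32}
#   passed to the learner via the `param` dictionary
```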

1.3

* Added `monotone_constraints` as a parameter to the initialization of `PGBMRegressor` rather than as part of `fit` (see the sketch after this list).
* Speed improvements of both Numba and PyTorch version.
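
A minimal sketch of the relocated parameter, assuming the per-feature encoding of 1 (increasing), 0 (unconstrained), and -1 (decreasing) used by other gradient boosting libraries; that encoding and the rest of the call are assumptions, not confirmed by this changelog.

```python
# Hypothetical sketch: monotone_constraints is passed at construction time
# (1.3+) rather than to fit(). The per-feature 1/0/-1 encoding is assumed.
import numpy as np
from pgbm import PGBMRegressor  # import path assumed

rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 3)), rng.normal(size=200)

model = PGBMRegressor(monotone_constraints=[1, 0, -1])  # one entry per feature
model.fit(X, y)  # fit() no longer takes monotone_constraints
```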

1.2

* Fixed a bug in `monotone_constraints` calculation.
* Added a scikit-learn wrapper for both backends - `PGBMRegressor` is now available as a scikit-learn estimator.
* Renamed the `levels_train` argument of the `train` function to `sample_weight` and `levels_valid` to `eval_sample_weight`, so that it is easier to understand what these parameters do (see the sketch after this list).
* Added `sample_weight` and `eval_sample_weight` to Numba backend.
* Added a stability constant epsilon to the variance calculation to prevent division by zero (this mostly happened on the Numba backend, due to its higher precision, when the gradient mean in a leaf is zero).
* Fixed a bug that caused an error for `min_data_in_leaf`; it was caused by too low precision (a BFloat16 split count array in the CUDA kernel). Set the default `min_data_in_leaf` back to `2`.
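
A hedged sketch of the renamed weighting arguments from this release. Only `sample_weight` and `eval_sample_weight` are taken from the entry above; the remaining keyword names of `train` and the shape of the objective and metric callables are assumptions about the surrounding API.

```python
# Hypothetical sketch of the 1.2 renames: sample_weight (formerly
# levels_train) and eval_sample_weight (formerly levels_valid).
# All other keyword names and the callable signatures are assumptions.
import numpy as np
from pgbm import PGBM  # import path assumed

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 3)), rng.normal(size=200)
X_valid, y_valid = rng.normal(size=(50, 3)), rng.normal(size=50)

def mse_objective(yhat, y, sample_weight=None):
    # Gradient and hessian of squared error; weighting omitted for brevity.
    return yhat - y, np.ones_like(yhat)

def rmse_metric(yhat, y, sample_weight=None):
    return np.sqrt(np.mean((yhat - y) ** 2))

model = PGBM()
model.train(
    train_set=(X_train, y_train),      # keyword name assumed
    objective=mse_objective,
    metric=rmse_metric,
    valid_set=(X_valid, y_valid),      # keyword name assumed
    sample_weight=np.ones(len(y_train)),       # formerly `levels_train`
    eval_sample_weight=np.ones(len(y_valid)),  # formerly `levels_valid`
)
```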
