pgbm

Latest version: v2.3.0

1.1

* Fixed a bug in the bin calculation of the Torch version that caused incorrect results for the outermost quantiles of feature values.
* Added `monotone_constraints` as a parameter. This allows forcing the algorithm to maintain a positive or negative monotonic relationship of the output with respect to selected input features (see the sketch after this list).
* Included automatic type conversion to `float64` in the Numba version.
* Set the minimum for `min_data_in_leaf` to `3`. A setting of `2` caused stability issues that led to division by zero in rare cases; this resolves them.
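
Below is a minimal sketch of how the new `monotone_constraints` parameter might be passed, assuming the common per-feature encoding of `1` (non-decreasing), `-1` (non-increasing) and `0` (unconstrained) and the Torch-backend API shown in the project's examples; exact signatures may differ between releases.

```python
# Hedged sketch: passing monotone_constraints via the params dict.
import torch
from pgbm import PGBM

def mseloss_objective(yhat, y, *args, **kwargs):
    # Gradient and hessian of the squared-error loss (extra arguments such as
    # sample weights are ignored in this sketch).
    gradient = yhat - y
    hessian = torch.ones_like(yhat)
    return gradient, hessian

def rmseloss_metric(yhat, y, *args, **kwargs):
    return torch.sqrt(torch.mean(torch.square(yhat - y)))

# Toy data: the target is increasing in feature 0.
X_train = torch.rand(1000, 3)
y_train = X_train[:, 0] + 0.1 * torch.randn(1000)

params = {
    'min_data_in_leaf': 3,              # new minimum as of this release
    'monotone_constraints': [1, 0, 0],  # force a non-decreasing relationship with feature 0
}

model = PGBM()
model.train(train_set=(X_train, y_train),
            objective=mseloss_objective,
            metric=rmseloss_metric,
            params=params)
```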

1.0

* Fixed a bug where it was not possible to use `feature_fraction < 1` on GPU because the random number generator was CPU-based.
* Added the possibility to output the learned mean and variance when using the `predict_dist` function (sketched below).
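
Continuing the 1.1 sketch above, the snippet below illustrates how the learned mean and variance might be retrieved alongside the sampled forecasts; the keyword name `output_sample_statistics` is an assumption based on later PGBM documentation and may differ in this release.

```python
# Hedged sketch: point forecast plus sampled forecasts with sample statistics.
# `model` and `torch` come from the 1.1 sketch above; the keyword
# `output_sample_statistics` is an assumption and may differ here.
X_test = torch.rand(100, 3)

yhat_point = model.predict(X_test)  # point forecast
yhat_dist, yhat_mu, yhat_var = model.predict_dist(
    X_test, n_forecasts=100, output_sample_statistics=True)
```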

0.9

* Experimental TPU support for Google Cloud.
* Python 3.7 compatibility.
* Jupyter Notebook examples.

0.8

* Added the `studentt` distribution to the Numba backend (with `df=3`).
* Added variance clipping to the normal distribution of the Numba backend.
* Rewrote parts of the Numba backend code.
* JIT-compiled `crps_ensemble` in the Numba backend.
* Fixed a bug where the Torch backend could not read models trained with the Numba backend.
* Simplified the bin calculation in the Torch backend using `torch.quantile`.
* Completely rewrote distributed training.
* Changed the default seed.
* Bagging and feature subsampling are now only performed when these parameters are set to non-default values, which offers a slight speedup for larger datasets.
* Fixed a bug with `min_data_in_leaf`.
* Set the default `tree_correlation` parameter to `log10(n_samples_train) / 100`, as per our paper (see the sketch after this list).
* Added checkpointing, allowing users to continue training a model.
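
For reference, the short sketch below works out the new `tree_correlation` default for an illustrative dataset size.

```python
# Illustration of the new default: tree_correlation = log10(n_samples_train) / 100.
# The sample count below is purely illustrative.
import numpy as np

n_samples_train = 100_000
tree_correlation_default = np.log10(n_samples_train) / 100
print(tree_correlation_default)  # 5 / 100 = 0.05
```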

As of this version, the following is deprecated:
* The hyperparameter `gpu_device_ids` has been replaced by the hyperparameter `gpu_device_id`.
* The vanilla `pgbm` package no longer offers parallel training; to perform parallel training, use `pgbm_dist`.
* The hyperparameter `output_device` has been deprecated. All training is always performed on the chosen `device`. For parallelization, use `pgbm_dist`.

0.7

* Added the `optimize_distribution` function to make it easier to fit the best distribution (see the sketch after this list).
* Fixed a bug in the Numba backend's Poisson distribution.
* Improved the speed of the Numba backend.
* Parallelized the pre-computation of split decisions in the Numba backend and changed the dtype from `int32` to `int16`.
* Reduced the integer size in the CUDA kernel to `short int`.
* Split the examples to provide a version for each backend.
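
The sketch below illustrates how `optimize_distribution` might be used to select a distribution on a validation set, assuming a PGBM `model` trained as in the 1.1 example above; the signature and distribution names shown follow the project's documentation and may differ in this release.

```python
# Hedged sketch: searching for the best-fitting output distribution on a
# validation set. Assumes `model` was trained as in the 1.1 example above.
import torch

X_val = torch.rand(200, 3)
y_val = X_val[:, 0] + 0.1 * torch.randn(200)

# Evaluates candidate distributions on (X_val, y_val) and keeps the best one
# in the model's parameters for subsequent probabilistic forecasts.
model.optimize_distribution(X_val, y_val,
                            distributions=['normal', 'studentt', 'laplace'])
yhat_dist = model.predict_dist(X_val, n_forecasts=100)
```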

0.6.1

* Fixed a bug in the Numba feature importance calculation.
