GPBoost

Latest version: v1.5.5


0.7.7

- Reduce memory usage for the Vecchia approximation
- [R-package] Add function for creating interaction partial dependence plots
- Add function ‘predict_training_data_random_effects’ for predicting (= ‘estimating’) training data random effects (see the sketch after this list)
- [R-package][Python-package] Predict function: rename ‘raw_score’ argument to ‘pred_latent’ and unify the handling of Gaussian and non-Gaussian data
- (G)LMMs: better initialization of the intercept, change internal scaling of covariates, change default value of ‘lr_coef’ to 0.1
- Add ‘adam’ as an optimizer option
- Allow for grouped random coefficients without random intercept effects
- [R-package][Python-package] Nicer summary function
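A minimal sketch of the new function on simulated single-level grouped data; only ‘predict_training_data_random_effects’ itself is taken from this release note, the rest is illustrative:

```python
import numpy as np
import gpboost as gpb

# Simulate single-level grouped data: m groups, n observations
np.random.seed(1)
n, m = 1000, 100
group = np.arange(n) % m                       # group labels 0..m-1
b = 0.5 * np.random.normal(size=m)             # true random effects
y = b[group] + 0.1 * np.random.normal(size=n)  # response

# Fit a grouped random effects model
gp_model = gpb.GPModel(group_data=group, likelihood="gaussian")
gp_model.fit(y=y)
gp_model.summary()

# New in 0.7.7: predict (= estimate) the random effects for the training data
training_re = gp_model.predict_training_data_random_effects()
```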

0.7.1

- Make predictions faster and more memory efficient when there are multiple grouped random effects
- Set “nelder_mead” as the automatic fallback option if problems occur during optimization
- (Generalized) linear mixed effects models: internally scale the covariate data for the linear predictor when optimizing with gradient descent
- Add “bfgs” as an optimizer option (see the sketch after this list)
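A sketch of selecting the new optimizer for a linear mixed effects model; it assumes the covariance parameter optimizer is chosen via the ‘optimizer_cov’ key of ‘set_optim_params’ (check the GPBoost documentation for your version):

```python
import numpy as np
import gpboost as gpb

# Simulated grouped data with linear fixed effects
np.random.seed(1)
n, m = 500, 50
group = np.arange(n) % m
X = np.random.normal(size=(n, 2))
b = 0.5 * np.random.normal(size=m)
y = X[:, 0] + b[group] + 0.1 * np.random.normal(size=n)

gp_model = gpb.GPModel(group_data=group, likelihood="gaussian")
# Select the new "bfgs" optimizer; "nelder_mead" is used as the
# automatic fallback if problems occur during optimization
gp_model.set_optim_params(params={"optimizer_cov": "bfgs"})
gp_model.fit(y=y, X=X)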

0.6.7

- Add Grabit model / Tobit objective function (see the sketch after this list)
- Support calculation of approximate standard deviations of fixed effects coefficients in GLMMs
- [R-package] Add function for creating partial dependence plots (gpb.plot.partial.dependence)
- [R-package] Use R’s internal .Call function, correct function registration, use R’s internal error function, use R standard routines to access data in C++, move more finalizer logic into the C++ side, fix PROTECT/UNPROTECT issues, and limit exported symbols in the DLL
- [Python-package] Fix bug in scikit-learn wrapper for classification
- Change initialization and checking of the convergence criterion for the mode-finding algorithm of the Laplace approximation for non-Gaussian data
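A sketch of boosting with the new Tobit objective for censored data; the parameter names ‘yl’ / ‘yu’ (censoring bounds) and ‘sigma’ are assumptions based on the Grabit model, not taken from this release note:

```python
import numpy as np
import gpboost as gpb

# Simulate a latent response and censor it below at 0 and above at 2
np.random.seed(1)
n = 1000
X = np.random.normal(size=(n, 2))
y_latent = X[:, 0] - X[:, 1] + 0.5 * np.random.normal(size=n)
y = np.clip(y_latent, 0.0, 2.0)

# Grabit model: gradient tree boosting with a Tobit objective.
# 'yl', 'yu', and 'sigma' are assumed parameter names; check the
# GPBoost documentation for your version.
params = {"objective": "tobit", "yl": 0.0, "yu": 2.0, "sigma": 1.0,
          "learning_rate": 0.1, "verbose": 0}
train_set = gpb.Dataset(X, label=y)
bst = gpb.train(params=params, train_set=train_set, num_boost_round=100)
pred = bst.predict(X)
```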

0.6.0

- Add support for the Wendland covariance function and covariance tapering (see the sketch after this list)
- Add Nelder-Mead as a covariance parameter optimizer option
- Change the calculation of the gradient for the GPBoost algorithm and use permutations for Cholesky factors for non-Gaussian data
- Use permutations for Cholesky factors for Gaussian data when sparse matrices are used
- Make “gradient_descent” the default optimizer option also for Gaussian data
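A sketch of a Gaussian process model combining two additions from this release, the Wendland covariance function and the Nelder-Mead optimizer; ‘cov_fct_taper_range’ is an assumed parameter name for the taper range:

```python
import numpy as np
import gpboost as gpb

# Simulate spatial data on random 2-D coordinates
np.random.seed(1)
n = 200
coords = np.random.uniform(size=(n, 2))
y = np.sin(4 * coords[:, 0]) + 0.1 * np.random.normal(size=n)

# Compactly supported Wendland covariance; 'cov_fct_taper_range'
# (the taper range) is an assumed parameter name
gp_model = gpb.GPModel(gp_coords=coords, cov_function="wendland",
                       cov_fct_taper_range=0.5, likelihood="gaussian")
# Use Nelder-Mead for the covariance parameters
gp_model.set_optim_params(params={"optimizer_cov": "nelder_mead"})
gp_model.fit(y=y)
```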

0.5.0

- Add function in the R and Python packages for choosing tuning parameters via deterministic or random grid search (see the sketch after this list)
- Faster training and prediction for grouped random effects models for non-Gaussian data when there is only one grouping variable
- Faster training and prediction for Gaussian process models for non-Gaussian data when there are duplicate locations
- Faster prediction for grouped random effects models for Gaussian data when there is only one grouping variable
- Support pandas DataFrame and Series in the Python package
- Fix bug in the initialization of the score for the GPBoost algorithm for non-Gaussian data
- Add lightweight option for saving booster models with gp_models by not saving the raw data (this is the new default)
- Update Eigen to the newest version (commit b271110788827f77192d38acac536eb6fb617a0d)
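A sketch of the tuning-parameter search in the Python package; the release note does not name the function, so the name ‘grid_search_tune_parameters’, its signature, and the returned keys are assumptions based on recent versions of the package:

```python
import numpy as np
import gpboost as gpb

# Simulated data with one grouping variable
np.random.seed(1)
n, m = 500, 50
group = np.arange(n) % m
X = np.random.normal(size=(n, 2))
b = 0.5 * np.random.normal(size=m)
y = X[:, 0] + b[group] + 0.1 * np.random.normal(size=n)

gp_model = gpb.GPModel(group_data=group, likelihood="gaussian")
train_set = gpb.Dataset(X, label=y)

# Random grid search over 2 of the 2 x 2 = 4 parameter combinations
# with 4-fold cross-validation
param_grid = {"learning_rate": [0.1, 0.01], "max_depth": [3, 5]}
opt = gpb.grid_search_tune_parameters(
    param_grid=param_grid, params={"objective": "regression", "verbose": 0},
    num_try_random=2, nfold=4, train_set=train_set, gp_model=gp_model,
    num_boost_round=100, early_stopping_rounds=10)
# Assumed return keys; check the documentation for your version
print(opt["best_params"], opt["best_iter"], opt["best_score"])
```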

0.4.0

- Update the LightGBM part to version 3.1.1.99 (git commit 42d1633aebe124821cff42c728a42551db715168)
- Add support for a scikit-learn wrapper interface for GPBoost (see the sketch after this list)
- Change the initialization of the score (= tree ensemble) for non-Gaussian data for the GPBoost algorithm
- Add support for saving and loading models from file in the R and Python packages
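A sketch combining the two Python-side additions, the scikit-learn wrapper and model saving/loading; the estimator name ‘GPBoostRegressor’ and the ‘booster_’ attribute are assumptions based on recent versions of the package:

```python
import numpy as np
import gpboost as gpb

np.random.seed(1)
X = np.random.normal(size=(500, 2))
y = X[:, 0] - X[:, 1] + 0.1 * np.random.normal(size=500)

# scikit-learn style estimator (assumed name; check the docs
# for your version)
model = gpb.GPBoostRegressor(n_estimators=100, learning_rate=0.1)
model.fit(X, y)
pred = model.predict(X)

# Saving to and loading from file via the underlying booster
model.booster_.save_model("model.json")
bst = gpb.Booster(model_file="model.json")
```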
