GPBoost

Latest version: v1.5.5


1.5.5

- reduce memory footprint of 'fitc' and 'full_scale_tapering' GP approximations
- add 't' and 't_fix_df' likelihoods
- add 'num_parallel_threads' parameter so that users can control the number of parallel CPU threads
- [R-package] make 'gpb.grid.search.tune.parameters' and 'tune.pars.bayesian.optimization' more robust
- [python-package] make 'tune_pars_TPE_algorithm_optuna' more robust
- [R-package][python-package] fix bug in handling of the 'line_search_step_length' parameter in the hyperparameter tuning functions
- add option gp_approx = 'vecchia_latent'
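
The new 't' and 't_fix_df' likelihoods target heavy-tailed response data; 't_fix_df' keeps the degrees of freedom fixed while 't' estimates them. As a minimal pure-Python sketch of the Student-t log-density such a likelihood evaluates (the function name and this standalone form are illustrative, not GPBoost's internal API):

```python
import math

def student_t_log_density(y, loc, scale, df):
    """Log-density of a Student-t distribution with `df` degrees of freedom.

    With 't_fix_df', `df` stays at a fixed value; with 't' it becomes an
    additional parameter to estimate.
    """
    z = (y - loc) / scale
    return (math.lgamma((df + 1.0) / 2.0)
            - math.lgamma(df / 2.0)
            - 0.5 * math.log(df * math.pi)
            - math.log(scale)
            - (df + 1.0) / 2.0 * math.log1p(z * z / df))

# Heavier tails than a Gaussian: the t density decays only polynomially.
print(math.exp(student_t_log_density(0.0, 0.0, 1.0, 1.0)))  # Cauchy at 0 -> 1/pi ~ 0.3183
```

The polynomial tail decay is what makes the t likelihood robust to outliers relative to a Gaussian likelihood.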

1.5.4

- support Matern covariance functions with general shape parameters ('cov_fct_shape'), not just 0.5, 1.5, and 2.5
- reduce memory footprint of 'vecchia' approximation
- make 'fitc' and 'full_scale_tapering' approximations numerically more stable
- [python-package] add function 'tune_pars_TPE_algorithm_optuna' for choosing tuning parameters with the Tree-structured Parzen Estimator algorithm
- [R-package] add function 'tune.pars.bayesian.optimization' for choosing tuning parameters using Bayesian optimization and the 'mlrMBO' R package
- change default initial values for marginal variances when there are multiple random effects
- [R-package] fix bug in 'gpb.grid.search.tune.parameters' for the 'metric' parameter for metrics where higher is better (auc, average_precision)
- other bug fixes
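
For the half-integer shapes 0.5, 1.5, and 2.5 that were already supported, the Matern covariance has simple closed forms; general shape parameters require the modified Bessel function, which is what makes them harder to support. A pure-Python sketch of the closed-form cases (function name illustrative, not GPBoost's internal code):

```python
import math

def matern(d, rho, shape):
    """Matern covariance (unit variance) at distance d with range rho.

    Closed forms exist for the half-integer shapes 0.5, 1.5, 2.5; general
    shapes need the modified Bessel function K_nu (e.g. scipy.special.kv).
    """
    if d == 0.0:
        return 1.0
    if shape == 0.5:
        return math.exp(-d / rho)
    if shape == 1.5:
        s = math.sqrt(3.0) * d / rho
        return (1.0 + s) * math.exp(-s)
    if shape == 2.5:
        s = math.sqrt(5.0) * d / rho
        return (1.0 + s + s * s / 3.0) * math.exp(-s)
    raise NotImplementedError("general shapes need Bessel functions")

# Larger shape -> smoother process -> higher correlation at short range.
print(matern(0.5, 1.0, 0.5), matern(0.5, 1.0, 2.5))
```

The shape parameter controls the mean-square differentiability of the process, so being able to tune it freely (rather than picking from three values) matters for smoothness-sensitive applications.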

1.4.0

- support space-time ('matern_space_time') and anisotropic ARD ('matern_ard', 'gaussian_ard') covariance functions
- support 'negative_binomial' likelihood
- support FITC, aka modified predictive process, approximation ('fitc') and full scale approximation with tapering ('full_scale_tapering') with 'cholesky' decomposition and 'iterative' methods
- add optimizer_cov option 'lbfgs' and make it the default for (generalized) linear mixed effects models
- faster prediction for multiple grouped random effects and non-Gaussian likelihoods
- allow for duplicate locations / coordinates in the Vecchia approximation for non-Gaussian likelihoods
- support Vecchia approximation for space-time and ARD covariance functions with correlation-based neighbor selection
- support offset in GLMMs
- add safeguard against too large step sizes for linear regression coefficients
- change default initial values for (i) the (marginal) variance and error variance to var(y)/2 for Gaussian likelihoods and (ii) range parameters such that the effective range is half the average distance
- add backtracking line search for mode finding in the Laplace approximation
- add option 'reuse_learning_rates_gp_model' for the GPBoost algorithm for faster learning
- add option 'line_search_step_length' for the GPBoost algorithm; this corresponds to the optimal choice of boosting learning rate as in, e.g., Friedman (2001)
- support optimizer_coef = 'wls' when optimizer_cov = 'lbfgs' for Gaussian likelihoods, and make this the default
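
A backtracking line search, as added here for mode finding in the Laplace approximation, shrinks a trial step until the objective actually decreases (the Armijo condition). A generic pure-Python sketch of the technique, not GPBoost's internal implementation:

```python
def backtracking_line_search(f, grad, x, direction, step=1.0,
                             shrink=0.5, c=1e-4, max_iter=50):
    """Shrink `step` until f(x + step*d) satisfies the Armijo decrease
    condition f(x + s*d) <= f(x) + c * s * <grad, d>."""
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad, direction))
    for _ in range(max_iter):
        x_new = [xi + step * di for xi, di in zip(x, direction)]
        if f(x_new) <= fx + c * step * slope:
            return step, x_new
        step *= shrink
    return step, x_new

# Minimize f(x) = x^2 starting at x = 3 with a deliberately huge step.
f = lambda v: v[0] ** 2
x = [3.0]
grad = [2 * x[0]]          # gradient of f at x
direction = [-grad[0]]     # steepest descent direction
step, x_new = backtracking_line_search(f, grad, x, direction, step=10.0)
print(step, x_new)
```

Without the backtracking, the initial step of 10 would overshoot the minimum badly; the halving loop accepts the first step that yields sufficient decrease.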

1.2.5

- support iterative methods for the Vecchia-Laplace approximation (non-Gaussian data and gp_approx = 'vecchia')
- faster model construction and prediction for compactly supported covariance functions
- add metric 'test_neg_log_likelihood'
- change handling of the 'objective' parameter for the GPBoost algorithm: only 'likelihood' in 'GPModel()' needs to be set
- change API for parameters 'vecchia_pred_type' and 'num_neighbors_pred'
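
Iterative methods of this kind replace Cholesky factorizations with solvers such as the conjugate gradient method, which only needs matrix-vector products and thus avoids forming or factoring the full matrix. A minimal CG sketch for a symmetric positive definite system (illustrative, not GPBoost's implementation):

```python
def conjugate_gradient(matvec, b, tol=1e-10, max_iter=100):
    """Solve A x = b for symmetric positive definite A, given only the
    matrix-vector product v -> A v."""
    x = [0.0] * len(b)
    r = list(b)                      # residual b - A x, with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Small SPD system: A = [[4, 1], [1, 3]], b = [1, 2]
A = [[4.0, 1.0], [1.0, 3.0]]
matvec = lambda v: [sum(a * vi for a, vi in zip(row, v)) for row in A]
print(conjugate_gradient(matvec, [1.0, 2.0]))
```

For an n x n system, each CG iteration costs one matrix-vector product, which is why such methods scale better than an O(n^3) factorization when the product is cheap (sparse or structured matrices, as with a Vecchia approximation).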

1.0.1

- faster gradient calculation for
1. Multiple / multilevel grouped random effects for non-Gaussian likelihoods
2. GPs with Vecchia approximation for non-Gaussian likelihoods
3. GPs with compactly supported covariance functions / tapering

- enable estimation of the shape parameter in the gamma likelihood
- predict_training_data_random_effects: enable for Vecchia approximation and enable calculation of variances
- change API for Vecchia approximation and tapering
- correct nearest neighbor search for Vecchia approximation
- show GPModel parameters on the original and not the transformed scale when trace = true
- change initial intercept for bernoulli_probit, gamma, and poisson likelihoods
- change default value of 'delta_rel_conv' to 1e-8 for nelder_mead
- avoid unrealistically large learning rates in gradient descent
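
Covariance parameters are positive, so they are typically optimized on the log scale; capping the step there bounds the multiplicative change of the parameter per iteration. A hedged sketch of the idea (the function and the cap value are illustrative, not GPBoost's internals):

```python
import math

def capped_log_scale_step(theta, gradient, lr, max_log_step=2.0):
    """Gradient step for a positive parameter taken on the log scale, with
    the step capped so theta changes by at most a factor of
    exp(max_log_step) per iteration."""
    step = -lr * gradient
    step = max(-max_log_step, min(max_log_step, step))
    return theta * math.exp(step)

# An extreme gradient would multiply theta by exp(-50) without the cap;
# with it, theta shrinks by at most a factor of exp(-2) per iteration.
print(capped_log_scale_step(1.0, 50.0, 1.0))
```

Small gradients are unaffected by the cap, so well-behaved optimization paths are unchanged; only runaway steps are truncated.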

0.8.0

- cap too large gradient descent steps on the log scale for covariance parameters
- GLMMs: reset small learning rates for covariance parameters and regression parameters if the other parameters change
- add gaussian_neg_log_likelihood as a validation metric
- add function 'get_nested_categories' for nested grouped random effects
- prediction: remove nugget variance from predictive (co)variances when predict_response = false for Gaussian likelihoods
- set default value of predict_response to true in the prediction function of GPModel
- NAs and Infs are no longer allowed in the label
- fix predictions when using a Vecchia approximation for non-Gaussian likelihoods
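
Nested grouped random effects (e.g., classes within schools) need group labels where each inner category is distinct across outer groups, so that class "A" in school 1 and class "A" in school 2 get separate random effects. A plausible pure-Python sketch of what such a helper does; GPBoost's actual 'get_nested_categories' may differ in signature and output format:

```python
def nested_categories(outer, inner):
    """Combine outer and inner group labels so that each inner category
    is distinct across outer groups."""
    return [f"{o}/{i}" for o, i in zip(outer, inner)]

schools = [1, 1, 2, 2]
classes = ["A", "B", "A", "B"]
print(nested_categories(schools, classes))  # ['1/A', '1/B', '2/A', '2/B']
```

Without this relabeling, the two "A" classes would be pooled into one random effect even though they belong to different schools.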
