GPBoost

Latest version: v1.5.5


0.3.0

- Add support for non-Gaussian data (loss functions other than the L2 loss). Currently supported: binary data, Poisson data, and gamma distributed data
- Changed the default value for 'use_gp_model_for_validation' from False to True
- Covariance parameter estimation: add a safeguard against too large steps also when using Nesterov acceleration
- Changed the default value for 'use_nesterov_acc' from False to True. This is only relevant for gradient descent based covariance parameter estimation; for Gaussian data (everything the library could handle before version 0.3.0), Fisher scoring (aka natural gradient descent) is used by default, for which this setting plays no role
- Changed the default values for gradient descent based covariance parameter estimation: 'lr_cov=0.1' (previously 0.01), 'lr_coef=0.1' (previously 0.01), 'acc_rate_coef=0.5' (previously 0.1). As above, this is not relevant when Fisher scoring is used (the default for Gaussian data)
- Moved 'std_dev' from being a standalone argument of a GPModel's 'fit' function to being part of the 'params' argument of 'fit'
- Removed the boosting parameter 'has_gp_model' (not visible to most users)
- Moved storage of the optimizer parameters 'optimizer_cov' and 'init_cov_pars' from R/Python to C++ only (not visible to the user)
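The interplay of the items above (Nesterov acceleration with the new defaults 'lr_cov=0.1' and acceleration rate 0.5, plus the step safeguard) can be sketched in plain Python. This is an illustrative toy optimizer, not GPBoost's internal C++ code; the function and variable names are made up for the example.

```python
# Illustrative sketch (NOT GPBoost's implementation): gradient descent with
# Nesterov acceleration and a step-halving safeguard against too large steps,
# using defaults analogous to lr=0.1 and acc_rate=0.5 from the changelog.
def minimize(f, grad, x0, lr=0.1, acc_rate=0.5, max_iter=200, tol=1e-10):
    x, x_prev = x0, x0
    for _ in range(max_iter):
        # Nesterov momentum: evaluate the gradient at an extrapolated point.
        x_acc = x + acc_rate * (x - x_prev)
        g = grad(x_acc)
        step = lr
        x_new = x_acc - step * g
        # Safeguard: halve the step until the objective no longer increases.
        while f(x_new) > f(x) and step > 1e-12:
            step *= 0.5
            x_new = x_acc - step * g
        converged = abs(f(x) - f(x_new)) < tol
        x_prev, x = x, x_new
        if converged:
            break
    return x

# Example: minimize a simple quadratic stand-in for a negative log-likelihood.
x_opt = minimize(lambda x: (x - 3.0) ** 2, lambda x: 2.0 * (x - 3.0), x0=0.0)
```

The safeguard is what lets a comparatively aggressive learning rate like 0.1 be a safe default: any step that would increase the objective is shrunk before being accepted.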

0.2.0

- GPModel: changed the default convergence criterion for model fitting to the relative change in the negative log-likelihood
- GPModel: added a safeguard against too large steps (step halving) for gradient descent and Fisher scoring (without Nesterov acceleration) when doing model fitting
- Add support for R version 4.0
- GPModel: faster initialization of GPModel for grouped data when the grouping data is not ordered
- GPModel: faster model fitting for grouped data due to changes in the use of the Woodbury identity
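To see why the Woodbury identity speeds up fitting for grouped data: the n x n covariance matrix of a grouped random effects model, sigma2_e*I + sigma2_b*Z@Z.T, can be inverted via an m x m solve, where m is the number of groups and typically m << n. A minimal NumPy sketch (illustrative, not GPBoost code; all names are made up):

```python
# Woodbury identity for a grouped random effects covariance matrix:
# (A + U C U^T)^-1 = A^-1 - A^-1 U (C^-1 + U^T A^-1 U)^-1 U^T A^-1,
# here with A = sigma2_e*I (n x n) and U = Z, C = sigma2_b*I (m x m).
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 5
sigma2_e, sigma2_b = 1.0, 0.5

# Unordered grouping data (as in the changelog entry above) -> incidence matrix Z.
groups = rng.integers(0, m, size=n)
Z = np.zeros((n, m))
Z[np.arange(n), groups] = 1.0

Sigma = sigma2_e * np.eye(n) + sigma2_b * Z @ Z.T

# Direct O(n^3) inverse of the full n x n matrix.
Sigma_inv_direct = np.linalg.inv(Sigma)

# Woodbury: only an m x m matrix ("inner") needs to be inverted.
inner = np.eye(m) / sigma2_b + (Z.T @ Z) / sigma2_e
Sigma_inv_woodbury = (np.eye(n) - Z @ np.linalg.inv(inner) @ Z.T / sigma2_e) / sigma2_e

assert np.allclose(Sigma_inv_direct, Sigma_inv_woodbury)
```

Because Z.T @ Z is diagonal for a single grouping factor (it just counts group sizes), the m x m solve is itself trivial, which is the source of the speed-up mentioned above.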

0.1.0

Major changes

* use the Woodbury identity for grouped random effects models -> faster inference for grouped random effects / mixed effects models
* no profiling out of the error variance for Fisher scoring -> faster learning of covariance parameters
* add functionality for evaluating the negative log-likelihood
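The quantity the last item refers to is the standard Gaussian negative log-likelihood. A hedged sketch of that formula in NumPy (this mirrors the textbook definition, not GPBoost's internal implementation; the function name is made up):

```python
# Gaussian negative log-likelihood for a zero-mean model with covariance Sigma:
# nll = 0.5 * (n*log(2*pi) + log|Sigma| + y^T Sigma^-1 y)
import numpy as np

def gaussian_neg_log_likelihood(y, Sigma):
    n = len(y)
    _, logdet = np.linalg.slogdet(Sigma)       # numerically stable log-determinant
    quad = y @ np.linalg.solve(Sigma, y)       # y^T Sigma^-1 y without forming the inverse
    return 0.5 * (n * np.log(2.0 * np.pi) + logdet + quad)

# Example: for Sigma = I this reduces to 0.5 * (n*log(2*pi) + sum(y**2)).
y = np.array([0.5, -1.0, 2.0])
nll = gaussian_neg_log_likelihood(y, np.eye(3))
```

Being able to evaluate this quantity directly is useful for convergence monitoring and model comparison, which ties in with the 0.2.0 change that made the relative change in the negative log-likelihood the default convergence criterion.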
