XGBoost

Latest version: v2.0.3

0.71

* This is a minor release, mainly motivated by `pip install` issues, e.g. 2426, 3189, 3118, and 3194.
With this release, users on Linux and macOS should be able to install XGBoost via `pip install` in most cases.
* Refactored linear booster class (`gblinear`), so as to support multiple coordinate descent updaters (3103, 3134). See BREAKING CHANGES below.
* Fix slow training for multiclass classification with a large number of classes (3109)
* Fix a corner case in the approximate quantile sketch (3167); applies to the 'hist' and 'gpu_hist' algorithms
* Fix memory leak in DMatrix (3182)
* New functionality
- Better linear booster class (3103, 3134)
- Pairwise SHAP interaction effects (3043)
- Cox loss (3043)
- AUC-PR metric for ranking task (3172)
- Monotonic constraints for 'hist' algorithm (3085)
* GPU support
- Create an abstract 1D vector class that moves data seamlessly between main and GPU memory (2935, 3116, 3068), eliminating unnecessary PCIe data transfers during training
- Fix minor bugs (3051, 3217)
- Fix compatibility error for CUDA 9.1 (3218)
* Python package:
- Correctly handle parameter `verbose_eval=0` (3115)
* R package:
- Eliminate segmentation fault on 32-bit Windows platform (2994)
* JVM packages
- Fix a memory bug involving double-freeing Booster objects (3005, 3011)
- Handle empty partition in predict (3014)
- Update docs and unify terminology (3024)
- Delete cache files after job finishes (3022)
- Compatibility fixes for latest Spark versions (3062, 3093)
* BREAKING CHANGES: Updated linear modelling algorithms. In particular, L1/L2 regularisation penalties are now normalised by the number of training examples, making the implementation consistent with sklearn/glmnet. L2 regularisation has also been removed from the intercept. To reproduce the old regularisation behaviour, manually scale the alpha/lambda regularisation parameters by dividing them by the number of training examples.
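
The migration described above amounts to a simple rescaling. As a minimal sketch (the helper name and variables below are hypothetical, not part of the XGBoost API):

```python
def rescale_for_old_behaviour(old_alpha, old_lambda, n_train):
    """Rescale pre-0.71 gblinear regularisation parameters.

    From 0.71 onwards the L1/L2 penalties are normalised by the number
    of training examples, so dividing the old values by n_train
    reproduces the previous effective penalty strength.
    """
    return old_alpha / n_train, old_lambda / n_train

# e.g. with 1000 training rows, old alpha=1.0 and lambda=10.0 become:
alpha, lam = rescale_for_old_behaviour(1.0, 10.0, 1000)
# alpha -> 0.001, lam -> 0.01; pass these as `alpha`/`lambda` to gblinear.
```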

0.47

* Changes in R library
- fixed a possible issue with Poisson regression.
- switched from 0 to NA for missing values.
- exposed access to additional model parameters.
* Changes in Python library
- throws an exception instead of crashing the terminal when a parameter error occurs.
- has importance plot and tree plot functions.
- accepts different learning rates for each boosting round.
- allows model training continuation from previously saved model.
- allows early stopping in CV.
- allows feval to return a list of tuples.
- allows eval_metric to handle additional formats.
- improved compatibility in sklearn module.
- added additional parameters for the sklearn wrapper.
- added pip installation functionality.
- supports more Pandas DataFrame dtypes.
- added best_ntree_limit attribute, in addition to best_score and best_iteration.
* Java API is ready for use
* Added more test cases and continuous integration to make each build more robust.
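
The per-round learning-rate support mentioned above can be sketched with a simple decay schedule (a hypothetical helper; in releases of this era the Python API accepted such a list via the `learning_rates` argument of `xgb.train`, but check your version):

```python
def decayed_learning_rates(base_eta, num_rounds, decay=0.99):
    # One learning rate per boosting round, geometrically decayed.
    return [base_eta * decay ** i for i in range(num_rounds)]

rates = decayed_learning_rates(0.3, 5)
# Hedged usage (assumption: old-style API):
# bst = xgb.train(params, dtrain, num_boost_round=5, learning_rates=rates)
```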

0.8.0

- Updated XGBoost to 2.0.0
- Dropped support for Ruby < 3

0.7.3

- Fixed error with `dup` and `clone`

0.7.2

- Updated XGBoost to 1.7.5
- Added musl shared library for Linux
- Improved error message for invalid matrix

0.7.1

- Updated XGBoost to 1.7.0
