CatBoost

Latest version: v1.2.7


Page 3 of 15

1.0.3

Not secure
CatBoost for Apache Spark
* Fix Linux `.so` libraries in deployed Maven artifacts (no code changes)

1.0.2

Not secure
CatBoost for Apache Spark
* PySpark: Fix python -> JVM `datetime.timedelta` conversion.
* Fix: proper handling of constant categorical features. 1867
* Fix SIGSEGV for Multiclassification with Ctrs. 1886

New features
* Add `is_min_optimal`, `is_max_optimal` for `BuiltinMetrics`. 1890

R package
* Use `libcatboostr-darwin.dylib` instead of `libcatboostr-darwin.so` on macOS. 1834

Bugfixes
* Fix `CatBoostError: (No such file or directory) bad new file name` when using `grid_search`. 1893

1.0.1

Not secure
> :warning: **PySpark support is broken in this release.**: Please use release 1.0.3 instead.

CatBoost for Apache Spark
* More robust handling of CatBoost Master and Workers failures, avoid freezes.
* Fix for empty partitions. 1687
* Fix use-after-free and other random errors. 1759
* Support Spark 3.1.

Python package
* Support python 3.10. 1575

Breaking changes
* Use group weight for generated pairs in pairwise losses

Bugfixes
* Switch to mimalloc allocator on Linux and macOS to avoid problems with static TLS.
* Fix SEGFAULTs on macOS. 1877
* Fix: Distributed training: do not fail if worker contains only learn or test data
* Fix SEGFAULT on CPU with Depthwise training and `rsm` < 1.
* Fix `calc_feature_statistics` for cat features. 1882
* Fix result of cv for metric_period case
* Fix `eval_metric` for Multitarget training

1.0.0

Not secure
In this release we decided to increment the major version, as we think CatBoost is ready for production usage. We know that CatBoost is used widely in many companies and individual projects, and it's not only "psychological" maturity - we think that all the features added over the last year and in the current release are worth a major version bump. And of course, like many programmers we love the magic of binary numbers, and we want to celebrate the 100₂ anniversary of CatBoost's first release on GitHub :)
New losses
* We've implemented a multilabel classification loss function that allows predicting multiple labels for each object. 1420
* Added a LogCosh loss implementation. 844

Fully distributed CatBoost for Apache Spark
* In this release our Apache Spark package became truly distributed - in previous versions CatBoost stored test datasets in the controller process memory; now test datasets are split evenly across workers.

Major speedup on CPU
* Speedup training on numeric datasets (480K rows, 60 features, 100 trees, binclass, 20% speedup on 16 cores Intel CPU 3.7s -> 2.9s)

R package
* Update C++ handles by reference to avoid redundant copies by david-cortes
* Avoid calculating groupwise feature importance: do not calculate feature importance for groupwise metrics by default
* R tests clear environment after runs so they won't find temporary data from previous runs
* Fixed a failure in R when a single feature was ignored
* Fix feature_count attribute with ignored_features

CV improvements
* Added support for text features and embeddings in crossvalidation mode
* We've changed the way crossvalidation works. Previously, CatBoost trained a small batch of trees on each fold and then switched to the next fold or the next batch of trees. In 1.0.0 CatBoost trains the full model on each fold instead. That reduces the memory and time overhead of starting a new batch - only one CPU-to-GPU memory copy is needed per fold, not per batch of trees. The mean-metric interactive plot is now unavailable until training finishes on all folds.
* **Important change** From now on, `use_best_model` and early stopping work independently on each fold, as we are trying to make single-fold training as close to regular training as possible. If one model stops at iteration `i`, we use its last value in the mean score plot for points in `[i+1; last iteration)`.

GPU improvements
* Fixed distributed training performance on Ethernet networks: ~2x training time speedup. For 2 hosts, 8 V100s per host, 10-gigabit Ethernet, 300 factors, 150M samples, 200 trees: 3300s -> 1700s
* Fixed a bug in the model-size-reg implementation on GPU that led to worse quality of the resulting model, especially in comparison to a model trained on CPU with equal parameters

Rust
* Enabled loading a model from a buffer in Rust, by manavsah

Bugfixes
* Fix for model predictions with text and embedding features
* Switch to TBB local executor to limit TLS size and avoid memory leakage 1835
* Switch to tcmalloc on Linux x86_64 to avoid a memory fragmentation bug in LFAlloc
* Fix for case of ignored text feature
* Fixed application of baseline in C++ code: the baseline is now added before applying activation functions and determining object labels.
* Fixes for scikit-learn compatibility validation 1783 and 1785
* Fix for thread_count = -1 in set_params(). Issue 1800
* Fix potential sigsegv in evaluator. Fixes 1809
* Fix slow (u)int8 & (u)int16 parsing as catfeatures. Fixes 718
* Adjust boost from average option before auto learning rate
* Fix embeddings with CrossEntropy mode 1654
* Fix object importance 1820
* Fix data provider without target 1827

0.26.1

Not secure
R package
* Supported text features in the R package
* Supported virtual Ensembles in R

New features
* Thanks to gmrandazzo for adding multiregression with missing values on targets - the `MultiRMSEWithMissingValues` loss function
* Supported multiclass prediction in C++ wrapper for model inference C API

Bugfixes
* Renamed keyword parameter in `predict_proba` function from `X` to `data`, fixes 1785
* R feature importances: remove pool argument, fix 1438 and 1772
* Fix CUDA training on Windows, multiple issues. main issue with details 1735
* Issue 1728: don't dereference pointers when there are no features
* Fixed empty tree processing in feature strength calculation
* Fixed missing loss graph points in select_features, 1775
* Sort csr matrix indices, fixes 1749
* Fix error "active CatBoost worker is already present in the current process" after previous training interruption or failure. 1795.
* Fixed erroneous warnings from models validation after training with custom loss or custom error function. Fixes 873 Fixes 1169

0.26

Not secure
New features
* 972. Add model evaluation on GPU. Thanks to rakalexandra.
* Support Langevin on GPU
* Save class labels to models in cross validation
* 1524. Return models after CV. Thanks to vklyukin
* [Python] 766. Add CatBoostRanker & pool.get_group_id_hash() for ranking. Thanks to AnnaAraslanova
* 262. Make CatBoost widget work in jupyter lab. Thanks to Dm17r1y
* [GPU only] Allow to add exponent to score aggregation function
* Allow to specify threshold parameter for binary classification model. Thanks to Keksozavr.
* [C Model API] 503. Allow to specify prediction type.
* [C Model API] 1201. Get predictions for a specific class.

Breaking changes
* Use CUDA 11 by default. CatBoost GPU now requires Linux x86_64 Driver Version >= 450.51.06 or Windows x86_64 Driver Version >= 451.82.

Losses and metrics
* Add MRR and ERR metrics on CPU.
* Add [LambdaMart](https://www.microsoft.com/en-us/research/publication/from-ranknet-to-lambdarank-to-lambdamart-an-overview/) loss.
* 1557. Add survivalAFT base logic. Thanks to blatr.
* 1286. Add Cox Proportional Hazards Loss. Thanks to fibersel.
* 1595. Provide object-oriented interface for setting up metric parameters. Thanks to ks-korovina.
* Change default YetiRank decay to 0.85 for better quality.

Python package
* 1372. Custom logging stream in python package. Thanks to DianaArapova.
* 1304. Callback after iteration functionality. Thanks to qoter.

R package
* 251. Train parameter synonyms. Thanks to ebalukova.
* 252. Add `eval_metrics`. Thanks to ebalukova.

Speedups
* [Python] Speed up custom metrics and objectives with `numba` (if available)
* [Python] 1710. Large speedup for cv dataset splitting by sklearn splitter

Other
* Use Exact leaves estimation method as default on GPU
* [Spark] 1632. Update version of Scala 2.11 for security reasons.
* [Python] 1695. Explicitly specify WHEEL 'Root-Is-Purelib' value

Bugfixes
* Fix default projection dimension for embeddings
* Fix `use_weights` for some eval_metrics on GPU - `use_weights=False` is always respected now
* [Spark] 1649. The earlyStoppingRounds parameter is not recognized
* [Spark] 1650. Error when using the autoClassWeights parameter
* [Spark] 1651. Error about "Auto-stop PValue" when using odType "Iter" and odWait
* Fix usage of pairlogit weights for CPU fallback metrics when training on GPU


© 2024 Safety CLI Cybersecurity Inc. All Rights Reserved.