Tune-sklearn

Latest version: v0.5.0


0.4.1

Releasing a new version of tune-sklearn! This version should be compatible with both Ray master and Ray 1.6.

This is a maintenance release without any public-facing changes.

Changelog:
* Refactor `TuneBaseSearchCV.fit` (https://github.com/ray-project/tune-sklearn/pull/215)
* Use Tune search spaces for Optuna (https://github.com/ray-project/tune-sklearn/pull/216)

0.4.0

Releasing a new version of tune-sklearn! This version should be compatible with both Ray master and Ray 1.4.

Changelog:
* You can now pass any `tune.run` params in `fit` using the `tune_params` argument (https://github.com/ray-project/tune-sklearn/pull/212)
* Fixed an exception in `_is_param_distributions_all_tune_domains` (https://github.com/ray-project/tune-sklearn/pull/209)
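
Conceptually, `tune_params` is a pass-through: whatever dictionary you give `fit` is forwarded as keyword arguments to `tune.run`. A minimal pure-Python sketch of that forwarding pattern (the `run` and `fit` functions below are hypothetical stand-ins, not tune-sklearn's actual code):

```python
def run(trainable, num_samples=1, name=None, verbose=0):
    # Hypothetical stand-in for tune.run: just echoes the settings it received.
    return {"num_samples": num_samples, "name": name, "verbose": verbose}

def fit(trainable, tune_params=None):
    # Mirrors the idea of fit(...) forwarding extra settings to tune.run.
    return run(trainable, **(tune_params or {}))

result = fit("my_trainable", tune_params={"num_samples": 8, "name": "demo"})
```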

0.3.0

Releasing a new version of tune-sklearn! This version should be compatible with both Ray master and Ray 1.3.

Changelog:
* Allow any Ray Tune searcher to be passed in (#198) (thanks, Yard1!)
* You can now specify a `name` in your run args (#200) (thanks, rspeare!)
* Optimization: estimators can be passed via object store IDs (#196)
* Change BaseEstimator to BaseSearchCV (#192) (thanks, timvink!)
* Fix groups for cross-validation (#191)
* Fix: random search with Tune search spaces previously failed to produce multiple trials (#180)
* Update code snippets in README to showcase Ray Tune search spaces (#176) (thanks, mkretsch327!)
* Change a positional argument to a keyword argument in pipeline `partial_fit` (#173)

0.2.1

* Adds support for scikit-learn 0.24
* Fixes an issue with gradient boosted models and pipelines

0.2.0

New Features:
* tune-sklearn now supports sampling with Optuna! (#136, #132)
* You can now do deadline-based hyperparameter tuning with the new `time_budget_s` parameter (#134)
* Custom logging can be done by passing in loggers as strings (`TuneSearchCV(loggers=["json", "tensorboard"])`) (#100)
* Experiments can be made reproducible with a `seed` parameter that makes initial configuration sampling deterministic (#140)
* Custom stopping (such as stopping a hyperparameter search upon plateau) is now supported (#156)
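
Deadline-based tuning boils down to checking a wall-clock deadline before launching each new trial. A minimal pure-Python sketch of the idea (this is an illustration of the mechanism, not tune-sklearn's implementation; `tune_with_budget` and its arguments are invented for the example):

```python
import time

def tune_with_budget(configs, evaluate, time_budget_s):
    """Evaluate candidate configs until the wall-clock budget is spent,
    keeping the best score seen so far."""
    deadline = time.monotonic() + time_budget_s
    best_cfg, best_score = None, float("-inf")
    for cfg in configs:
        if time.monotonic() >= deadline:
            break  # budget exhausted: stop launching new evaluations
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Score peaks at cfg == 2; a generous 5-second budget covers all three configs.
best_cfg, best_score = tune_with_budget([1, 2, 3], lambda c: -abs(c - 2), 5.0)
```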

Improvements:
* Support for Tune search spaces (#128)
* Use fractional GPUs on a Ray cluster (#145)
* Bring the API in line with scikit-learn: `best_params` is accessible without `refit=True` (#114)
* Early stopping support for scikit-learn Pipelines, LightGBM, and CatBoost (#103, #109)
* Implement resource step for early stopping (#121)
* Raise errors on trial failures instead of logging them (#130)
* Remove unnecessary dependencies (#152)


Bug fixes:
* Refactor early stopping case handling in `_train` (#97)
* Fix warm-start errors (#106)
* Fix HyperOpt `loguniform` params (#104)
* Fix multi-metric scoring issue (#111)
* BOHB sanity checks (#133)
* Avoid Loky pickle error (#150)


Special thanks to: krfricke, amogkam, Yard1, richardliaw, inventormc, mattKretschmer

0.1.0

See the most up-to-date version of the documentation at https://docs.ray.io/en/master/tune/api_docs/sklearn.html (corresponding to the master branch).

Highlights

These release notes contain all updates since tune-sklearn==0.0.7.

* `tune-sklearn` now supports multiple search algorithms (including TPE from HyperOpt and BOHB). Thanks Yard1!
* `tune-sklearn` now supports iterative training for XGBoost (by iteratively increasing the number of rounds) and most models that have `warm_start` capabilities. This is only enabled if `early_stopping=True`.
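
The idea behind iterative training: with `warm_start`, each `fit` call continues from the previous state, so training can proceed in small increments and stop early once the score plateaus. A toy sketch of that loop (the class and function below are hypothetical illustrations, not tune-sklearn code):

```python
class WarmStartModel:
    """Toy estimator with warm-start semantics: each fit() call adds
    more boosting rounds instead of retraining from scratch."""
    def __init__(self):
        self.n_rounds = 0

    def fit(self, n_more_rounds):
        self.n_rounds += n_more_rounds
        return self

    def score(self):
        # Invented score curve: improves with rounds, then plateaus at 1.0.
        return min(1.0, 0.1 * self.n_rounds)

def train_with_early_stopping(model, step=2, patience=2, max_iters=50):
    """Train in increments of `step` rounds; stop after `patience`
    consecutive iterations without improvement."""
    best, stale = float("-inf"), 0
    for _ in range(max_iters):
        score = model.fit(step).score()
        if score > best:
            best, stale = score, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return best

model = WarmStartModel()
best = train_with_early_stopping(model)
```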


Other notes:
* The Ray Dashboard is disabled by default. This should reduce error messages.
* `n_iter` has been renamed to `n_trials` to avoid confusion.
* Multi-metric scoring is now supported
* You can set `local_mode` to run everything on a single process. This can be faster in some cases.

List of changes

* Update setup.py to remove sklearn version control (#96)
* [travis] try-fast-build (#95)
* Travis fix (#94)
* [docs] Fix docs and build to avoid regression (#92)
* Warm start for ensembles (#90)
* Explicitly pass `mode=max` to schedulers (#91)
* Enable scikit-optimize again (#89)
* Multi-metric scoring (#62)
* Early stopping for XGBoost + update README (#63)
* Fix BOHB, change `n_iter` -> `n_trials`, fix up early stopping (#81)
* Disable the Ray Dashboard (#82)
* Provide local install command (#78)
* Use warm start for early stopping (#46)
* Fix condition in `_fill_config_hyperparam` (#76)
* Enable local mode + forward compat (#74)
* Add a missing space in README (#69)
* New search algorithms (#68)
* Fix resources per trial (#52)

Thanks to inventormc, Yard1, holgern, krfricke, and richardliaw for contributing!
