FLAML

Latest version: v2.3.3

0.9.1

This release contains several feature improvements and bug fixes. For example:
* support for a custom data splitter (see the sketch after this list).
* the evaluation_function can receive the incumbent result during local search and perform domain-specific early stopping by comparing against it: as soon as the comparison outcome (better or worse) is known, the evaluation can stop.
* support and automation of Hugging Face metrics.
* fall back to CFO in `tune.run` if BlendSearch is not installed.
* fixed a bug when modifying `n_estimators` to satisfy constraints.
* new documentation website.
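
A minimal sketch of the custom data splitter support, assuming `split_type` accepts an sklearn splitter object and that `groups` is forwarded to it (the toy data and group labels are made up for illustration):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.model_selection import GroupKFold
from flaml import AutoML

X, y = load_diabetes(return_X_y=True)
groups = np.random.randint(0, 5, size=len(y))  # hypothetical group labels

automl = AutoML()
automl.fit(
    X, y,
    task="regression",
    time_budget=30,
    eval_method="cv",
    split_type=GroupKFold(n_splits=5),  # custom splitter object instead of a string
    groups=groups,                      # group labels consumed by GroupKFold
)
print(automl.best_estimator, automl.best_config)
```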

What's Changed
* Update flaml_pytorch_cifar10.ipynb by sonichi in https://github.com/microsoft/FLAML/pull/328
* adding HF metrics by liususan091219 in https://github.com/microsoft/FLAML/pull/335
* train at least one iter when not trained by sonichi in https://github.com/microsoft/FLAML/pull/336
* use cfo in tune.run if bs is not installed by sonichi in https://github.com/microsoft/FLAML/pull/334
* Makes the evaluation_function could receive the incumbent best result as input in Tune by Shao-kun-Zhang in https://github.com/microsoft/FLAML/pull/339
* support for customized splitters by wuchihsu in https://github.com/microsoft/FLAML/pull/333
* Deploy a new doc website by sonichi, qingyun-wu and Shao-kun-Zhang in https://github.com/microsoft/FLAML/pull/338
* version update by sonichi in https://github.com/microsoft/FLAML/pull/341

New Contributors
* Shao-kun-Zhang made their first contribution in https://github.com/microsoft/FLAML/pull/339

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v0.9.0...v0.9.1

0.9.0

1. Revise the flaml.tune API (a usage sketch follows below)
- Add a "scheduler" argument (a user can choose "flaml", "asha", or a customized scheduler)
- Rename "prune_attr" to "resource_attr"
- Rename "training_function" to "evaluation_function"
- Remove the "report_intermediate_result" argument (now covered by "scheduler")
- Add tests for the supported schedulers
- Re-run notebooks that use schedulers

2. Add `save_best_config()` to save the best config to a JSON file
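
A usage sketch of the revised `flaml.tune` API under these renames; the objective function and search space are made up, and it assumes the "flaml" scheduler injects the resource value into the config via `resource_attr`:

```python
from flaml import tune

def evaluation_function(config):
    # "sample_size" is injected by the scheduler via resource_attr
    resource = config["sample_size"]
    loss = (config["x"] - 3) ** 2 + 1.0 / resource
    return {"loss": loss}

analysis = tune.run(
    evaluation_function,               # renamed from "training_function"
    config={"x": tune.uniform(0, 10)},
    metric="loss",
    mode="min",
    num_samples=50,
    time_budget_s=10,
    scheduler="flaml",                 # "flaml", "asha", or a custom scheduler
    resource_attr="sample_size",       # renamed from "prune_attr"
    min_resource=100,
    max_resource=10000,
)
print(analysis.best_config)
```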

What's Changed
* add save_best_config() by sonichi in https://github.com/microsoft/FLAML/pull/324
* tune api for schedulers by qingyun-wu in https://github.com/microsoft/FLAML/pull/322
* add __init__.py in nlp by sonichi in https://github.com/microsoft/FLAML/pull/325
* rename training_function by qingyun-wu in https://github.com/microsoft/FLAML/pull/327


**Full Changelog**: https://github.com/microsoft/FLAML/compare/v0.8.2...v0.9.0

0.8.2

What's Changed
* include default value in rf search space by sonichi in https://github.com/microsoft/FLAML/pull/317
* adding TODOs for NLP module, so students can implement other tasks easier by liususan091219 in https://github.com/microsoft/FLAML/pull/321
* pred_time_limit clarification and logging by sonichi in https://github.com/microsoft/FLAML/pull/319
* bug fix in confg2params by sonichi in https://github.com/microsoft/FLAML/pull/323


**Full Changelog**: https://github.com/microsoft/FLAML/compare/v0.8.1...v0.8.2

0.8.1

What's Changed
* Update test_regression.py by fengsxy in https://github.com/microsoft/FLAML/pull/306
* Add conda forge minimal test by MichalChromcak in https://github.com/microsoft/FLAML/pull/309
* fixing config2params for transformersestimator by liususan091219 in https://github.com/microsoft/FLAML/pull/316
* Code quality improvement based on 275 by abnsy and sonichi in https://github.com/microsoft/FLAML/pull/313
* skip cv preparation if eval_method is holdout by sonichi in https://github.com/microsoft/FLAML/pull/314

New Contributors
* fengsxy made their first contribution in https://github.com/microsoft/FLAML/pull/306
* abnsy made their first contribution in https://github.com/microsoft/FLAML/pull/313

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v0.8.0...v0.8.1

0.8.0

In this release, we add two NLP tasks to `flaml.AutoML`: sequence classification and sequence regression, using transformer-based neural networks. Previously the NLP module was detached from `flaml.AutoML` and had a separate API. We redesigned the API so that the NLP tasks are accessed through the same interface as the other tasks, and adding more NLP tasks in the future will be easy. Thanks for the hard work, liususan091219!
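
A minimal sketch of the new interface, assuming a pandas DataFrame with a single text column and the optional NLP dependencies installed (the column name and toy data are illustrative, and transformer-specific settings are omitted):

```python
import pandas as pd
from flaml import AutoML

X_train = pd.DataFrame({"sentence": ["great movie", "terrible plot", "loved it", "boring"]})
y_train = pd.Series([1, 0, 1, 0])

automl = AutoML()
automl.fit(
    X_train, y_train,
    task="seq-classification",   # sequence regression uses task="seq-regression"
    time_budget=300,
    metric="accuracy",
)
```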

We've also continued to make performance and feature improvements. Examples:
* We added a variation of the XGBoost search space that uses a limited `max_depth` and includes the default configuration from the XGBoost library. The new search space leads to significantly better performance on some regression datasets.
* We allow arguments for `flaml.AutoML` to be passed to the constructor. This enables multioutput regression by combining sklearn's MultiOutputRegressor with flaml's AutoML, as sketched below.
* We made further memory optimizations while still allowing users to keep the best model per estimator in memory through the "model_history" option.
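
A sketch of multioutput regression enabled by the constructor arguments (the toy data is made up; sklearn's MultiOutputRegressor fits one AutoML instance per target):

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from flaml import AutoML

X = np.random.rand(200, 5)
Y = np.random.rand(200, 3)  # three regression targets

# Passing AutoML settings via the constructor lets it act as an sklearn estimator
model = MultiOutputRegressor(AutoML(task="regression", time_budget=60))
model.fit(X, Y)
predictions = model.predict(X)
print(predictions.shape)
```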

What's Changed
* Unify regression and classification for XGBoost by sonichi in https://github.com/microsoft/FLAML/pull/276
* when max_iter=1, skip search only if retrain_final by sonichi in https://github.com/microsoft/FLAML/pull/280
* example update by sonichi in https://github.com/microsoft/FLAML/pull/281
* Merge exp into flaml by liususan091219 in https://github.com/microsoft/FLAML/pull/210
* add best_loss_per_estimator by qingyun-wu in https://github.com/microsoft/FLAML/pull/286
* model_history -> save_best_model_per_estimator by sonichi in https://github.com/microsoft/FLAML/pull/283
* datetime feature engineering by sonichi in https://github.com/microsoft/FLAML/pull/285
* add warmstart test by qingyun-wu in https://github.com/microsoft/FLAML/pull/298
* empty search space by sonichi in https://github.com/microsoft/FLAML/pull/295
* multioutput regression by sonichi in https://github.com/microsoft/FLAML/pull/292
* add max_depth to xgboost search space by sonichi in https://github.com/microsoft/FLAML/pull/282
* custom metric function clarification by sonichi in https://github.com/microsoft/FLAML/pull/300
* checkpoint naming in nonray mode, fix ray mode, delete checkpoints in nonray mode by liususan091219 in https://github.com/microsoft/FLAML/pull/293


**Full Changelog**: https://github.com/microsoft/FLAML/compare/v0.7.1...v0.8.0

0.7.1

What's Changed
* make default verbose level > 0 when using ray by sonichi in https://github.com/microsoft/FLAML/pull/272
* default to cfo for single estimator by sonichi in https://github.com/microsoft/FLAML/pull/273
* update docstr by sonichi and qingyun-wu in https://github.com/microsoft/FLAML/pull/274
* fixed a bug in 278 by sonichi in https://github.com/microsoft/FLAML/pull/274

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v0.7.0...v0.7.1
