FLAML

Latest version: v2.3.3

2.0.0rc5

This version makes auto-reply methods pluggable and supports asynchronous mode in agents. An example of handling data streams is added; a minimal sketch of the new hooks follows below.
Thanks to qingyun-wu and ekzhu for laying the foundation and reviewing!
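
A hedged sketch of what the pluggable auto-reply hook and async mode enable. The `register_reply`/`a_initiate_chat` names, the reply-function signature, and the constructor arguments follow the autogen agent API of this era and are assumptions for this exact release candidate; consult the linked PRs for the authoritative interface.

```python
# Hedged sketch: register a custom auto-reply and start a chat asynchronously.
# The import path, register_reply, and a_initiate_chat are assumptions for this rc.
import asyncio
from flaml.autogen.agentchat import AssistantAgent, UserProxyAgent

def echo_reply(recipient, messages=None, sender=None, config=None):
    # A reply function returns (final, reply); final=True stops later reply handlers.
    last = messages[-1]["content"] if messages else ""
    return True, f"echo: {last}"

assistant = AssistantAgent("assistant", llm_config=False)  # no LLM needed for this demo
user = UserProxyAgent("user", human_input_mode="NEVER", code_execution_config=False)
assistant.register_reply(UserProxyAgent, echo_reply)  # pluggable auto-reply

async def main():
    await user.a_initiate_chat(assistant, message="hello")  # asynchronous chat initiation

asyncio.run(main())
```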

What's Changed
* Make auto reply method pluggable by sonichi in https://github.com/microsoft/FLAML/pull/1177
* support async in agents by sonichi in https://github.com/microsoft/FLAML/pull/1178

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.0rc4...v2.0.0rc5

2.0.0rc4

This pre-release brings many improvements to the agentchat framework and enables a number of new applications; a minimal usage sketch follows below.
Thanks to JieyuZ2, gagb, thinkall, BeibinLi, ekzhu, LittleLittleCloud, kevin666aa, qingyun-wu, LeoLjl, and others for your contributions!
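
A minimal two-agent sketch using the module layout this pre-release introduces (`autogen.agent` is renamed to `autogen.agentchat`, and chat initiation is simplified). The constructor argument names (`llm_config`, `code_execution_config`) and the config-list shape are assumptions modeled on the notebooks linked from the PRs below; the API key is a placeholder.

```python
# Hedged sketch of the renamed agentchat module and the simplified chat initiation.
from flaml.autogen.agentchat import AssistantAgent, UserProxyAgent

config_list = [{"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"}]  # placeholder key

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",                       # fully automatic; no human in the loop
    code_execution_config={"work_dir": "coding"},   # run generated code in ./coding
)

# One call starts the back-and-forth between the two agents.
user_proxy.initiate_chat(assistant, message="Plot NVDA and TSLA stock price change YTD.")
```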

What's Changed
* update colab link by sonichi in https://github.com/microsoft/FLAML/pull/1118
* fix bug in math_user_proxy_agent by kevin666aa in https://github.com/microsoft/FLAML/pull/1124
* Add log metric by thinkall in https://github.com/microsoft/FLAML/pull/1125
* Update assistant agent by sonichi in https://github.com/microsoft/FLAML/pull/1121
* suppress printing data split type by xiaoboxia in https://github.com/microsoft/FLAML/pull/1126
* change price ratio by sonichi in https://github.com/microsoft/FLAML/pull/1130
* simplify the initiation of chat by sonichi in https://github.com/microsoft/FLAML/pull/1131
* Update docs on how to interact with local LLM by LeoLjl in https://github.com/microsoft/FLAML/pull/1128
* Json config list, agent refactoring and new notebooks by sonichi in https://github.com/microsoft/FLAML/pull/1133
* unify auto_reply; bug fix in UserProxyAgent; reorg agent hierarchy by sonichi in https://github.com/microsoft/FLAML/pull/1142
* rename GenericAgent -> ResponsiveAgent by sonichi in https://github.com/microsoft/FLAML/pull/1146
* Bump semver from 5.7.1 to 5.7.2 in /website by dependabot in https://github.com/microsoft/FLAML/pull/1119
* autogen.agent -> autogen.agentchat by sonichi in https://github.com/microsoft/FLAML/pull/1148
* MathChat blog post by kevin666aa in https://github.com/microsoft/FLAML/pull/1096
* Commenting use_label_encoder - xgboost by minghao51 in https://github.com/microsoft/FLAML/pull/1122
* raise error when msg is invalid; fix docstr; improve ResponsiveAgent; update doc and packaging; capture ipython output; configurable default reply by sonichi in https://github.com/microsoft/FLAML/pull/1154
* consecutive auto reply, history, template, group chat, class-specific reply by sonichi in https://github.com/microsoft/FLAML/pull/1165
* Improve auto reply registration by sonichi in https://github.com/microsoft/FLAML/pull/1170

New Contributors
* xiaoboxia made their first contribution in https://github.com/microsoft/FLAML/pull/1126
* minghao51 made their first contribution in https://github.com/microsoft/FLAML/pull/1122

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.0rc3...v2.0.0rc4

2.0.0rc3

Highlights
Added support for function calling with the new OpenAI models in agents. Thanks to kevin666aa, sonichi, and qingyun-wu.
Please find a code example in this notebook: https://github.com/microsoft/FLAML/blob/main/notebook/autogen_agent_function_call.ipynb
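
A condensed, hedged sketch of the pattern shown in that notebook: the function schema is advertised to the assistant via the LLM config, and the executing agent maps the function name to a Python callable. The `llm_config`/`function_map` argument names and the import path follow the later agentchat API and may differ slightly in this release candidate; the API key is a placeholder.

```python
# Hedged sketch of agent function calling, modeled on the linked notebook.
from flaml.autogen.agentchat import AssistantAgent, UserProxyAgent

def get_weather(city: str) -> str:
    """Toy callable the model may request; replace with a real implementation."""
    return f"It is sunny in {city}."

llm_config = {
    "config_list": [{"model": "gpt-4-0613", "api_key": "YOUR_OPENAI_API_KEY"}],
    "functions": [  # schema advertised to the model
        {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
}

assistant = AssistantAgent("assistant", llm_config=llm_config)
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    function_map={"get_weather": get_weather},  # executes the call the model requests
)
user_proxy.initiate_chat(assistant, message="What's the weather in Paris?")
```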

What's Changed
* temp solution for joblib 1.3.0 issue by thinkall in https://github.com/microsoft/FLAML/pull/1100
* support string alg in tune by skzhang1 in https://github.com/microsoft/FLAML/pull/1093
* update flaml version in MathChat notebook by kevin666aa in https://github.com/microsoft/FLAML/pull/1095
* doc update by sonichi in https://github.com/microsoft/FLAML/pull/1089
* Update OptunaSearch by skzhang1 in https://github.com/microsoft/FLAML/pull/1106
* Support function_call in `autogen/agent` by kevin666aa in https://github.com/microsoft/FLAML/pull/1091
* update notebook with new models by sonichi in https://github.com/microsoft/FLAML/pull/1112
* Enhance Integration with Spark by levscaut in https://github.com/microsoft/FLAML/pull/1097
* Add Funccall notebook and document by kevin666aa in https://github.com/microsoft/FLAML/pull/1110
* Update docstring for oai.completion. by LeoLjl in https://github.com/microsoft/FLAML/pull/1113
* Try to prevent the default AssistantAgent from asking users to modify the code by sonichi in https://github.com/microsoft/FLAML/pull/1114

New Contributors
* LeoLjl made their first contribution in https://github.com/microsoft/FLAML/pull/1113

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.0rc2...v2.0.0rc3

2.0.0rc2

Highlights

* Support for the new OpenAI gpt-3.5-turbo and gpt-4 models in `autogen` (a hedged usage sketch follows this list). Thanks to gagb, kevin666aa, qingyun-wu, ekzhu, and BeibinLi.
* [MathChat](https://arxiv.org/abs/2306.01337) implemented with `autogen.agents`. Thanks to kevin666aa and qingyun-wu.
* Time-series-related functionality in `automl` factored out into a time series Task object. Thanks to EgorKraevTransferwise.
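
A hedged sketch of calling the newly supported chat models through FLAML's `oai` wrapper. The `config_list` fallback behavior and `extract_text` helper follow the completion API documented at the time; the key values are placeholders.

```python
# Hedged sketch: chat completion with the newly supported models via flaml.oai.
from flaml import oai

config_list = [
    {"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"},           # tried first
    {"model": "gpt-3.5-turbo", "api_key": "YOUR_OPENAI_API_KEY"},   # fallback configuration
]

response = oai.ChatCompletion.create(
    config_list=config_list,
    messages=[{"role": "user", "content": "Say hi in one word."}],
)
print(oai.ChatCompletion.extract_text(response)[0])
```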

Thanks to all the contributors and reviewers: thinkall, qingyun-wu, EgorKraevTransferwise, kevin666aa, liususan091219, skzhang1, jtongxin, pcdeadeasy, markharley, and int-chaos!

What's Changed
* Fix documentation by sonichi in https://github.com/microsoft/FLAML/pull/1075
* encode timeout msg in bytes by sonichi in https://github.com/microsoft/FLAML/pull/1078
* Add pandas requirement in benchmark option by qingyun-wu in https://github.com/microsoft/FLAML/pull/1070
* Fix pyspark tests in workflow by thinkall in https://github.com/microsoft/FLAML/pull/1071
* Documentation for agents by qingyun-wu in https://github.com/microsoft/FLAML/pull/1057
* Links to papers by sonichi in https://github.com/microsoft/FLAML/pull/1084
* update openai model support by sonichi in https://github.com/microsoft/FLAML/pull/1082
* string to array by sonichi in https://github.com/microsoft/FLAML/pull/1086
* Factor out time series-related functionality into a time series Task object by EgorKraevTransferwise in https://github.com/microsoft/FLAML/pull/989
* An agent implementation of MathChat by kevin666aa in https://github.com/microsoft/FLAML/pull/1090

New Contributors
* kevin666aa made their first contribution in https://github.com/microsoft/FLAML/pull/1090

**Full Changelog**: https://github.com/microsoft/FLAML/compare/2.0.0rc1...v2.0.0rc2

2.0.0rc1

This release includes:

- A major refactor: an `automl` installation option was created to remove unnecessary dependencies for `autogen` and `tune` (thanks to sonichi); a usage sketch follows this list.
- A new blog post on adaptation in HumanEval (thanks to sonichi).
- A new `tutorials` folder containing all the tutorials on FLAML (thanks to qingyun-wu, sonichi, and thinkall).
- Documentation improvements and link corrections.
- Documentation and a notebook example on interactive LLM agents in FLAML (thanks to qingyun-wu, sonichi, thinkall, and pcdeadeasy).
- Support for more Azure OpenAI api_type values (thanks to thinkall).
- Suppressed warning message of pandas_on_spark `to_spark` (thanks to thinkall).
- Support for shell commands and multiple code blocks (thanks to sonichi).
- An improved system message for the assistant agent (thanks to sonichi and gagb).
- Improved utility functions for config lists (thanks to sonichi).
- Reuse of the docker image in a session (thanks to sonichi and gagb).
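
A minimal sketch of the AutoML workflow that now lives behind the new extra, assuming installation with `pip install "flaml[automl]"`; the dataset and time budget are only for illustration.

```python
# Hedged sketch: AutoML after installing the `automl` extra.
# Assumes `pip install "flaml[automl]"` (scikit-learn is pulled in by the extra).
from flaml import AutoML
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

automl = AutoML()
automl.fit(X, y, task="classification", time_budget=10)  # 10-second budget for the demo
print(automl.best_estimator, automl.best_loss)
```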

A hearty welcome to our new contributor, badjouras, who made their first contribution. Thanks to code reviewers gagb, pcdeadeasy, liususan091219, thinkall, levscaut, sonichi, and qingyun-wu.


What's Changed
* Blogpost for adaptation in HumanEval by sonichi in https://github.com/microsoft/FLAML/pull/1048
* Improve messaging in documentation by sonichi in https://github.com/microsoft/FLAML/pull/1050
* create an automl option to remove unnecessary dependency for autogen and tune by sonichi in https://github.com/microsoft/FLAML/pull/1007
* docs: 📝 Fix link to installation section in Task-Oriented-AutoML.md by badjouras in https://github.com/microsoft/FLAML/pull/1051
* doc and test update by sonichi in https://github.com/microsoft/FLAML/pull/1053
* remove redundant doc and add tutorial by qingyun-wu in https://github.com/microsoft/FLAML/pull/1004
* add agent notebook and documentation by qingyun-wu in https://github.com/microsoft/FLAML/pull/1052
* Support more azure openai api_type by thinkall in https://github.com/microsoft/FLAML/pull/1059
* suppress warning message of pandas_on_spark to_spark by thinkall in https://github.com/microsoft/FLAML/pull/1058
* Agent notebook example with human feedback; Support shell command and multiple code blocks; Improve the system message for assistant agent; Improve utility functions for config lists; reuse docker image by sonichi in https://github.com/microsoft/FLAML/pull/1056

New Contributors
* badjouras made their first contribution in https://github.com/microsoft/FLAML/pull/1051

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v1.2.4...2.0.0rc1

1.2.4

This release contains:
* improved support for using a list of configurations (thanks to BeibinLi),
* a filter for selecting from the responses produced by a sequence of configurations ([doc](https://microsoft.github.io/FLAML/docs/Use-Cases/Auto-Generation/#logic-error)),
* a new experimental human-proxy agent (thanks to qingyun-wu and gagb),
* a [utility function](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/openai_utils) to create config lists,
* a new method, [clear_cache](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/completion#clear_cache), in `oai.Completion`,
* an updated default search space (thanks to Kyoshiin and LittleLittleCloud),
* preparation for FLAML v2 (thanks to qingyun-wu for writing the [blogpost](https://microsoft.github.io/FLAML/blog/2023/05/07/1M-milestone)).

Breaking change:
* `cache_path` is renamed to `cache_path_root` in [set_cache](https://microsoft.github.io/FLAML/docs/reference/autogen/oai/completion#set_cache); a hedged caching sketch follows below.
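
A hedged sketch of the caching and filtering features mentioned above. The `set_cache`/`clear_cache` calls follow the linked reference docs; the signature of the callback passed via `filter_func` is an assumption here, so check the linked filter doc before relying on it. API keys are placeholders.

```python
# Hedged sketch: response caching and a response filter with oai.Completion.
from flaml import oai

config_list = [
    {"model": "gpt-4", "api_key": "YOUR_OPENAI_API_KEY"},
    {"model": "gpt-3.5-turbo", "api_key": "YOUR_OPENAI_API_KEY"},
]

# Caching: note the renamed keyword (cache_path -> cache_path_root).
oai.Completion.set_cache(seed=41, cache_path_root=".cache")

def short_enough(context, config, response):
    # Assumed filter signature: return True to accept the response, else try the next config.
    return all(len(t) < 200 for t in oai.Completion.extract_text(response))

response = oai.Completion.create(
    config_list=config_list,
    prompt="Summarize FLAML in one sentence.",
    filter_func=short_enough,
)
print(oai.Completion.extract_text(response)[0])

oai.Completion.clear_cache(seed=41)  # drop cached responses for this seed
```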

Thanks to code reviewers skzhang1, jtongxin, pcdeadeasy, ZviBaratz, LittleLittleCloud, and Borda, and to liususan091219 and thinkall for fixing test errors.

What's Changed
* Catch AuthenticationError trying different configs by BeibinLi in https://github.com/microsoft/FLAML/pull/1023
* chat completion check by sonichi in https://github.com/microsoft/FLAML/pull/1024
* update model of text summarization in test by liususan091219 in https://github.com/microsoft/FLAML/pull/1030
* Human agent by qingyun-wu in https://github.com/microsoft/FLAML/pull/1025
* fix of website link by sonichi in https://github.com/microsoft/FLAML/pull/1042
* Blogpost by qingyun-wu in https://github.com/microsoft/FLAML/pull/1026
* Update default search space by Kyoshiin in https://github.com/microsoft/FLAML/pull/1044
* Fix PULL_REQUEST_TEMPLATE and improve test by removing unnecessary environment variable by thinkall in https://github.com/microsoft/FLAML/pull/1043
* response filter by sonichi in https://github.com/microsoft/FLAML/pull/1039

New Contributors
* BeibinLi made their first contribution in https://github.com/microsoft/FLAML/pull/1023
* Kyoshiin made their first contribution in https://github.com/microsoft/FLAML/pull/1044

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v1.2.3...v1.2.4
