FLAML

Latest version: v2.3.2


2.1.0

This update renames a few classes and methods.

What's Changed
* Migration headsup by qingyun-wu in https://github.com/microsoft/FLAML/pull/1204
* group chat for visualization by sonichi in https://github.com/microsoft/FLAML/pull/1213
* rename human to user_proxy by sonichi in https://github.com/microsoft/FLAML/pull/1215
* Rename Responsive -> Conversable by sonichi in https://github.com/microsoft/FLAML/pull/1202


**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.3...v2.1.0
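The renames above (ResponsiveAgent to ConversableAgent, `human` to `user_proxy`) follow a common deprecation-alias pattern: keep the old name importable but warn on use. A minimal sketch of that pattern in plain Python, not FLAML's actual implementation:

```python
import warnings


class ConversableAgent:
    """Stand-in for the renamed agent class (illustration only)."""

    def __init__(self, name):
        self.name = name


def deprecated_alias(new_cls, old_name):
    # Build a subclass that emits a DeprecationWarning when
    # instantiated under the old name, then behaves identically.
    class Alias(new_cls):
        def __init__(self, *args, **kwargs):
            warnings.warn(
                f"{old_name} is deprecated; use {new_cls.__name__} instead",
                DeprecationWarning,
                stacklevel=2,
            )
            super().__init__(*args, **kwargs)

    Alias.__name__ = old_name
    return Alias


# Old code that still says ResponsiveAgent keeps working, with a warning.
ResponsiveAgent = deprecated_alias(ConversableAgent, "ResponsiveAgent")
```

Because the alias is a real subclass, `isinstance` checks against the new class still pass for objects created through the old name.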

2.0.3

This release adds support for the model name suffix "-0613", allows admin takeover in the group chat, and includes a more complex group chat example.
Thanks to qingyun-wu, LeoLjl, JieyuZ2, skzhang1, pcdeadeasy, and LittleLittleCloud for code review.

What's Changed
* suffix in model name by sonichi in https://github.com/microsoft/FLAML/pull/1206
* fix typo by qingyun-wu in https://github.com/microsoft/FLAML/pull/1210
* admin takeover in group chat by sonichi in https://github.com/microsoft/FLAML/pull/1209


**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.2...v2.0.3
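Supporting a date suffix like "-0613" typically amounts to stripping the suffix so a variant such as "gpt-4-0613" inherits the settings registered for the base "gpt-4" model. A hedged sketch (hypothetical helper, not FLAML's code):

```python
def base_model_name(model):
    """Strip a trailing 4-digit date suffix such as "-0613" from a model name.

    Hypothetical helper: dated variants like "gpt-4-0613" fall back to the
    base entry ("gpt-4"); names without such a suffix pass through unchanged.
    """
    head, sep, tail = model.rpartition("-")
    if sep and len(tail) == 4 and tail.isdigit():
        return head
    return model
```

Note the length-and-digits check keeps non-date suffixes like "-32k" or "-turbo" intact.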

2.0.2

This release contains an improvement to the assistant agent prompt and a documentation update.
Thanks to thinkall for the contribution, and to BeibinLi, qingyun-wu, and kevin666aa for reviewing and testing.

What's Changed
* Update readme and AutoGen docs by thinkall in https://github.com/microsoft/FLAML/pull/1183
* Prompt improvement by sonichi in https://github.com/microsoft/FLAML/pull/1203


**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.1...v2.0.2

2.0.1

This release contains prompt improvements and bug fixes. In the next version, we will rename ResponsiveAgent to ConversableAgent.

Thanks to kevin666aa for the contribution, and to skzhang1, LittleLittleCloud, JieyuZ2, and gagb for reviewing.

What's Changed
* Cover function calls with no arguments by kevin666aa in https://github.com/microsoft/FLAML/pull/1185
* fix generate_reply when sender is None. by kevin666aa in https://github.com/microsoft/FLAML/pull/1186
* prompt improvement by sonichi in https://github.com/microsoft/FLAML/pull/1188
* document response fields by sonichi in https://github.com/microsoft/FLAML/pull/1199


**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.0...v2.0.1
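The "function calls with no arguments" fix above is about parsing: a model's function call may arrive with an empty or missing arguments string, which would make a naive `json.loads` fail. A minimal illustration (hypothetical helper, not the actual FLAML code):

```python
import json


def parse_function_arguments(raw):
    # A function call with no arguments may arrive as None or "".
    # Treat those as an empty argument dict instead of letting
    # json.loads raise on empty input.
    if not raw:
        return {}
    return json.loads(raw)
```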

2.0.0

Prepare for a roller coaster ride of innovation with the launch of FLAML v2.0.0! This is not just another update but the culmination of numerous enhancements, novel features, and exciting improvements we've made from v2.0.0rc1 to v2.0.0rc5, leading to the grand v2.0.0 release.
* With v2.0.0rc1, we embarked on a major refactor with the creation of an [automl] option to declutter dependencies for `autogen` and `tune`.
* In v2.0.0rc2, we supercharged FLAML with support for new OpenAI gpt-3.5-turbo and gpt-4 models in `autogen` and rolled out the extensibility of autogen agents.
* With v2.0.0rc3, we upped the ante by adding new OpenAI models' support of functions in agents and provided a handy code example in a dedicated notebook.
* v2.0.0rc4 brought a host of improvements to the `agentchat` framework, enabling many new applications.
* v2.0.0rc5 pushed the boundaries further by making auto-reply methods pluggable and supporting an asynchronous mode in agents.
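The `[automl]` option introduced in rc1 is a standard pip extra: the base package stays lightweight for `autogen` and `tune` users, and the AutoML dependencies are pulled in only on request. A typical install (the extra name is taken from the changelog entry above):

```shell
# Lightweight install, sufficient for autogen and tune:
pip install flaml

# Opt in to the AutoML dependencies:
pip install "flaml[automl]"
```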

Finally, we arrive at the grand v2.0.0 release! This version boasts numerous feature enhancements in `autogen`, such as a multi-agent chat framework (in preview), expanded OpenAI model support, enhanced integration with Spark, and much more.
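The multi-agent chat idea can be sketched in plain Python: two agents alternate replies until one signals termination. This is only an illustration of the pattern, not the `agentchat` API (class and method names here are invented for the example):

```python
class SimpleAgent:
    """Toy agent that stops talking after a fixed number of turns."""

    def __init__(self, name, max_turns=3):
        self.name = name
        self.max_turns = max_turns
        self.turns = 0

    def generate_reply(self, message):
        self.turns += 1
        if self.turns >= self.max_turns:
            return None  # None signals "end the conversation"
        return f"{self.name} replies to: {message}"


def run_chat(agent_a, agent_b, opening):
    # agent_a sends the opening message; the two then alternate.
    transcript = [opening]
    message, speaker = opening, agent_b
    while True:
        reply = speaker.generate_reply(message)
        if reply is None:
            break
        transcript.append(reply)
        message = reply
        speaker = agent_a if speaker is agent_b else agent_b
    return transcript
```

The real framework generalizes this loop with group chat, human input, and code execution, but the turn-taking core is the same shape.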

Documentation for AutoGen: https://microsoft.github.io/FLAML/docs/Use-Cases/Autogen
Examples: https://microsoft.github.io/FLAML/docs/Examples/AutoGen-AgentChat
Blogposts: https://microsoft.github.io/FLAML/blog

A huge shoutout to qingyun-wu, kevin666aa, skzhang1, ekzhu, BeibinLi, thinkall, LittleLittleCloud, JieyuZ2, gagb, EgorKraevTransferwise, markharley, int-chaos, levscaut, feiran-jia, liususan091219, royninja, and pcdeadeasy, as well as our new contributors badjouras, LeoLjl, xiaoboxia, and minghao51, who joined us during this journey. Your contributions have played a pivotal role in shaping this release.

What's Changed
* Blogpost for adaptation in HumanEval by sonichi in https://github.com/microsoft/FLAML/pull/1048
* Improve messaging in documentation by sonichi in https://github.com/microsoft/FLAML/pull/1050
* create an automl option to remove unnecessary dependency for autogen and tune by sonichi in https://github.com/microsoft/FLAML/pull/1007
* docs: 📝 Fix link to installation section in Task-Oriented-AutoML.md by badjouras in https://github.com/microsoft/FLAML/pull/1051
* doc and test update by sonichi in https://github.com/microsoft/FLAML/pull/1053
* remove redundant doc and add tutorial by qingyun-wu in https://github.com/microsoft/FLAML/pull/1004
* add agent notebook and documentation by qingyun-wu in https://github.com/microsoft/FLAML/pull/1052
* Support more azure openai api_type by thinkall in https://github.com/microsoft/FLAML/pull/1059
* suppress warning message of pandas_on_spark to_spark by thinkall in https://github.com/microsoft/FLAML/pull/1058
* Agent notebook example with human feedback; Support shell command and multiple code blocks; Improve the system message for assistant agent; Improve utility functions for config lists; reuse docker image by sonichi in https://github.com/microsoft/FLAML/pull/1056
* Fix documentation by sonichi in https://github.com/microsoft/FLAML/pull/1075
* encode timeout msg in bytes by sonichi in https://github.com/microsoft/FLAML/pull/1078
* Add pands requirement in benchmark option by qingyun-wu in https://github.com/microsoft/FLAML/pull/1070
* Fix pyspark tests in workflow by thinkall in https://github.com/microsoft/FLAML/pull/1071
* Docmentation for agents by qingyun-wu in https://github.com/microsoft/FLAML/pull/1057
* Links to papers by sonichi in https://github.com/microsoft/FLAML/pull/1084
* update openai model support by sonichi in https://github.com/microsoft/FLAML/pull/1082
* string to array by sonichi in https://github.com/microsoft/FLAML/pull/1086
* Factor out time series-related functionality into a time series Task object by EgorKraevTransferwise in https://github.com/microsoft/FLAML/pull/989
* An agent implementation of MathChat by kevin666aa in https://github.com/microsoft/FLAML/pull/1090
* temp solution for joblib 1.3.0 issue by thinkall in https://github.com/microsoft/FLAML/pull/1100
* support string alg in tune by skzhang1 in https://github.com/microsoft/FLAML/pull/1093
* update flaml version in MathChat notebook by kevin666aa in https://github.com/microsoft/FLAML/pull/1095
* doc update by sonichi in https://github.com/microsoft/FLAML/pull/1089
* Update OptunaSearch by skzhang1 in https://github.com/microsoft/FLAML/pull/1106
* Support function_call in `autogen/agent` by kevin666aa in https://github.com/microsoft/FLAML/pull/1091
* update notebook with new models by sonichi in https://github.com/microsoft/FLAML/pull/1112
* Enhance Integration with Spark by levscaut in https://github.com/microsoft/FLAML/pull/1097
* Add Funccall notebook and document by kevin666aa in https://github.com/microsoft/FLAML/pull/1110
* Update docstring for oai.completion. by LeoLjl in https://github.com/microsoft/FLAML/pull/1113
* Try to prevent the default AssistantAgent from asking users to modify the code by sonichi in https://github.com/microsoft/FLAML/pull/1114
* update colab link by sonichi in https://github.com/microsoft/FLAML/pull/1118
* fix bug in math_user_proxy_agent by kevin666aa in https://github.com/microsoft/FLAML/pull/1124
* Add log metric by thinkall in https://github.com/microsoft/FLAML/pull/1125
* Update assistant agent by sonichi in https://github.com/microsoft/FLAML/pull/1121
* suppress printing data split type by xiaoboxia in https://github.com/microsoft/FLAML/pull/1126
* change price ratio by sonichi in https://github.com/microsoft/FLAML/pull/1130
* simplify the initiation of chat by sonichi in https://github.com/microsoft/FLAML/pull/1131
* Update docs on how to interact with local LLM by LeoLjl in https://github.com/microsoft/FLAML/pull/1128
* Json config list, agent refactoring and new notebooks by sonichi in https://github.com/microsoft/FLAML/pull/1133
* unify auto_reply; bug fix in UserProxyAgent; reorg agent hierarchy by sonichi in https://github.com/microsoft/FLAML/pull/1142
* rename GenericAgent -> ResponsiveAgent by sonichi in https://github.com/microsoft/FLAML/pull/1146
* Bump semver from 5.7.1 to 5.7.2 in /website by dependabot in https://github.com/microsoft/FLAML/pull/1119
* autogen.agent -> autogen.agentchat by sonichi in https://github.com/microsoft/FLAML/pull/1148
* MathChat blog post by kevin666aa in https://github.com/microsoft/FLAML/pull/1096
* Commenting use_label_encoder - xgboost by minghao51 in https://github.com/microsoft/FLAML/pull/1122
* raise error when msg is invalid; fix docstr; improve ResponsiveAgent; update doc and packaging; capture ipython output; configurable default reply by sonichi in https://github.com/microsoft/FLAML/pull/1154
* consecutive auto reply, history, template, group chat, class-specific reply by sonichi in https://github.com/microsoft/FLAML/pull/1165
* Improve auto reply registration by sonichi in https://github.com/microsoft/FLAML/pull/1170
* Make auto reply method pluggable by sonichi in https://github.com/microsoft/FLAML/pull/1177
* support async in agents by sonichi in https://github.com/microsoft/FLAML/pull/1178
* Updated README.md with installation Link by royninja in https://github.com/microsoft/FLAML/pull/1180
* Add RetrieveChat by thinkall in https://github.com/microsoft/FLAML/pull/1158
* silent; code_execution_config; exit; version by sonichi in https://github.com/microsoft/FLAML/pull/1179

New Contributors
* badjouras made their first contribution in https://github.com/microsoft/FLAML/pull/1051
* kevin666aa made their first contribution in https://github.com/microsoft/FLAML/pull/1090
* LeoLjl made their first contribution in https://github.com/microsoft/FLAML/pull/1113
* xiaoboxia made their first contribution in https://github.com/microsoft/FLAML/pull/1126
* minghao51 made their first contribution in https://github.com/microsoft/FLAML/pull/1122

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v1.2.4...v2.0.0

2.0.0rc5

This version makes auto-reply methods pluggable and supports asynchronous mode in agents. An example of handling data streams is added.
Thanks to qingyun-wu and ekzhu for laying the foundation and reviewing!

What's Changed
* Make auto reply method pluggable by sonichi in https://github.com/microsoft/FLAML/pull/1177
* support async in agents by sonichi in https://github.com/microsoft/FLAML/pull/1178

**Full Changelog**: https://github.com/microsoft/FLAML/compare/v2.0.0rc4...v2.0.0rc5
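"Pluggable auto-reply" means reply generators are registered on the agent and tried in order until one produces a reply, and async support means the same hooks can be coroutines. A rough sketch of that registration pattern (illustrative only; the class and method names are not FLAML's actual signatures):

```python
import asyncio


class PluggableAgent:
    def __init__(self):
        self._reply_funcs = []  # tried in order; newest registration first

    def register_reply(self, func):
        # Newest registrations take precedence, so insert at the front.
        self._reply_funcs.insert(0, func)

    async def a_generate_reply(self, message):
        for func in self._reply_funcs:
            result = func(message)
            if asyncio.iscoroutine(result):  # sync and async hooks both work
                result = await result
            if result is not None:  # None means "this hook declines"
                return result
        return "no handler matched"
```

A hook returns `None` to pass the message to the next registered handler, which is what makes the reply pipeline composable.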
