* Fixing SKChatCompletionAdapter bug that disabled tool use #5830
**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.8...python-v0.4.8.1
# python-v0.4.8

## What's New

### Ollama Chat Completion Client

To use the new [Ollama Client](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.ollama.html#autogen_ext.models.ollama.OllamaChatCompletionClient):

```bash
pip install -U "autogen-ext[ollama]"
```

```python
from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage

ollama_client = OllamaChatCompletionClient(
    model="llama3",
)

result = await ollama_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)
```
To load a client from configuration:

```python
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OllamaChatCompletionClient",
    "config": {"model": "llama3"},
}

client = ChatCompletionClient.load_component(config)
```
It also supports structured output:

```python
from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage
from pydantic import BaseModel


class StructuredOutput(BaseModel):
    first_name: str
    last_name: str


ollama_client = OllamaChatCompletionClient(
    model="llama3",
    response_format=StructuredOutput,
)
result = await ollama_client.create([UserMessage(content="Who was the first man on the moon?", source="user")])  # type: ignore
print(result)
```
* Ollama client by peterychang in https://github.com/microsoft/autogen/pull/5553
* Fix ollama docstring by peterychang in https://github.com/microsoft/autogen/pull/5600
* Ollama client docs by peterychang in https://github.com/microsoft/autogen/pull/5605
### New Required `name` Field in `FunctionExecutionResult`

The `name` field is now required in [`FunctionExecutionResult`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.FunctionExecutionResult):

```python
exec_result = FunctionExecutionResult(call_id="...", content="...", name="...", is_error=False)
```
* fix: Update SKChatCompletionAdapter message conversion by lspinheiro in https://github.com/microsoft/autogen/pull/5749
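The practical effect of the change can be sketched with a stand-in dataclass. This is illustrative only: the real `FunctionExecutionResult` is a pydantic model in `autogen_core.models`, so omitting `name` raises a pydantic `ValidationError` rather than a `TypeError`.

```python
from dataclasses import dataclass


# Stand-in for illustration only; the real FunctionExecutionResult is a
# pydantic model and raises ValidationError instead of TypeError.
@dataclass
class FunctionExecutionResult:
    call_id: str
    content: str
    name: str  # newly required field
    is_error: bool = False


ok = FunctionExecutionResult(call_id="1", content="42", name="add")
print(ok.name)  # add

try:
    FunctionExecutionResult(call_id="1", content="42")  # type: ignore[call-arg]
except TypeError:
    print("constructing without name now fails")
```

If you construct these results in your own code, add the `name` keyword when upgrading.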
### Using the `thought` Field in `CreateResult` and `ThoughtEvent`

[`CreateResult`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.CreateResult) now has an optional `thought` field for the extra text content generated by the model as part of a tool call. It is currently supported by `OpenAIChatCompletionClient`.

When available, the `thought` content is emitted by `AssistantAgent` as a `ThoughtEvent` message.
* feat: Add thought process handling in tool calls and expose ThoughtEvent through stream in AgentChat by ekzhu in https://github.com/microsoft/autogen/pull/5500
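A frontend can render thoughts differently from final answers. The sketch below mimics that pattern with stand-in dataclasses; the real `ThoughtEvent` and `TextMessage` types live in `autogen_agentchat.messages` and carry more fields.

```python
from dataclasses import dataclass


# Stand-ins for illustration; the real types are in autogen_agentchat.messages.
@dataclass
class ThoughtEvent:
    source: str
    content: str


@dataclass
class TextMessage:
    source: str
    content: str


def render(stream):
    """Show thoughts with a marker and final text plainly, as a UI might."""
    lines = []
    for message in stream:
        if isinstance(message, ThoughtEvent):
            lines.append(f"[thought] {message.content}")
        else:
            lines.append(message.content)
    return lines


stream = [
    ThoughtEvent("assistant", "I should call the weather tool first."),
    TextMessage("assistant", "It is sunny in Paris today."),
]
for line in render(stream):
    print(line)
```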
### New `metadata` Field in AgentChat Message Types

A `metadata` field has been added for custom message content set by applications.
* Add metadata field to basemessage by husseinmozannar in https://github.com/microsoft/autogen/pull/5372
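The idea can be sketched with a stand-in message type (the real field sits on AgentChat's base message classes): applications attach routing or tracing data that travels with the message, and downstream code branches on it.

```python
from dataclasses import dataclass, field


# Stand-in for illustration; in AgentChat the metadata field lives on the
# message types themselves.
@dataclass
class Message:
    content: str
    metadata: dict = field(default_factory=dict)


msg = Message(content="Deploy finished.", metadata={"trace_id": "abc-123", "channel": "ops"})

# Downstream code can route or log based on application-defined metadata.
destination = "ops-log" if msg.metadata.get("channel") == "ops" else "default-log"
print(destination)
```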
### Exceptions in AgentChat Agents Are Now Fatal

If an exception is raised within an AgentChat agent such as `AssistantAgent`, it now propagates to the caller instead of silently stopping the team.
* fix: Allow background exceptions to be fatal by jackgerrits in https://github.com/microsoft/autogen/pull/5716
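The behavioral change can be sketched with plain asyncio (hypothetical stand-in code, not the AgentChat internals): an exception raised in a background task surfaces to the caller when the task is awaited, rather than being swallowed.

```python
import asyncio


async def faulty_agent_step() -> None:
    # Stand-in for an agent raising during a run.
    raise RuntimeError("tool call failed")


async def run_team() -> str:
    task = asyncio.create_task(faulty_agent_step())
    try:
        # Awaiting the task re-raises the exception in the caller,
        # rather than silently ending the run.
        await task
    except RuntimeError as exc:
        return f"fatal: {exc}"
    return "finished"


result = asyncio.run(run_team())
print(result)  # fatal: tool call failed
```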
### New Termination Conditions

New termination conditions give you better control over agents.

See how to use `TextMessageTerminationCondition` to control a single-agent team running in a loop: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/teams.html#single-agent-team

`FunctionCallTermination` is also discussed as an example of a custom termination condition: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/termination.html#custom-termination-condition
* TextMessageTerminationCondition for agentchat by EItanya in https://github.com/microsoft/autogen/pull/5742
* FunctionCallTermination condition by ekzhu in https://github.com/microsoft/autogen/pull/5808
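The logic behind `TextMessageTerminationCondition` can be sketched in plain Python (illustrative only; the real condition classes are async and live in `autogen_agentchat.conditions`): terminate once a given source produces a plain text message, while intermediate tool events do not count.

```python
from dataclasses import dataclass


# Stand-ins for illustration; real message types are in autogen_agentchat.messages.
@dataclass
class TextMessage:
    source: str
    content: str


@dataclass
class ToolCallEvent:
    source: str
    name: str


def text_message_termination(messages, source: str) -> bool:
    """Stop once `source` emits a TextMessage (tool events do not count)."""
    return any(isinstance(m, TextMessage) and m.source == source for m in messages)


history = [ToolCallEvent("worker", "search")]
print(text_message_termination(history, "worker"))  # False

history.append(TextMessage("worker", "Here is the final answer."))
print(text_message_termination(history, "worker"))  # True
```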
### Docs Update

The ChainLit sample now includes a `UserProxyAgent` in a team and shows how to use it to get user input from the UI. See: https://github.com/microsoft/autogen/tree/main/python/samples/agentchat_chainlit
* doc & sample: Update documentation for human-in-the-loop and UserProxyAgent; Add UserProxyAgent to ChainLit sample; by ekzhu in https://github.com/microsoft/autogen/pull/5656
* docs: Add logging instructions for AgentChat and enhance core logging guide by ekzhu in https://github.com/microsoft/autogen/pull/5655
* doc: Enrich AssistantAgent API documentation with usage examples. by ekzhu in https://github.com/microsoft/autogen/pull/5653
* doc: Update SelectorGroupChat doc on how to use O3-mini model. by ekzhu in https://github.com/microsoft/autogen/pull/5657
* update human in the loop docs for agentchat by victordibia in https://github.com/microsoft/autogen/pull/5720
* doc: update guide for termination condition and tool usage by ekzhu in https://github.com/microsoft/autogen/pull/5807
* Add examples for custom model context in AssistantAgent and ChatCompletionContext by ekzhu in https://github.com/microsoft/autogen/pull/5810
## Bug Fixes
* Initialize BaseGroupChat before reset by gagb in https://github.com/microsoft/autogen/pull/5608
* fix: Remove R1 model family from is_openai function by ekzhu in https://github.com/microsoft/autogen/pull/5652
* fix: Crash in argument parsing when using Openrouter by philippHorn in https://github.com/microsoft/autogen/pull/5667
* Fix: Add support for custom headers in HTTP tool requests by linznin in https://github.com/microsoft/autogen/pull/5660
* fix: Structured output with tool calls for OpenAIChatCompletionClient by ekzhu in https://github.com/microsoft/autogen/pull/5671
* fix: Allow background exceptions to be fatal by jackgerrits in https://github.com/microsoft/autogen/pull/5716
* Fix: Auto-Convert Pydantic and Dataclass Arguments in AutoGen Tool Calls by mjunaidca in https://github.com/microsoft/autogen/pull/5737
## Other Python Related Changes
* Update website version by ekzhu in https://github.com/microsoft/autogen/pull/5561
* doc: fix typo (recpients -> recipients) by radamson in https://github.com/microsoft/autogen/pull/5570
* feat: enhance issue templates with detailed guidance by ekzhu in https://github.com/microsoft/autogen/pull/5594
* Improve the model mismatch warning msg by thinkall in https://github.com/microsoft/autogen/pull/5586
* Fixing grammar issues by OndeVai in https://github.com/microsoft/autogen/pull/5537
* Fix typo in doc by weijen in https://github.com/microsoft/autogen/pull/5628
* Make ChatCompletionCache support component config by victordibia in https://github.com/microsoft/autogen/pull/5658
* DOCS: Minor updates to handoffs.ipynb by xtophs in https://github.com/microsoft/autogen/pull/5665
* DOCS: Fixed small errors in the text and made code format more consistent by xtophs in https://github.com/microsoft/autogen/pull/5664
* Replace the undefined tools variable with tool_schema parameter in ToolUseAgent class by shuklaham in https://github.com/microsoft/autogen/pull/5684
* Improve readme inconsistency by gagb in https://github.com/microsoft/autogen/pull/5691
* update versions to 0.4.8 by ekzhu in https://github.com/microsoft/autogen/pull/5689
* Update issue templates by jackgerrits in https://github.com/microsoft/autogen/pull/5686
* Change base image to one with arm64 support by jackgerrits in https://github.com/microsoft/autogen/pull/5681
* REF: replaced variable name in TextMentionTermination by pengjunfeng11 in https://github.com/microsoft/autogen/pull/5698
* Refactor AssistantAgent on_message_stream by lspinheiro in https://github.com/microsoft/autogen/pull/5642
* Fix accessibility issue 14 for visual accessibility by peterychang in https://github.com/microsoft/autogen/pull/5709
* Specify specific UV version should be used by jackgerrits in https://github.com/microsoft/autogen/pull/5711
* Update README.md for improved clarity and formatting by gagb in https://github.com/microsoft/autogen/pull/5714
* add anthropic native support by victordibia in https://github.com/microsoft/autogen/pull/5695
* 5663 ollama client host by rylativity in https://github.com/microsoft/autogen/pull/5674
* Fix visual accessibility issues 6 and 20 by peterychang in https://github.com/microsoft/autogen/pull/5725
* Add Serialization Instruction for MemoryContent by victordibia in https://github.com/microsoft/autogen/pull/5727
* Fix typo by stuartleeks in https://github.com/microsoft/autogen/pull/5754
* Add support for default model client, in AGS updates to settings UI by victordibia in https://github.com/microsoft/autogen/pull/5763
* fix incorrect field name from config to component by peterj in https://github.com/microsoft/autogen/pull/5761
* Make FileSurfer and CodeExecAgent Declarative by victordibia in https://github.com/microsoft/autogen/pull/5765
* docs: add note about markdown code block requirement in CodeExecutorA… by jay-thakur in https://github.com/microsoft/autogen/pull/5785
* add options to ollama client by peterychang in https://github.com/microsoft/autogen/pull/5805
* add stream_options to openai model by peterj in https://github.com/microsoft/autogen/pull/5788
* add api docstring to with_requirements by victordibia in https://github.com/microsoft/autogen/pull/5746
* Update with correct message types by laurentran in https://github.com/microsoft/autogen/pull/5789
* Update installation.md by LuSrackhall in https://github.com/microsoft/autogen/pull/5784
* Update magentic-one.md by Paulhb7 in https://github.com/microsoft/autogen/pull/5779
* Add ChromaDBVectorMemory in Extensions by victordibia in https://github.com/microsoft/autogen/pull/5308
## New Contributors
* radamson made their first contribution in https://github.com/microsoft/autogen/pull/5570
* OndeVai made their first contribution in https://github.com/microsoft/autogen/pull/5537
* philippHorn made their first contribution in https://github.com/microsoft/autogen/pull/5667
* shuklaham made their first contribution in https://github.com/microsoft/autogen/pull/5684
* pengjunfeng11 made their first contribution in https://github.com/microsoft/autogen/pull/5698
* cedricmendelin made their first contribution in https://github.com/microsoft/autogen/pull/5422
* rylativity made their first contribution in https://github.com/microsoft/autogen/pull/5674
* stuartleeks made their first contribution in https://github.com/microsoft/autogen/pull/5754
* peterj made their first contribution in https://github.com/microsoft/autogen/pull/5761
* jay-thakur made their first contribution in https://github.com/microsoft/autogen/pull/5785
* YASAI03 made their first contribution in https://github.com/microsoft/autogen/pull/5794
* laurentran made their first contribution in https://github.com/microsoft/autogen/pull/5789
* mjunaidca made their first contribution in https://github.com/microsoft/autogen/pull/5737
* LuSrackhall made their first contribution in https://github.com/microsoft/autogen/pull/5784
* Paulhb7 made their first contribution in https://github.com/microsoft/autogen/pull/5779
**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.7...python-v0.4.8
# python-v0.4.7

## Overview

This release contains various bug fixes and feature improvements for the Python API.

Related news: our .NET API website is up and running at https://microsoft.github.io/autogen/dotnet/dev/, and our .NET Core API now has dev releases. Check it out!

## Important

Starting from v0.4.7, `ModelInfo`'s required fields are enforced, so please include all required fields in `model_info` when creating model clients. For example:
```python
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="llama3.2:latest",
    base_url="http://localhost:11434/v1",
    api_key="placeholder",
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": False,
        "family": "unknown",
    },
)

response = await model_client.create([UserMessage(content="What is the capital of France?", source="user")])
print(response)
```
See [ModelInfo](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.ModelInfo) for more details.
## New Features
* DockerCommandLineCodeExecutor support for additional volume mounts, exposed host ports by andrejpk in https://github.com/microsoft/autogen/pull/5383
* Remove and get subscription APIs for Python GrpcWorkerAgentRuntime by jackgerrits in https://github.com/microsoft/autogen/pull/5365
* Add `strict` mode support to `BaseTool`, `ToolSchema` and `FunctionTool` to allow tool calls to be used together with structured output mode by ekzhu in https://github.com/microsoft/autogen/pull/5507
* Make CodeExecutor components serializable by victordibia in https://github.com/microsoft/autogen/pull/5527
## Bug Fixes
* fix: Address tool call execution scenario when model produces empty tool call ids by ekzhu in https://github.com/microsoft/autogen/pull/5509
* doc & fix: Enhance AgentInstantiationContext with detailed documentation and examples for agent instantiation; Fix a bug that caused value error when the expected class is not provided in register_factory by ekzhu in https://github.com/microsoft/autogen/pull/5555
* fix: Add model info validation and improve error messaging by ekzhu in https://github.com/microsoft/autogen/pull/5556
* fix: Add warning and doc for Windows event loop policy to avoid subprocess issues in web surfer and local executor by ekzhu in https://github.com/microsoft/autogen/pull/5557
## Doc Updates
* doc: Update API doc for MCP tool to include installation instructions by ekzhu in https://github.com/microsoft/autogen/pull/5482
* doc: Update AgentChat quickstart guide to enhance clarity and installation instructions by ekzhu in https://github.com/microsoft/autogen/pull/5499
* doc: API doc example for langchain database tool kit by ekzhu in https://github.com/microsoft/autogen/pull/5498
* Update Model Client Docs to Mention API Key from Environment Variables by victordibia in https://github.com/microsoft/autogen/pull/5515
* doc: improve tool guide in Core API doc by ekzhu in https://github.com/microsoft/autogen/pull/5546
## Other Python Related Changes
* Update website version v0.4.6 by ekzhu in https://github.com/microsoft/autogen/pull/5481
* Reduce number of doc jobs for old releases by jackgerrits in https://github.com/microsoft/autogen/pull/5375
* Fix class name style in document by weijen in https://github.com/microsoft/autogen/pull/5516
* Update custom-agents.ipynb by yosuaw in https://github.com/microsoft/autogen/pull/5531
* fix: update 0.2 deployment workflow to use tag input instead of branch by ekzhu in https://github.com/microsoft/autogen/pull/5536
* fix: update help text for model configuration argument by gagb in https://github.com/microsoft/autogen/pull/5533
* Update python version to v0.4.7 by ekzhu in https://github.com/microsoft/autogen/pull/5558
## New Contributors
* andrejpk made their first contribution in https://github.com/microsoft/autogen/pull/5383
* yosuaw made their first contribution in https://github.com/microsoft/autogen/pull/5531
**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.6...python-v0.4.7
# python-v0.4.6

## Features and Improvements

### MCP Tool

This release adds a new built-in tool by richard-gyiko for using [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) servers. MCP is an open protocol that lets agents tap into an ecosystem of tools, from browsing the file system to managing Git repos.

Here is an example of using the `mcp-server-fetch` tool to fetch web content as Markdown.
```bash
pip install mcp-server-fetch "autogen-ext[mcp]"
```

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import StdioServerParams, mcp_server_tools


async def main() -> None:
    # Get the fetch tool from mcp-server-fetch.
    fetch_mcp_server = StdioServerParams(command="uvx", args=["mcp-server-fetch"])
    tools = await mcp_server_tools(fetch_mcp_server)

    # Create an agent that can use the fetch tool.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="fetcher", model_client=model_client, tools=tools, reflect_on_tool_use=True)  # type: ignore

    # Let the agent fetch the content of a URL and summarize it.
    result = await agent.run(task="Summarize the content of https://en.wikipedia.org/wiki/Seattle")
    print(result.messages[-1].content)


asyncio.run(main())
```
* Add MCP adapters to autogen-ext by richard-gyiko in https://github.com/microsoft/autogen/pull/5251
### HTTP Tool

This release introduces a new built-in tool by EItanya for querying HTTP-based API endpoints, letting agents call remotely hosted tools over HTTP.

Here is an example of using the `httpbin.org` API for base64 decoding.
```bash
pip install "autogen-ext[http-tool]"
```

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.http import HttpTool

# Define a JSON schema for a base64 decode tool
base64_schema = {
    "type": "object",
    "properties": {
        "value": {"type": "string", "description": "The base64 value to decode"},
    },
    "required": ["value"],
}

# Create an HTTP tool for the httpbin API
base64_tool = HttpTool(
    name="base64_decode",
    description="base64 decode a value",
    scheme="https",
    host="httpbin.org",
    port=443,
    path="/base64/{value}",
    method="GET",
    json_schema=base64_schema,
)


async def main():
    # Create an assistant with the base64 tool
    model = OpenAIChatCompletionClient(model="gpt-4")
    assistant = AssistantAgent("base64_assistant", model_client=model, tools=[base64_tool])

    # The assistant can now use the base64 tool to decode the string
    response = await assistant.on_messages(
        [TextMessage(content="Can you base64 decode the value 'YWJjZGU=', please?", source="user")],
        CancellationToken(),
    )
    print(response.chat_message.content)


asyncio.run(main())
```
* Adding declarative HTTP tools to autogen ext by EItanya in https://github.com/microsoft/autogen/pull/5181
### MagenticOne Improvements

We introduced several improvements to MagenticOne (M1) and its agents: M1 now works with text-only models that can't read screenshots, and prompt changes make it work better with smaller models.

Did you know you can now configure the `m1` CLI tool with a YAML configuration file?
* WebSurfer: print viewport text by afourney in https://github.com/microsoft/autogen/pull/5329
* Allow m1 cli to read a configuration from a yaml file. by afourney in https://github.com/microsoft/autogen/pull/5341
* Add text-only model support to M1 by afourney in https://github.com/microsoft/autogen/pull/5344
* Ensure descriptions appear each on one line. Fix web_surfer's desc by afourney in https://github.com/microsoft/autogen/pull/5390
* Prompting changes to better support smaller models. by afourney in https://github.com/microsoft/autogen/pull/5386
* doc: improve m1 docs, remove duplicates by ekzhu in https://github.com/microsoft/autogen/pull/5460
* M1 docker by afourney in https://github.com/microsoft/autogen/pull/5437
### SelectorGroupChat Improvements

We made several improvements so that `SelectorGroupChat` works well with smaller models such as Llama 13B, and with hosted models that do not support the `name` field in Chat Completion messages.

Did you know you can use models served through Ollama directly via the `OpenAIChatCompletionClient`? See: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/models.html#ollama
* Get SelectorGroupChat working for Llama models. by afourney in https://github.com/microsoft/autogen/pull/5409
* Mitigates 5401 by optionally prepending names to messages. by afourney in https://github.com/microsoft/autogen/pull/5448
* fix: improve speaker selection in SelectorGroupChat for weaker models by ekzhu in https://github.com/microsoft/autogen/pull/5454
### Gemini Model Client

We enhanced our support for Gemini models. You can now use Gemini models without passing in `model_info` and `base_url`.
```python
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gemini-1.5-flash-8b",
    api_key="GEMINI_API_KEY",
)

response = await model_client.create([UserMessage(content="What is the capital of France?", source="user")])
print(response)
```
* feat: add gemini model families, enhance group chat selection for Gemini model and add tests by ekzhu in https://github.com/microsoft/autogen/pull/5334
* feat: enhance Gemini model support in OpenAI client and tests by ekzhu in https://github.com/microsoft/autogen/pull/5461
### AGBench Update
* Significant updates to agbench. by afourney in https://github.com/microsoft/autogen/pull/5313
### New Sample

Interested in integrating with FastAPI? We have a new sample: https://github.com/microsoft/autogen/blob/main/python/samples/agentchat_fastapi
* Add sample chat application with FastAPI by ekzhu in https://github.com/microsoft/autogen/pull/5433
* docs: enhance human-in-the-loop tutorial with FastAPI websocket example by ekzhu in https://github.com/microsoft/autogen/pull/5455
## Bug Fixes
* Fix reading string args from m1 cli by afourney in https://github.com/microsoft/autogen/pull/5343
* Fix summarize_page in a text-only context, and for unknown models. by afourney in https://github.com/microsoft/autogen/pull/5388
* fix: warn on empty chunks, don't error out by MohMaz in https://github.com/microsoft/autogen/pull/5332
* fix: add state management for oai assistant by lspinheiro in https://github.com/microsoft/autogen/pull/5352
* fix: streaming token mode cannot work in function calls and will infi… by so2liu in https://github.com/microsoft/autogen/pull/5396
* fix: do not count agent event in MaxMessageTermination condition by ekzhu in https://github.com/microsoft/autogen/pull/5436
* fix: remove sk tool adapter plugin name by lspinheiro in https://github.com/microsoft/autogen/pull/5444
* fix & doc: update selector prompt documentation and remove validation checks by ekzhu in https://github.com/microsoft/autogen/pull/5456
* fix: update SK adapter stream tool call processing. by lspinheiro in https://github.com/microsoft/autogen/pull/5449
* fix: Update SK kernel from tool to use method. by lspinheiro in https://github.com/microsoft/autogen/pull/5469
## Other Python Changes
* Update Python website to v0.4.5 by ekzhu in https://github.com/microsoft/autogen/pull/5316
* Adding o3 family: o3-mini by razvanvalca in https://github.com/microsoft/autogen/pull/5325
* Ensure ModelInfo field is serialized for OpenAIChatCompletionClient by victordibia in https://github.com/microsoft/autogen/pull/5315
* docs(core_distributed-group-chat): fix the typos in the docs in the README.md by jsburckhardt in https://github.com/microsoft/autogen/pull/5347
* Assistant agent drop images when not provided with a vision-capable model. by afourney in https://github.com/microsoft/autogen/pull/5351
* docs(python): add instructions for syncing dependencies and checking samples by ekzhu in https://github.com/microsoft/autogen/pull/5362
* Fix typo by weijen in https://github.com/microsoft/autogen/pull/5361
* docs: add blog link to README for updates and resources by gagb in https://github.com/microsoft/autogen/pull/5368
* Memory component base by EItanya in https://github.com/microsoft/autogen/pull/5380
* Fixed example code in doc:Custom Agents by weijen in https://github.com/microsoft/autogen/pull/5381
* Various web surfer fixes. by afourney in https://github.com/microsoft/autogen/pull/5393
* Refactor grpc channel connection in servicer by jackgerrits in https://github.com/microsoft/autogen/pull/5402
* Updates to proto for state apis by jackgerrits in https://github.com/microsoft/autogen/pull/5407
* feat: add integration workflow for testing multiple packages by ekzhu in https://github.com/microsoft/autogen/pull/5412
* Flush console output after every message. by afourney in https://github.com/microsoft/autogen/pull/5415
* Use a root json element instead of dict by jackgerrits in https://github.com/microsoft/autogen/pull/5430
* Split out GRPC tests by jackgerrits in https://github.com/microsoft/autogen/pull/5431
* feat: enhance AzureAIChatCompletionClient validation and add unit tests by ekzhu in https://github.com/microsoft/autogen/pull/5417
* Fix typo in Swarm doc by weijen in https://github.com/microsoft/autogen/pull/5435
* Update teams.ipynb : In the sample code the termination condition is set to the text "APPROVE" but the documentation mentions "TERMINATE" by abhijeethaval in https://github.com/microsoft/autogen/pull/5426
* Added the Claude family of models to ModelFamily by rohanthacker in https://github.com/microsoft/autogen/pull/5443
* feat: add indictor for tool failure to FunctionExecutionResult by wistuba in https://github.com/microsoft/autogen/pull/5428
* Update version to 0.4.6 by ekzhu in https://github.com/microsoft/autogen/pull/5477
* doc: improve agent tutorial to include multi-modal input. by ekzhu in https://github.com/microsoft/autogen/pull/5471
* doc: enhance extensions user guide with component examples by ekzhu in https://github.com/microsoft/autogen/pull/5480
* Implement control channel in python host servicer by jackgerrits in https://github.com/microsoft/autogen/pull/5427
* Improve custom agentchat agent docs with model clients (gemini example) and serialization by victordibia in https://github.com/microsoft/autogen/pull/5468
## New Contributors
* razvanvalca made their first contribution in https://github.com/microsoft/autogen/pull/5325
* jsburckhardt made their first contribution in https://github.com/microsoft/autogen/pull/5347
* weijen made their first contribution in https://github.com/microsoft/autogen/pull/5361
* EItanya made their first contribution in https://github.com/microsoft/autogen/pull/5380
* so2liu made their first contribution in https://github.com/microsoft/autogen/pull/5396
* abhijeethaval made their first contribution in https://github.com/microsoft/autogen/pull/5426
* wistuba made their first contribution in https://github.com/microsoft/autogen/pull/5428
**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.5...python-v0.4.6
# autogenstudio-v0.4.1

## What's New

### AutoGen Studio Declarative Configuration

- In #5172, you can now build your agents in Python and export them to a JSON format that works in AutoGen Studio.

AutoGen Studio now uses the same [declarative configuration](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/framework/component-config.html) interface as the rest of the AutoGen library. This means you can create your agent teams in Python and then `dump_component()` them into a JSON spec that can be used directly in AutoGen Studio. This eliminates compatibility (or feature-inconsistency) errors between AGS and AgentChat Python, as the exact same specs can be used across both.
> See a video tutorial on AutoGen Studio v0.4 (02/25) - [https://youtu.be/oum6EI7wohM](https://youtu.be/oum6EI7wohM)
Here's an example of an agent team and how it is converted to a JSON file:
```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient

agent = AssistantAgent(
    name="weather_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    ),
)

agent_team = RoundRobinGroupChat([agent], termination_condition=TextMentionTermination("TERMINATE"))
config = agent_team.dump_component()
print(config.model_dump_json())
```
```json
{
  "provider": "autogen_agentchat.teams.RoundRobinGroupChat",
  "component_type": "team",
  "version": 1,
  "component_version": 1,
  "description": "A team that runs a group chat with participants taking turns in a round-robin fashion\n to publish a message to all.",
  "label": "RoundRobinGroupChat",
  "config": {
    "participants": [
      {
        "provider": "autogen_agentchat.agents.AssistantAgent",
        "component_type": "agent",
        "version": 1,
        "component_version": 1,
        "description": "An agent that provides assistance with tool use.",
        "label": "AssistantAgent",
        "config": {
          "name": "weather_agent",
          "model_client": {
            "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
            "component_type": "model",
            "version": 1,
            "component_version": 1,
            "description": "Chat completion client for OpenAI hosted models.",
            "label": "OpenAIChatCompletionClient",
            "config": { "model": "gpt-4o-mini" }
          },
          "tools": [],
          "handoffs": [],
          "model_context": {
            "provider": "autogen_core.model_context.UnboundedChatCompletionContext",
            "component_type": "chat_completion_context",
            "version": 1,
            "component_version": 1,
            "description": "An unbounded chat completion context that keeps a view of the all the messages.",
            "label": "UnboundedChatCompletionContext",
            "config": {}
          },
          "description": "An agent that provides assistance with ability to use tools.",
          "system_message": "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
          "model_client_stream": false,
          "reflect_on_tool_use": false,
          "tool_call_summary_format": "{result}"
        }
      }
    ],
    "termination_condition": {
      "provider": "autogen_agentchat.conditions.TextMentionTermination",
      "component_type": "termination",
      "version": 1,
      "component_version": 1,
      "description": "Terminate the conversation if a specific text is mentioned.",
      "label": "TextMentionTermination",
      "config": { "text": "TERMINATE" }
    }
  }
}
```
> Note: If you are building custom agents and want to use them in AGS, you will need to inherit from the AgentChat `BaseChatAgent` and `Component` classes.

> Note: This is a breaking change in AutoGen Studio. You will need to update your AGS specs for any teams created with autogenstudio < 0.4.1.
### Ability to Test Teams in Team Builder

- In #5392, you can now test your teams as you build them, with no need to switch between team builder and playground sessions.

You can now test teams directly in the team builder UI as you edit them, either via drag-and-drop or by editing the JSON spec.
<img width="1738" alt="Image" src="https://github.com/user-attachments/assets/4b895df2-3bad-474e-bec6-4fbcbf1c4346" />
<img width="1761" alt="Image" src="https://github.com/user-attachments/assets/65f52eb9-e926-4168-88fb-d2496c159474" />
### New Default Agents in Gallery (Web Agent Team, Deep Research Team)

- In #5416, the default gallery gains implementations of a Web Agent Team and a Deep Research Team.

The default gallery now has two additional default agent teams that you can build on and test:

- Web Agent Team: a team of three agents - a Web Surfer agent that can browse the web, a Verification Assistant that verifies and summarizes information, and a User Proxy that provides human feedback when needed.
- Deep Research Team: a team of three agents - a Research Assistant that performs web searches and analyzes information, a Verifier that ensures research quality and completeness, and a Summary Agent that provides a detailed markdown summary of the research as a report to the user.
### Other Improvements

Older features that are currently possible in `v0.4.1`:
- Real-time agent updates streaming to the frontend
- Run control: You can now stop agents mid-execution if they're heading in the wrong direction, adjust the team, and continue
- Interactive feedback: Add a UserProxyAgent to get human input through the UI during team runs
- Message flow visualization: See how agents communicate with each other
- Ability to import specifications from external galleries
- Ability to wrap agent teams into an API using the AutoGen Studio CLI
To update to the latest version:
```bash
pip install -U autogenstudio
```
The overall roadmap for AutoGen Studio is here: 4006.
Contributions welcome!
python-v0.4.5
What's New
Streaming for AgentChat agents and teams
* Introduce ModelClientStreamingChunkEvent for streaming model output and update handling in agents and console by ekzhu in https://github.com/microsoft/autogen/pull/5208
To enable streaming from an AssistantAgent, set `model_client_stream=True` when creating it. The token stream will be available when you run the agent directly, or as part of a team when you call `run_stream`.
If you want to see tokens streaming in your console application, you can use `Console` directly.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"), model_client_stream=True)
    await Console(agent.run_stream(task="Write a short story with a surprising ending."))


asyncio.run(main())
```
If you are handling the messages yourself and streaming to the frontend, you can handle
`autogen_agentchat.messages.ModelClientStreamingChunkEvent` message.
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"), model_client_stream=True)
    async for message in agent.run_stream(task="Write 3 line poem."):
        print(message)


asyncio.run(main())
```
```
source='user' models_usage=None content='Write 3 line poem.' type='TextMessage'
source='assistant' models_usage=None content='Silent' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' whispers' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' glide' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=',' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' \n' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='Moon' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='lit' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' dreams' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' dance' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' through' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' the' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' night' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=',' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' \n' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='Stars' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' watch' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' from' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' above' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='.' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0) content='Silent whispers glide, \nMoonlit dreams dance through the night, \nStars watch from above.' type='TextMessage'
TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Write 3 line poem.', type='TextMessage'), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content='Silent whispers glide, \nMoonlit dreams dance through the night, \nStars watch from above.', type='TextMessage')], stop_reason=None)
```
Read more here: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#streaming-tokens
Also, see the sample showing how to stream a team's messages to ChainLit frontend: https://github.com/microsoft/autogen/tree/python-v0.4.5/python/samples/agentchat_chainlit
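When streaming to your own frontend, a client typically appends chunk contents to a running buffer until the final `TextMessage` arrives. A minimal sketch with a stand-in `Chunk` dataclass (hypothetical; the real `ModelClientStreamingChunkEvent` carries more fields):

```python
from dataclasses import dataclass


@dataclass
class Chunk:
    """Stand-in for a streamed message; only the fields used below."""

    source: str
    content: str
    type: str = "ModelClientStreamingChunkEvent"


def accumulate(messages) -> str:
    """Join the contents of streaming chunk events into the full text."""
    return "".join(m.content for m in messages if m.type == "ModelClientStreamingChunkEvent")


chunks = [
    Chunk("assistant", "Silent"),
    Chunk("assistant", " whispers"),
    Chunk("assistant", " glide"),
]
print(accumulate(chunks))  # Silent whispers glide
```

The same filter-by-`type` pattern works directly on the objects yielded by `run_stream`.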
R1-style reasoning output
* Support R1 reasoning text in model create result; enhance API docs by ekzhu in https://github.com/microsoft/autogen/pull/5262
```python
import asyncio

from autogen_core.models import ModelFamily, UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="deepseek-r1:1.5b",
        api_key="placeholder",
        base_url="http://localhost:11434/v1",
        model_info={
            "function_calling": False,
            "json_output": False,
            "vision": False,
            "family": ModelFamily.R1,
        },
    )

    # Test basic completion with the Ollama deepseek-r1:1.5b model.
    create_result = await model_client.create(
        messages=[
            UserMessage(
                content="Taking two balls from a bag of 10 green balls and 20 red balls, "
                "what is the probability of getting a green and a red ball?",
                source="user",
            ),
        ]
    )

    # The CreateResult.thought field contains the thinking content.
    print(create_result.thought)
    print(create_result.content)


asyncio.run(main())
```
Streaming is also supported with R1-style reasoning output.
See the sample showing R1 playing chess: https://github.com/microsoft/autogen/tree/python-v0.4.5/python/samples/agentchat_chess_game
FunctionTool for partial functions
* FunctionTool partial support by nour-bouzid in https://github.com/microsoft/autogen/pull/5183
Now you can define function tools from partial functions, where some parameters have been set beforehand.
```python
import json
from functools import partial

from autogen_core.tools import FunctionTool


def get_weather(country: str, city: str) -> str:
    return f"The temperature in {city}, {country} is 75°"


partial_function = partial(get_weather, "Germany")
tool = FunctionTool(partial_function, description="Partial function tool.")
print(json.dumps(tool.schema, indent=2))
```
```json
{
  "name": "get_weather",
  "description": "Partial function tool.",
  "parameters": {
    "type": "object",
    "properties": {
      "city": {
        "description": "city",
        "title": "City",
        "type": "string"
      }
    },
    "required": [
      "city"
    ]
  }
}
```
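The `country` parameter disappears from the schema because `functools.partial` removes bound arguments from the callable's visible signature, which the tool introspects. A minimal pure-Python illustration of that behavior (no AutoGen imports needed):

```python
import inspect
from functools import partial


def get_weather(country: str, city: str) -> str:
    return f"The temperature in {city}, {country} is 75°"


partial_function = partial(get_weather, "Germany")

# inspect.signature resolves the partial: the bound positional
# argument "country" is gone, leaving only "city".
params = list(inspect.signature(partial_function).parameters)
print(params)  # ['city']
print(partial_function("Berlin"))  # The temperature in Berlin, Germany is 75°
```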
CodeExecutorAgent update
* Added an optional sources parameter to CodeExecutorAgent by afourney in https://github.com/microsoft/autogen/pull/5259
New Samples
* Streamlit + AgentChat sample by husseinkorly in https://github.com/microsoft/autogen/pull/5306
* ChainLit + AgentChat sample with streaming by ekzhu in https://github.com/microsoft/autogen/pull/5304
* Chess sample showing R1-Style reasoning for planning and strategizing by ekzhu in https://github.com/microsoft/autogen/pull/5285
Documentation Updates
* Add Semantic Kernel Adapter documentation and usage examples in user guides by ekzhu in https://github.com/microsoft/autogen/pull/5256
* Update human-in-the-loop tutorial with better system message to signal termination condition by ekzhu in https://github.com/microsoft/autogen/pull/5253
Moves
* Remove old autogen_magentic_one package. by afourney in https://github.com/microsoft/autogen/pull/5305
Bug Fixes
* fix: handle non-string function arguments in tool calls and add corresponding warnings by ekzhu in https://github.com/microsoft/autogen/pull/5260
* Add default_header support by nour-bouzid in https://github.com/microsoft/autogen/pull/5249
* feat: update OpenAIAssistantAgent to support AsyncAzureOpenAI client by ekzhu in https://github.com/microsoft/autogen/pull/5312
All Other Python Related Changes
* Update website for v0.4.4 by ekzhu in https://github.com/microsoft/autogen/pull/5246
* update dependencies to work with protobuf 5 by MohMaz in https://github.com/microsoft/autogen/pull/5195
* Adjusted M1 agent system prompt to remove TERMINATE by afourney in https://github.com/microsoft/autogen/pull/5263
* https://github.com/microsoft/autogen/pull/5270
* chore: update package versions to 0.4.5 and remove deprecated requirements by ekzhu in https://github.com/microsoft/autogen/pull/5280
* Update Distributed Agent Runtime Cross-platform Sample by linznin in https://github.com/microsoft/autogen/pull/5164
* fix: windows check ci failure by bassmang in https://github.com/microsoft/autogen/pull/5287
* fix: type issues in streamlit sample and add streamlit to dev dependencies by ekzhu in https://github.com/microsoft/autogen/pull/5309
* chore: add asyncio_atexit dependency to docker requirements by ekzhu in https://github.com/microsoft/autogen/pull/5307
* feat: add o3 to model info; update chess example by ekzhu in https://github.com/microsoft/autogen/pull/5311
New Contributors
* nour-bouzid made their first contribution in https://github.com/microsoft/autogen/pull/5183
* linznin made their first contribution in https://github.com/microsoft/autogen/pull/5164
* husseinkorly made their first contribution in https://github.com/microsoft/autogen/pull/5306
**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.4...python-v0.4.5