Autogen


python-v0.4.8.1

* Fixing SKChatCompletionAdapter bug that disabled tool use #5830

**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.8...python-v0.4.8.1

python-v0.4.8
What's New

Ollama Chat Completion Client

To use the new [Ollama Client](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.ollama.html#autogen_ext.models.ollama.OllamaChatCompletionClient):


```bash
pip install -U "autogen-ext[ollama]"
```


```python
from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage

ollama_client = OllamaChatCompletionClient(
    model="llama3",
)

result = await ollama_client.create([UserMessage(content="What is the capital of France?", source="user")])  # type: ignore
print(result)
```


To load a client from configuration:

```python
from autogen_core.models import ChatCompletionClient

config = {
    "provider": "OllamaChatCompletionClient",
    "config": {"model": "llama3"},
}

client = ChatCompletionClient.load_component(config)
```


It also supports structured output:

```python
from autogen_ext.models.ollama import OllamaChatCompletionClient
from autogen_core.models import UserMessage
from pydantic import BaseModel


class StructuredOutput(BaseModel):
    first_name: str
    last_name: str


ollama_client = OllamaChatCompletionClient(
    model="llama3",
    response_format=StructuredOutput,
)
result = await ollama_client.create([UserMessage(content="Who was the first man on the moon?", source="user")])  # type: ignore
print(result)
```


* Ollama client by peterychang in https://github.com/microsoft/autogen/pull/5553
* Fix ollama docstring by peterychang in https://github.com/microsoft/autogen/pull/5600
* Ollama client docs by peterychang in https://github.com/microsoft/autogen/pull/5605

New Required `name` Field in `FunctionExecutionResult`

The `name` field is now required in [`FunctionExecutionResult`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.FunctionExecutionResult):

```python
from autogen_core.models import FunctionExecutionResult

exec_result = FunctionExecutionResult(call_id="...", content="...", name="...", is_error=False)
```


* fix: Update SKChatCompletionAdapter message conversion by lspinheiro in https://github.com/microsoft/autogen/pull/5749

Using `thought` Field in `CreateResult` and `ThoughtEvent`

Now [`CreateResult`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.CreateResult) uses the optional `thought` field for the extra text content generated by the model as part of a tool call. It is currently supported by `OpenAIChatCompletionClient`.

When available, the `thought` content will be emitted by `AssistantAgent` as a `ThoughtEvent` message.

* feat: Add thought process handling in tool calls and expose ThoughtEvent through stream in AgentChat by ekzhu in https://github.com/microsoft/autogen/pull/5500

New `metadata` Field in AgentChat Message Types

Added a `metadata` field for custom message content set by applications.

* Add metadata field to basemessage by husseinmozannar in https://github.com/microsoft/autogen/pull/5372

Exceptions in AgentChat Agents Are Now Fatal

Now, if an exception is raised within an AgentChat agent such as the `AssistantAgent`, it will propagate to the caller instead of silently stopping the team.

* fix: Allow background exceptions to be fatal by jackgerrits in https://github.com/microsoft/autogen/pull/5716
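Since the exception now propagates out of the run call, a minimal sketch of handling it at the call site (the helper name is illustrative) looks like:

```python
async def run_team_safely(team, task: str):
    # Exceptions raised inside an agent now propagate out of run()/run_stream()
    # instead of silently stopping the team, so they can be caught here.
    try:
        return await team.run(task=task)
    except Exception as err:
        print(f"Team run failed with agent error: {err}")
        raise
```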

New Termination Conditions

New termination conditions for better control of agents.

See how to use `TextMessageTermination` to control a single-agent team running in a loop: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/teams.html#single-agent-team.

`FunctionCallTermination` is also discussed as an example of a custom termination condition: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/termination.html#custom-termination-condition

* TextMessageTerminationCondition for agentchat by EItanya in https://github.com/microsoft/autogen/pull/5742
* FunctionCallTermination condition by ekzhu in https://github.com/microsoft/autogen/pull/5808


Docs Update

The ChainLit sample now includes a `UserProxyAgent` in a team, and shows you how to use it to get user input from the UI. See: https://github.com/microsoft/autogen/tree/main/python/samples/agentchat_chainlit

* doc & sample: Update documentation for human-in-the-loop and UserProxyAgent; Add UserProxyAgent to ChainLit sample; by ekzhu in https://github.com/microsoft/autogen/pull/5656
* docs: Add logging instructions for AgentChat and enhance core logging guide by ekzhu in https://github.com/microsoft/autogen/pull/5655
* doc: Enrich AssistantAgent API documentation with usage examples. by ekzhu in https://github.com/microsoft/autogen/pull/5653
* doc: Update SelectorGroupChat doc on how to use O3-mini model. by ekzhu in https://github.com/microsoft/autogen/pull/5657
* update human in the loop docs for agentchat by victordibia in https://github.com/microsoft/autogen/pull/5720
* doc: update guide for termination condition and tool usage by ekzhu in https://github.com/microsoft/autogen/pull/5807
* Add examples for custom model context in AssistantAgent and ChatCompletionContext by ekzhu in https://github.com/microsoft/autogen/pull/5810


Bug Fixes

* Initialize BaseGroupChat before reset by gagb in https://github.com/microsoft/autogen/pull/5608
* fix: Remove R1 model family from is_openai function by ekzhu in https://github.com/microsoft/autogen/pull/5652
* fix: Crash in argument parsing when using Openrouter by philippHorn in https://github.com/microsoft/autogen/pull/5667
* Fix: Add support for custom headers in HTTP tool requests by linznin in https://github.com/microsoft/autogen/pull/5660
* fix: Structured output with tool calls for OpenAIChatCompletionClient by ekzhu in https://github.com/microsoft/autogen/pull/5671
* fix: Allow background exceptions to be fatal by jackgerrits in https://github.com/microsoft/autogen/pull/5716
* Fix: Auto-Convert Pydantic and Dataclass Arguments in AutoGen Tool Calls by mjunaidca in https://github.com/microsoft/autogen/pull/5737

Other Python Related Changes
* Update website version by ekzhu in https://github.com/microsoft/autogen/pull/5561
* doc: fix typo (recpients -> recipients) by radamson in https://github.com/microsoft/autogen/pull/5570
* feat: enhance issue templates with detailed guidance by ekzhu in https://github.com/microsoft/autogen/pull/5594
* Improve the model mismatch warning msg by thinkall in https://github.com/microsoft/autogen/pull/5586
* Fixing grammar issues by OndeVai in https://github.com/microsoft/autogen/pull/5537
* Fix typo in doc by weijen in https://github.com/microsoft/autogen/pull/5628
* Make ChatCompletionCache support component config by victordibia in https://github.com/microsoft/autogen/pull/5658
* DOCS: Minor updates to handoffs.ipynb by xtophs in https://github.com/microsoft/autogen/pull/5665
* DOCS: Fixed small errors in the text and made code format more consistent by xtophs in https://github.com/microsoft/autogen/pull/5664
* Replace the undefined tools variable with tool_schema parameter in ToolUseAgent class by shuklaham in https://github.com/microsoft/autogen/pull/5684
* Improve readme inconsistency by gagb in https://github.com/microsoft/autogen/pull/5691
* update versions to 0.4.8 by ekzhu in https://github.com/microsoft/autogen/pull/5689
* Update issue templates by jackgerrits in https://github.com/microsoft/autogen/pull/5686
* Change base image to one with arm64 support by jackgerrits in https://github.com/microsoft/autogen/pull/5681
* REF: replaced variable name in TextMentionTermination by pengjunfeng11 in https://github.com/microsoft/autogen/pull/5698
* Refactor AssistantAgent on_message_stream by lspinheiro in https://github.com/microsoft/autogen/pull/5642
* Fix accessibility issue 14 for visual accessibility by peterychang in https://github.com/microsoft/autogen/pull/5709
* Specify specific UV version should be used by jackgerrits in https://github.com/microsoft/autogen/pull/5711
* Update README.md for improved clarity and formatting by gagb in https://github.com/microsoft/autogen/pull/5714
* add anthropic native support by victordibia in https://github.com/microsoft/autogen/pull/5695
* 5663 ollama client host by rylativity in https://github.com/microsoft/autogen/pull/5674
* Fix visual accessibility issues 6 and 20 by peterychang in https://github.com/microsoft/autogen/pull/5725
* Add Serialization Instruction for MemoryContent by victordibia in https://github.com/microsoft/autogen/pull/5727
* Fix typo by stuartleeks in https://github.com/microsoft/autogen/pull/5754
* Add support for default model client, in AGS updates to settings UI by victordibia in https://github.com/microsoft/autogen/pull/5763
* fix incorrect field name from config to component by peterj in https://github.com/microsoft/autogen/pull/5761
* Make FileSurfer and CodeExecAgent Declarative by victordibia in https://github.com/microsoft/autogen/pull/5765
* docs: add note about markdown code block requirement in CodeExecutorA… by jay-thakur in https://github.com/microsoft/autogen/pull/5785
* add options to ollama client by peterychang in https://github.com/microsoft/autogen/pull/5805
* add stream_options to openai model by peterj in https://github.com/microsoft/autogen/pull/5788
* add api docstring to with_requirements by victordibia in https://github.com/microsoft/autogen/pull/5746
* Update with correct message types by laurentran in https://github.com/microsoft/autogen/pull/5789
* Update installation.md by LuSrackhall in https://github.com/microsoft/autogen/pull/5784
* Update magentic-one.md by Paulhb7 in https://github.com/microsoft/autogen/pull/5779
* Add ChromaDBVectorMemory in Extensions by victordibia in https://github.com/microsoft/autogen/pull/5308

New Contributors
* radamson made their first contribution in https://github.com/microsoft/autogen/pull/5570
* OndeVai made their first contribution in https://github.com/microsoft/autogen/pull/5537
* philippHorn made their first contribution in https://github.com/microsoft/autogen/pull/5667
* shuklaham made their first contribution in https://github.com/microsoft/autogen/pull/5684
* pengjunfeng11 made their first contribution in https://github.com/microsoft/autogen/pull/5698
* cedricmendelin made their first contribution in https://github.com/microsoft/autogen/pull/5422
* rylativity made their first contribution in https://github.com/microsoft/autogen/pull/5674
* stuartleeks made their first contribution in https://github.com/microsoft/autogen/pull/5754
* peterj made their first contribution in https://github.com/microsoft/autogen/pull/5761
* jay-thakur made their first contribution in https://github.com/microsoft/autogen/pull/5785
* YASAI03 made their first contribution in https://github.com/microsoft/autogen/pull/5794
* laurentran made their first contribution in https://github.com/microsoft/autogen/pull/5789
* mjunaidca made their first contribution in https://github.com/microsoft/autogen/pull/5737
* LuSrackhall made their first contribution in https://github.com/microsoft/autogen/pull/5784
* Paulhb7 made their first contribution in https://github.com/microsoft/autogen/pull/5779

**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.7...python-v0.4.8

python-v0.4.7
Overview

This release contains various bug fixes and feature improvements for the Python API.

Related news: our .NET API website is up and running: https://microsoft.github.io/autogen/dotnet/dev/. Our .NET Core API now has dev releases. Check it out!  

Important

Starting from v0.4.7, `ModelInfo`'s required fields will be enforced, so please include all required fields in `model_info` when creating model clients. For example,

```python
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="llama3.2:latest",
    base_url="http://localhost:11434/v1",
    api_key="placeholder",
    model_info={
        "vision": False,
        "function_calling": True,
        "json_output": False,
        "family": "unknown",
    },
)

response = await model_client.create([UserMessage(content="What is the capital of France?", source="user")])
print(response)
```


See [ModelInfo](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.ModelInfo) for more details.
 
New Features
* DockerCommandLineCodeExecutor support for additional volume mounts, exposed host ports by andrejpk in https://github.com/microsoft/autogen/pull/5383
* Remove and get subscription APIs for Python GrpcWorkerAgentRuntime by jackgerrits in https://github.com/microsoft/autogen/pull/5365
* Add `strict` mode support to `BaseTool`, `ToolSchema` and `FunctionTool` to allow tool calls to be used together with structured output mode by ekzhu in https://github.com/microsoft/autogen/pull/5507
* Make CodeExecutor components serializable by victordibia in https://github.com/microsoft/autogen/pull/5527

Bug Fixes
* fix: Address tool call execution scenario when model produces empty tool call ids by ekzhu in https://github.com/microsoft/autogen/pull/5509
* doc & fix: Enhance AgentInstantiationContext with detailed documentation and examples for agent instantiation; Fix a bug that caused a value error when the expected class is not provided in register_factory by ekzhu in https://github.com/microsoft/autogen/pull/5555
* fix: Add model info validation and improve error messaging by ekzhu in https://github.com/microsoft/autogen/pull/5556
* fix: Add warning and doc for Windows event loop policy to avoid subprocess issues in web surfer and local executor by ekzhu in https://github.com/microsoft/autogen/pull/5557

Doc Updates
* doc: Update API doc for MCP tool to include installation instructions by ekzhu in https://github.com/microsoft/autogen/pull/5482
* doc: Update AgentChat quickstart guide to enhance clarity and installation instructions by ekzhu in https://github.com/microsoft/autogen/pull/5499
* doc: API doc example for langchain database tool kit by ekzhu in https://github.com/microsoft/autogen/pull/5498
* Update Model Client Docs to Mention API Key from Environment Variables by victordibia in https://github.com/microsoft/autogen/pull/5515
* doc: improve tool guide in Core API doc by ekzhu in https://github.com/microsoft/autogen/pull/5546

Other Python Related Changes
* Update website version v0.4.6 by ekzhu in https://github.com/microsoft/autogen/pull/5481
* Reduce number of doc jobs for old releases by jackgerrits in https://github.com/microsoft/autogen/pull/5375
* Fix class name style in document by weijen in https://github.com/microsoft/autogen/pull/5516
* Update custom-agents.ipynb by yosuaw in https://github.com/microsoft/autogen/pull/5531
* fix: update 0.2 deployment workflow to use tag input instead of branch by ekzhu in https://github.com/microsoft/autogen/pull/5536
* fix: update help text for model configuration argument by gagb in https://github.com/microsoft/autogen/pull/5533
* Update python version to v0.4.7 by ekzhu in https://github.com/microsoft/autogen/pull/5558

New Contributors
* andrejpk made their first contribution in https://github.com/microsoft/autogen/pull/5383
* yosuaw made their first contribution in https://github.com/microsoft/autogen/pull/5531

**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.6...python-v0.4.7

python-v0.4.6
Features and Improvements

MCP Tool

In this release we added a new built-in tool by richard-gyiko for using [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) servers. MCP is an open protocol that allows agents to tap into an ecosystem of tools, from browsing the file system to Git repo management.

Here is an example of using the `mcp-server-fetch` tool for fetching web content as Markdown.

```bash
pip install mcp-server-fetch autogen-ext[mcp]
```

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.mcp import StdioServerParams, mcp_server_tools


async def main() -> None:
    # Get the fetch tool from mcp-server-fetch.
    fetch_mcp_server = StdioServerParams(command="uvx", args=["mcp-server-fetch"])
    tools = await mcp_server_tools(fetch_mcp_server)

    # Create an agent that can use the fetch tool.
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="fetcher", model_client=model_client, tools=tools, reflect_on_tool_use=True)  # type: ignore

    # Let the agent fetch the content of a URL and summarize it.
    result = await agent.run(task="Summarize the content of https://en.wikipedia.org/wiki/Seattle")
    print(result.messages[-1].content)


asyncio.run(main())
```


* Add MCP adapters to autogen-ext by richard-gyiko in https://github.com/microsoft/autogen/pull/5251

HTTP Tool

In this release we introduce a new built-in tool built by EItanya for querying HTTP-based API endpoints. This lets agents call remotely hosted tools through HTTP.

Here is an example of using the `httpbin.org` API for base64 decoding.

```bash
pip install autogen-ext[http-tool]
```

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.messages import TextMessage
from autogen_core import CancellationToken
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.tools.http import HttpTool

# Define a JSON schema for a base64 decode tool
base64_schema = {
    "type": "object",
    "properties": {
        "value": {"type": "string", "description": "The base64 value to decode"},
    },
    "required": ["value"],
}

# Create an HTTP tool for the httpbin API
base64_tool = HttpTool(
    name="base64_decode",
    description="base64 decode a value",
    scheme="https",
    host="httpbin.org",
    port=443,
    path="/base64/{value}",
    method="GET",
    json_schema=base64_schema,
)


async def main():
    # Create an assistant with the base64 tool
    model = OpenAIChatCompletionClient(model="gpt-4")
    assistant = AssistantAgent("base64_assistant", model_client=model, tools=[base64_tool])

    # The assistant can now use the base64 tool to decode the string
    response = await assistant.on_messages(
        [TextMessage(content="Can you base64 decode the value 'YWJjZGU=', please?", source="user")],
        CancellationToken(),
    )
    print(response.chat_message.content)


asyncio.run(main())
```


* Adding declarative HTTP tools to autogen ext by EItanya in https://github.com/microsoft/autogen/pull/5181


MagenticOne Improvement

We introduced several improvements to MagenticOne (M1) and its agents. We made M1 work with text-only models that can't read screenshots, and made prompt changes so it works better with smaller models.

Did you know that you can now configure the `m1` CLI tool with a YAML configuration file?

* WebSurfer: print viewport text by afourney in https://github.com/microsoft/autogen/pull/5329
* Allow m1 cli to read a configuration from a yaml file. by afourney in https://github.com/microsoft/autogen/pull/5341
* Add text-only model support to M1 by afourney in https://github.com/microsoft/autogen/pull/5344
* Ensure decriptions appear each on one line. Fix web_surfer's desc by afourney in https://github.com/microsoft/autogen/pull/5390
* Prompting changes to better support smaller models. by afourney in https://github.com/microsoft/autogen/pull/5386
* doc: improve m1 docs, remove duplicates by ekzhu in https://github.com/microsoft/autogen/pull/5460
* M1 docker by afourney in https://github.com/microsoft/autogen/pull/5437

SelectorGroupChat Improvement

In this release we made several improvements to make `SelectorGroupChat` work well with smaller models such as Llama 13B, and with hosted models that do not support the `name` field in Chat Completion messages.

Did you know that you can use models served through Ollama directly through the `OpenAIChatCompletionClient`? See: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/models.html#ollama

* Get SelectorGroupChat working for Llama models. by afourney in https://github.com/microsoft/autogen/pull/5409
* Mitigates 5401 by optionally prepending names to messages. by afourney in https://github.com/microsoft/autogen/pull/5448
* fix: improve speaker selection in SelectorGroupChat for weaker models by ekzhu in https://github.com/microsoft/autogen/pull/5454

Gemini Model Client

We enhanced our support for Gemini models. Now you can use Gemini models without passing in `model_info` and `base_url`.

```python
from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient

model_client = OpenAIChatCompletionClient(
    model="gemini-1.5-flash-8b",
    api_key="GEMINI_API_KEY",
)

response = await model_client.create([UserMessage(content="What is the capital of France?", source="user")])
print(response)
```


* feat: add gemini model families, enhance group chat selection for Gemini model and add tests by ekzhu in https://github.com/microsoft/autogen/pull/5334
* feat: enhance Gemini model support in OpenAI client and tests by ekzhu in https://github.com/microsoft/autogen/pull/5461

AGBench Update

* Significant updates to agbench. by afourney in https://github.com/microsoft/autogen/pull/5313

New Sample

Interested in integration with FastAPI? We have a new sample: https://github.com/microsoft/autogen/blob/main/python/samples/agentchat_fastapi

* Add sample chat application with FastAPI by ekzhu in https://github.com/microsoft/autogen/pull/5433
* docs: enhance human-in-the-loop tutorial with FastAPI websocket example by ekzhu in https://github.com/microsoft/autogen/pull/5455


Bug Fixes

* Fix reading string args from m1 cli by afourney in https://github.com/microsoft/autogen/pull/5343
* Fix summarize_page in a text-only context, and for unknown models. by afourney in https://github.com/microsoft/autogen/pull/5388
* fix: warn on empty chunks, don't error out by MohMaz in https://github.com/microsoft/autogen/pull/5332
* fix: add state management for oai assistant by lspinheiro in https://github.com/microsoft/autogen/pull/5352
* fix: streaming token mode cannot work in function calls and will infi… by so2liu in https://github.com/microsoft/autogen/pull/5396
* fix: do not count agent event in MaxMessageTermination condition by ekzhu in https://github.com/microsoft/autogen/pull/5436
* fix: remove sk tool adapter plugin name by lspinheiro in https://github.com/microsoft/autogen/pull/5444
* fix & doc: update selector prompt documentation and remove validation checks by ekzhu in https://github.com/microsoft/autogen/pull/5456
* fix: update SK adapter stream tool call processing. by lspinheiro in https://github.com/microsoft/autogen/pull/5449
* fix: Update SK kernel from tool to use method. by lspinheiro in https://github.com/microsoft/autogen/pull/5469

Other Python Changes
* Update Python website to v0.4.5 by ekzhu in https://github.com/microsoft/autogen/pull/5316
* Adding o3 family: o3-mini by razvanvalca in https://github.com/microsoft/autogen/pull/5325
* Ensure ModelInfo field is serialized for OpenAIChatCompletionClient by victordibia in https://github.com/microsoft/autogen/pull/5315
* docs(core_distributed-group-chat): fix the typos in the docs in the README.md by jsburckhardt in https://github.com/microsoft/autogen/pull/5347
* Assistant agent drop images when not provided with a vision-capable model. by afourney in https://github.com/microsoft/autogen/pull/5351
* docs(python): add instructions for syncing dependencies and checking samples by ekzhu in https://github.com/microsoft/autogen/pull/5362
* Fix typo by weijen in https://github.com/microsoft/autogen/pull/5361
* docs: add blog link to README for updates and resources by gagb in https://github.com/microsoft/autogen/pull/5368
* Memory component base by EItanya in https://github.com/microsoft/autogen/pull/5380
* Fixed example code in doc:Custom Agents by weijen in https://github.com/microsoft/autogen/pull/5381
* Various web surfer fixes. by afourney in https://github.com/microsoft/autogen/pull/5393
* Refactor grpc channel connection in servicer by jackgerrits in https://github.com/microsoft/autogen/pull/5402
* Updates to proto for state apis by jackgerrits in https://github.com/microsoft/autogen/pull/5407
* feat: add integration workflow for testing multiple packages by ekzhu in https://github.com/microsoft/autogen/pull/5412
* Flush console output after every message. by afourney in https://github.com/microsoft/autogen/pull/5415
* Use a root json element instead of dict by jackgerrits in https://github.com/microsoft/autogen/pull/5430
* Split out GRPC tests by jackgerrits in https://github.com/microsoft/autogen/pull/5431
* feat: enhance AzureAIChatCompletionClient validation and add unit tests by ekzhu in https://github.com/microsoft/autogen/pull/5417
* Fix typo in Swarm doc by weijen in https://github.com/microsoft/autogen/pull/5435
* Update teams.ipynb : In the sample code the termination condition is set to the text "APPROVE" but the documentation mentions "TERMINATE" by abhijeethaval in https://github.com/microsoft/autogen/pull/5426
* Added the Claude family of models to ModelFamily by rohanthacker in https://github.com/microsoft/autogen/pull/5443
* feat: add indictor for tool failure to FunctionExecutionResult by wistuba in https://github.com/microsoft/autogen/pull/5428
* Update version to 0.4.6 by ekzhu in https://github.com/microsoft/autogen/pull/5477
* doc: improve agent tutorial to include multi-modal input. by ekzhu in https://github.com/microsoft/autogen/pull/5471
* doc: enhance extensions user guide with component examples by ekzhu in https://github.com/microsoft/autogen/pull/5480
* Implement control channel in python host servicer by jackgerrits in https://github.com/microsoft/autogen/pull/5427
* Improve custom agentchat agent docs with model clients (gemini example) and serialization by victordibia in https://github.com/microsoft/autogen/pull/5468

New Contributors
* razvanvalca made their first contribution in https://github.com/microsoft/autogen/pull/5325
* jsburckhardt made their first contribution in https://github.com/microsoft/autogen/pull/5347
* weijen made their first contribution in https://github.com/microsoft/autogen/pull/5361
* EItanya made their first contribution in https://github.com/microsoft/autogen/pull/5380
* so2liu made their first contribution in https://github.com/microsoft/autogen/pull/5396
* abhijeethaval made their first contribution in https://github.com/microsoft/autogen/pull/5426
* wistuba made their first contribution in https://github.com/microsoft/autogen/pull/5428

**Full Changelog**: https://github.com/microsoft/autogen/compare/python-v0.4.5...python-v0.4.6

autogenstudio-v0.4.1
What's New


AutoGen Studio Declarative Configuration
- In #5172, you can now build your agents in Python and export them to a JSON format that works in AutoGen Studio.

AutoGen Studio now uses the same [declarative configuration](https://microsoft.github.io/autogen/dev/user-guide/core-user-guide/framework/component-config.html) interface as the rest of the AutoGen library. This means you can create your agent teams in Python and then `dump_component()` them into a JSON spec that can be used directly in AutoGen Studio! This eliminates compatibility (or feature inconsistency) errors between AGS and AgentChat Python, as the exact same specs can be used across both.

> See a video tutorial on AutoGen Studio v0.4 (02/25) - [https://youtu.be/oum6EI7wohM](https://youtu.be/oum6EI7wohM)

[![A Friendly Introduction to AutoGen Studio v0.4](https://img.youtube.com/vi/oum6EI7wohM/maxresdefault.jpg)](https://www.youtube.com/watch?v=oum6EI7wohM)

Here's an example of an agent team and how it is converted to a JSON file:

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.conditions import TextMentionTermination

agent = AssistantAgent(
    name="weather_agent",
    model_client=OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    ),
)

agent_team = RoundRobinGroupChat([agent], termination_condition=TextMentionTermination("TERMINATE"))
config = agent_team.dump_component()
print(config.model_dump_json())
```


```json
{
  "provider": "autogen_agentchat.teams.RoundRobinGroupChat",
  "component_type": "team",
  "version": 1,
  "component_version": 1,
  "description": "A team that runs a group chat with participants taking turns in a round-robin fashion\n to publish a message to all.",
  "label": "RoundRobinGroupChat",
  "config": {
    "participants": [
      {
        "provider": "autogen_agentchat.agents.AssistantAgent",
        "component_type": "agent",
        "version": 1,
        "component_version": 1,
        "description": "An agent that provides assistance with tool use.",
        "label": "AssistantAgent",
        "config": {
          "name": "weather_agent",
          "model_client": {
            "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
            "component_type": "model",
            "version": 1,
            "component_version": 1,
            "description": "Chat completion client for OpenAI hosted models.",
            "label": "OpenAIChatCompletionClient",
            "config": { "model": "gpt-4o-mini" }
          },
          "tools": [],
          "handoffs": [],
          "model_context": {
            "provider": "autogen_core.model_context.UnboundedChatCompletionContext",
            "component_type": "chat_completion_context",
            "version": 1,
            "component_version": 1,
            "description": "An unbounded chat completion context that keeps a view of the all the messages.",
            "label": "UnboundedChatCompletionContext",
            "config": {}
          },
          "description": "An agent that provides assistance with ability to use tools.",
          "system_message": "You are a helpful AI assistant. Solve tasks using your tools. Reply with TERMINATE when the task has been completed.",
          "model_client_stream": false,
          "reflect_on_tool_use": false,
          "tool_call_summary_format": "{result}"
        }
      }
    ],
    "termination_condition": {
      "provider": "autogen_agentchat.conditions.TextMentionTermination",
      "component_type": "termination",
      "version": 1,
      "component_version": 1,
      "description": "Terminate the conversation if a specific text is mentioned.",
      "label": "TextMentionTermination",
      "config": { "text": "TERMINATE" }
    }
  }
}
```


> Note: If you are building custom agents and want to use them in AGS, you will need to inherit from the AgentChat `BaseChatAgent` and `Component` classes.

> Note: This is a breaking change in AutoGen Studio. You will need to update your AGS specs for any teams created with autogenstudio < 0.4.1.

Ability to Test Teams in Team Builder
- In #5392, you can now test your teams as you build them, with no need to switch between the team builder and playground sessions.

You can now test teams directly as you build them in the team builder UI, as you edit your team (either via drag-and-drop or by editing the JSON spec).

<img width="1738" alt="Image" src="https://github.com/user-attachments/assets/4b895df2-3bad-474e-bec6-4fbcbf1c4346" />

<img width="1761" alt="Image" src="https://github.com/user-attachments/assets/65f52eb9-e926-4168-88fb-d2496c159474" />



New Default Agents in Gallery (Web Agent Team, Deep Research Team)
- In #5416, an implementation of a Web Agent Team and a Deep Research Team was added to the default gallery.

The default gallery now has two additional default agents that you can build on and test:

- Web Agent Team - A team with 3 agents - a Web Surfer agent that can browse the web, a Verification Assistant that verifies and summarizes information, and a User Proxy that provides human feedback when needed.
- Deep Research Team - A team with 3 agents - a Research Assistant that performs web searches and analyzes information, a Verifier that ensures research quality and completeness, and a Summary Agent that provides a detailed markdown summary of the research as a report to the user.


Other Improvements

Older features that are still available in `v0.4.1`:

- Real-time agent updates streaming to the frontend
- Run control: You can now stop agents mid-execution if they're heading in the wrong direction, adjust the team, and continue
- Interactive feedback: Add a UserProxyAgent to get human input through the UI during team runs
- Message flow visualization: See how agents communicate with each other
- Ability to import specifications from external galleries
- Ability to wrap agent teams into an API using the AutoGen Studio CLI

To update to the latest version:

bash
pip install -U autogenstudio


The overall roadmap for AutoGen Studio is here: 4006.
Contributions welcome!



python-v0.4.5
What's New

Streaming for AgentChat agents and teams

* Introduce ModelClientStreamingChunkEvent for streaming model output and update handling in agents and console by ekzhu in https://github.com/microsoft/autogen/pull/5208

To enable streaming from an AssistantAgent, set `model_client_stream=True` when creating it. The token stream will be available when you run the agent directly, or as part of a team when you call `run_stream`.

If you want to see tokens streaming in your console application, you can use `Console` directly.

python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"), model_client_stream=True)
    await Console(agent.run_stream(task="Write a short story with a surprising ending."))


asyncio.run(main())


If you are handling the messages yourself and streaming to the frontend, you can handle the
`autogen_agentchat.messages.ModelClientStreamingChunkEvent` messages.

python
import asyncio
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"), model_client_stream=True)
    async for message in agent.run_stream(task="Write 3 line poem."):
        print(message)


asyncio.run(main())



source='user' models_usage=None content='Write 3 line poem.' type='TextMessage'
source='assistant' models_usage=None content='Silent' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' whispers' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' glide' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=',' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' \n' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='Moon' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='lit' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' dreams' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' dance' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' through' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' the' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' night' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=',' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' \n' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='Stars' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' watch' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' from' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content=' above' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=None content='.' type='ModelClientStreamingChunkEvent'
source='assistant' models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0) content='Silent whispers glide, \nMoonlit dreams dance through the night, \nStars watch from above.' type='TextMessage'
TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Write 3 line poem.', type='TextMessage'), TextMessage(source='assistant', models_usage=RequestUsage(prompt_tokens=0, completion_tokens=0), content='Silent whispers glide, \nMoonlit dreams dance through the night, \nStars watch from above.', type='TextMessage')], stop_reason=None)
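If you are routing the stream to your own frontend, the buffering pattern is simple to sketch. The following uses a plain dataclass as a stand-in for AutoGen's `ModelClientStreamingChunkEvent` (the `ChunkEvent` name and the fake stream are illustrative, not the library's API):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class ChunkEvent:
    # Stand-in for autogen_agentchat.messages.ModelClientStreamingChunkEvent.
    content: str


async def fake_stream():
    # Simulates the token stream produced by run_stream().
    for token in ["Silent", " whispers", " glide", "."]:
        yield ChunkEvent(content=token)


async def collect(stream) -> str:
    # Buffer chunk events into the full message text.
    parts = []
    async for event in stream:
        if isinstance(event, ChunkEvent):
            parts.append(event.content)
    return "".join(parts)


print(asyncio.run(collect(fake_stream())))  # Silent whispers glide.
```

With the real message types, you would filter on `ModelClientStreamingChunkEvent` and forward each `content` fragment to the client as it arrives.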


Read more here: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/agents.html#streaming-tokens

Also, see the sample showing how to stream a team's messages to ChainLit frontend: https://github.com/microsoft/autogen/tree/python-v0.4.5/python/samples/agentchat_chainlit

R1-style reasoning output

* Support R1 reasoning text in model create result; enhance API docs by ekzhu in https://github.com/microsoft/autogen/pull/5262

python
import asyncio
from autogen_core.models import UserMessage, ModelFamily
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="deepseek-r1:1.5b",
        api_key="placeholder",
        base_url="http://localhost:11434/v1",
        model_info={
            "function_calling": False,
            "json_output": False,
            "vision": False,
            "family": ModelFamily.R1,
        },
    )

    # Test basic completion with the Ollama deepseek-r1:1.5b model.
    create_result = await model_client.create(
        messages=[
            UserMessage(
                content="Taking two balls from a bag of 10 green balls and 20 red balls, "
                "what is the probability of getting a green and a red ball?",
                source="user",
            ),
        ]
    )

    # The CreateResult.thought field contains the thinking content.
    print(create_result.thought)
    print(create_result.content)


asyncio.run(main())


Streaming is also supported with R1-style reasoning output.

See the sample showing R1 playing chess: https://github.com/microsoft/autogen/tree/python-v0.4.5/python/samples/agentchat_chess_game

FunctionTool for partial functions

* FunctionTool partial support by nour-bouzid in https://github.com/microsoft/autogen/pull/5183

Now you can define function tools from partial functions, where some parameters have been set beforehand.

python
import json
from functools import partial
from autogen_core.tools import FunctionTool


def get_weather(country: str, city: str) -> str:
    return f"The temperature in {city}, {country} is 75°"


partial_function = partial(get_weather, "Germany")
tool = FunctionTool(partial_function, description="Partial function tool.")

print(json.dumps(tool.schema, indent=2))


json
{
"name": "get_weather",
"description": "Partial function tool.",
"parameters": {
"type": "object",
"properties": {
"city": {
"description": "city",
"title": "City",
"type": "string"
}
},
"required": [
"city"
]
}
}
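The bound `country` parameter drops out of the schema because `functools.partial` removes it from the callable's signature, which `FunctionTool` introspects. The same effect is visible with the standard library alone:

```python
import inspect
from functools import partial


def get_weather(country: str, city: str) -> str:
    return f"The temperature in {city}, {country} is 75°"


# Binding the first positional argument removes it from the signature.
partial_function = partial(get_weather, "Germany")

# Only the unbound parameter remains visible.
print(list(inspect.signature(partial_function).parameters))  # ['city']
```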



CodeExecutorAgent update

* Added an optional sources parameter to CodeExecutorAgent by afourney in https://github.com/microsoft/autogen/pull/5259

New Samples

* Streamlit + AgentChat sample by husseinkorly in https://github.com/microsoft/autogen/pull/5306
* ChainLit + AgentChat sample with streaming by ekzhu in https://github.com/microsoft/autogen/pull/5304
* Chess sample showing R1-Style reasoning for planning and strategizing by ekzhu in https://github.com/microsoft/autogen/pull/5285


Documentation update:
* Add Semantic Kernel Adapter documentation and usage examples in user guides by ekzhu in https://github.com/microsoft/autogen/pull/5256
* Update human-in-the-loop tutorial with better system message to signal termination condition by ekzhu in https://github.com/microsoft/autogen/pull/5253

Moves

* Remove old autogen_magentic_one package. by afourney in https://github.com/microsoft/autogen/pull/5305


Bug Fixes

* fix: handle non-string function arguments in tool calls and add corresponding warnings by ekzhu in https://github.com/microsoft/autogen/pull/5260
* Add default_header support by nour-bouzid in https://github.com/microsoft/autogen/pull/5249
* feat: update OpenAIAssistantAgent to support AsyncAzureOpenAI client by ekzhu in https://github.com/microsoft/autogen/pull/5312


All Other Python Related Changes
* Update website for v0.4.4 by ekzhu in https://github.com/microsoft/autogen/pull/5246
* update dependencies to work with protobuf 5 by MohMaz in https://github.com/microsoft/autogen/pull/5195
* Adjusted M1 agent system prompt to remove TERMINATE by afourney in https://github.com/microsoft/autogen/pull/5263
* https://github.com/microsoft/autogen/pull/5270
* chore: update package versions to 0.4.5 and remove deprecated requirements by ekzhu in https://github.com/microsoft/autogen/pull/5280
* Update Distributed Agent Runtime Cross-platform Sample by linznin in https://github.com/microsoft/autogen/pull/5164
* fix: windows check ci failure by bassmang in https://github.com/microsoft/autogen/pull/5287
* fix: type issues in streamlit sample and add streamlit to dev dependencies by ekzhu in https://github.com/microsoft/autogen/pull/5309
* chore: add asyncio_atexit dependency to docker requirements by ekzhu in https://github.com/microsoft/autogen/pull/5307
* feat: add o3 to model info; update chess example by ekzhu in https://github.com/microsoft/autogen/pull/5311

New Contributors
* nour-bouzid made their first contribution in https://github.com/microsoft/autogen/pull/5183
* linznin made their first contribution in https://github.com/microsoft/autogen/pull/5164
* husseinkorly made their first contribution in https://github.com/microsoft/autogen/pull/5306

**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.4...python-v0.4.5

0.4.4

What's New

Serializable Configuration for AgentChat

* Make FunctionTools Serializable (Declarative) by victordibia in https://github.com/microsoft/autogen/pull/5052
* Make AgentChat Team Config Serializable by victordibia in https://github.com/microsoft/autogen/pull/5071
* improve component config, add description support in dump_component by victordibia in https://github.com/microsoft/autogen/pull/5203

This new feature allows you to serialize an agent or a team to a JSON string, and deserialize them back into objects. Make sure to also read about `save_state` and `load_state`: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/state.html.

**You now can serialize and deserialize both the configurations and the state of agents and teams.**

For example, create a `RoundRobinGroupChat`, and serialize its configuration and state.

python
import asyncio
import json
from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def dump_team_config() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    assistant = AssistantAgent(
        "assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
    )
    critic = AssistantAgent(
        "critic",
        model_client=model_client,
        system_message="Provide feedback. Reply with 'APPROVE' if the feedback has been addressed.",
    )
    termination = TextMentionTermination("APPROVE", sources=["critic"])
    group_chat = RoundRobinGroupChat(
        [assistant, critic], termination_condition=termination
    )
    # Run the group chat.
    await Console(group_chat.run_stream(task="Write a short poem about winter."))
    # Dump the team configuration to a JSON file.
    config = group_chat.dump_component()
    with open("team_config.json", "w") as f:
        f.write(config.model_dump_json(indent=4))
    # Dump the team state to a JSON file.
    state = await group_chat.save_state()
    with open("team_state.json", "w") as f:
        f.write(json.dumps(state, indent=4))


asyncio.run(dump_team_config())


This produces the serialized team configuration and state, truncated here for illustration purposes.

json
{
"provider": "autogen_agentchat.teams.RoundRobinGroupChat",
"component_type": "team",
"version": 1,
"component_version": 1,
"description": "A team that runs a group chat with participants taking turns in a round-robin fashion\n to publish a message to all.",
"label": "RoundRobinGroupChat",
"config": {
"participants": [
{
"provider": "autogen_agentchat.agents.AssistantAgent",
"component_type": "agent",
"version": 1,
"component_version": 1,
"description": "An agent that provides assistance with tool use.",
"label": "AssistantAgent",
"config": {
"name": "assistant",
"model_client": {
"provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
"component_type": "model",
"version": 1,
"component_version": 1,
"description": "Chat completion client for OpenAI hosted models.",
"label": "OpenAIChatCompletionClient",
"config": {
"model": "gpt-4o"
}


json
{
"type": "TeamState",
"version": "1.0.0",
"agent_states": {
"group_chat_manager/25763eb1-78b2-4509-8607-7224ae383575": {
"type": "RoundRobinManagerState",
"version": "1.0.0",
"message_thread": [
{
"source": "user",
"models_usage": null,
"content": "Write a short poem about winter.",
"type": "TextMessage"
},
{
"source": "assistant",
"models_usage": {
"prompt_tokens": 25,
"completion_tokens": 150
},
"content": "Amidst the still and silent air, \nWhere frost adorns the branches bare, \nThe world transforms in shades of white, \nA wondrous, shimmering, quiet sight.\n\nThe whisper of the wind is low, \nAs snowflakes drift and dance and glow. \nEach crystal, delicate and bright, \nFalls gently through the silver night.\n\nThe earth is hushed in pure embrace, \nA tranquil, glistening, untouched space. \nYet warmth resides in hearts that roam, \nFinding solace in the hearth of home.\n\nIn winter\u2019s breath, a promise lies, \nBeneath the veil of cold, clear skies: \nThat spring will wake the sleeping land, \nAnd life will bloom where now we stand.",
"type": "TextMessage"


Load the configuration and state back into objects.

python
import asyncio
import json
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.base import Team


async def load_team_config() -> None:
    # Load the team configuration from a JSON file.
    with open("team_config.json", "r") as f:
        config = json.load(f)
    group_chat = Team.load_component(config)
    # Load the team state from a JSON file.
    with open("team_state.json", "r") as f:
        state = json.load(f)
    await group_chat.load_state(state)
    assert isinstance(group_chat, RoundRobinGroupChat)


asyncio.run(load_team_config())


This new feature allows you to manage persistent sessions across server-client based user interaction.

Azure AI Client for Azure-Hosted Models

* Feature/azure ai inference client by lspinheiro and rohanthacker in https://github.com/microsoft/autogen/pull/5153

This allows you to use Azure and GitHub-hosted models, including Phi-4, Mistral models, and Cohere models.

python
import asyncio
import os

from autogen_core.models import UserMessage
from autogen_ext.models.azure import AzureAIChatCompletionClient
from azure.core.credentials import AzureKeyCredential


async def main() -> None:
    client = AzureAIChatCompletionClient(
        model="Phi-4",
        endpoint="https://models.inference.ai.azure.com",
        # To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
        # Create your PAT token by following the instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
        credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
        model_info={
            "json_output": False,
            "function_calling": False,
            "vision": False,
            "family": "unknown",
        },
    )
    result = await client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result)


asyncio.run(main())


Rich Console UI for Magentic One CLI

* RichConsole: Prettify m1 CLI console using rich 4806 by gziz in https://github.com/microsoft/autogen/pull/5123

You can now enable pretty-printed output for the `m1` command line tool by adding the `--rich` argument.

bash
m1 --rich "Find information about AutoGen"


<img width="1091" alt="Screenshot 2025-01-28 191752" src="https://github.com/user-attachments/assets/18a7fa9f-158e-4531-b449-b16c2f0c5c2b" />


Default In-Memory Cache for ChatCompletionCache

* Implement default in-memory store for ChatCompletionCache by srjoglekar246 in https://github.com/microsoft/autogen/pull/5188

This allows you to cache model client calls without specifying an external cache service.

python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache


async def main() -> None:
    # Create a model client.
    client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create a cached wrapper around the model client.
    cached_client = ChatCompletionCache(client)

    # Call the cached client.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)

    # Call the cached client again.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)


asyncio.run(main())



The capital of France is Paris. False
The capital of France is Paris. True


Docs Update
* Update model client documentation add Ollama, Gemini, Azure AI models by ekzhu in https://github.com/microsoft/autogen/pull/5196
* Add Model Client Cache section to migration guide by ekzhu in https://github.com/microsoft/autogen/pull/5197
* docs: Enhance documentation for SingleThreadedAgentRuntime with usage examples and clarifications; undeprecate process_next by ekzhu in https://github.com/microsoft/autogen/pull/5230
* docs: Update user guide notebooks to enhance clarity and add structured output by ekzhu in https://github.com/microsoft/autogen/pull/5224
* docs: Core API doc update: split out model context from model clients; separate framework and components by ekzhu in https://github.com/microsoft/autogen/pull/5171
* docs: Add a helpful comment to swarm.ipynb by withsmilo in https://github.com/microsoft/autogen/pull/5145


Bug Fixes
* fix: update SK model adapter constructor by lspinheiro in https://github.com/microsoft/autogen/pull/5150. This allows the SK Model Client to be used inside an `AssistantAgent`.
* Fix function tool naming to avoid overriding the name input by Pierrolo in https://github.com/microsoft/autogen/pull/5165
* fix: Enhance OpenAI client to handle additional stop reasons and improve tool call validation in tests to address empty tool_calls list. by ekzhu in https://github.com/microsoft/autogen/pull/5223


Other Changes
* Make ChatAgent an ABC by jackgerrits in https://github.com/microsoft/autogen/pull/5129
* Update website for 0.4.3 by jackgerrits in https://github.com/microsoft/autogen/pull/5139
* Make Memory and Team an ABC by victordibia in https://github.com/microsoft/autogen/pull/5149
* Closes 5059 by fbpazos in https://github.com/microsoft/autogen/pull/5156
* Update proto to include remove sub, move to rpc based operations by jackgerrits in https://github.com/microsoft/autogen/pull/5168
* Add dependencies to distributed group chat example by MohMaz in https://github.com/microsoft/autogen/pull/5175
* Communicate client id via metadata in grpc runtime by jackgerrits in https://github.com/microsoft/autogen/pull/5185
* Fixed typo fixing issue 5186 by raimondasl in https://github.com/microsoft/autogen/pull/5187
* Improve grpc type checking by jackgerrits in https://github.com/microsoft/autogen/pull/5189
* Impl register and add sub RPC by jackgerrits in https://github.com/microsoft/autogen/pull/5191
* rysweet-unsubscribe-and-agent-tests-4744 by rysweet in https://github.com/microsoft/autogen/pull/4920
* make AssistantAgent and Handoff use BaseTool by victordibia in https://github.com/microsoft/autogen/pull/5193
* docs: s/Exisiting/Existing/g by bih in https://github.com/microsoft/autogen/pull/5202
* Rysweet 5201 refactor runtime interface by rysweet in https://github.com/microsoft/autogen/pull/5204
* Update model client documentation add Ollama, Gemini, Azure AI models by ekzhu in https://github.com/microsoft/autogen/pull/5196
* Rysweet 5207 net runtime interface to match python add registration to interface and inmemoryruntime by rysweet in https://github.com/microsoft/autogen/pull/5215
* Rysweet 5217 add send message by rysweet in https://github.com/microsoft/autogen/pull/5219
* Update literature-review.ipynb to fix possible copy-and-paste error by xtophs in https://github.com/microsoft/autogen/pull/5214
* Updated docs for _azure_ai_client.py by rohanthacker in https://github.com/microsoft/autogen/pull/5199
* Refactor Dotnet core to align with Python by jackgerrits in https://github.com/microsoft/autogen/pull/5225
* Remove channel based control plane APIs, cleanup proto by jackgerrits in https://github.com/microsoft/autogen/pull/5236
* update versions to 0.4.4 and m1 cli to 0.2.3 by ekzhu in https://github.com/microsoft/autogen/pull/5229
* feat: Enable queueing and step mode in InProcessRuntime by lokitoth in https://github.com/microsoft/autogen/pull/5239
* feat: Expose self-delivery for InProcessRuntime in AgentsApp by lokitoth in https://github.com/microsoft/autogen/pull/5240
* refactor: Reduce reflection calls when using HandlerInvoker by lokitoth in https://github.com/microsoft/autogen/pull/5241
* fix: Various fixes and cleanups to dotnet autogen core by bassmang in https://github.com/microsoft/autogen/pull/5242
* Start from just protos in core.grpc by jackgerrits in https://github.com/microsoft/autogen/pull/5243

New Contributors
* fbpazos made their first contribution in https://github.com/microsoft/autogen/pull/5156
* withsmilo made their first contribution in https://github.com/microsoft/autogen/pull/5145
* Pierrolo made their first contribution in https://github.com/microsoft/autogen/pull/5165
* raimondasl made their first contribution in https://github.com/microsoft/autogen/pull/5187
* bih made their first contribution in https://github.com/microsoft/autogen/pull/5202
* xtophs made their first contribution in https://github.com/microsoft/autogen/pull/5214

**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.3...v0.4.4

0.4.3

What's new

This is the first release since 0.4.0 with significant new features! We look forward to hearing feedback and suggestions from the community.

Chat completion model cache

One of the big missing features from 0.2 was the ability to seamlessly cache model client completions. This release adds [`ChatCompletionCache`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.cache.html#autogen_ext.models.cache.ChatCompletionCache) which can wrap **any** other [`ChatCompletionClient`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.ChatCompletionClient) and cache completions.

There is a [`CacheStore`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.html#autogen_core.CacheStore) interface to allow for easy implementation of new caching backends. The currently available implementations are:

- [`DiskCacheStore`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.cache_store.diskcache.html#autogen_ext.cache_store.diskcache.DiskCacheStore)
- [`RedisStore`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.cache_store.redis.html#autogen_ext.cache_store.redis.RedisStore)
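New caching backends only need to supply `get`/`set` for serialized completions. A minimal dict-backed sketch of that shape (a conceptual stand-in, not the real `CacheStore` class from `autogen_core`):

```python
from typing import Dict, Generic, Optional, TypeVar

T = TypeVar("T")


class InMemoryStore(Generic[T]):
    # Dict-backed sketch mirroring the get/set shape of a cache store.
    def __init__(self) -> None:
        self._data: Dict[str, T] = {}

    def get(self, key: str, default: Optional[T] = None) -> Optional[T]:
        # Return the cached value, or the default on a cache miss.
        return self._data.get(key, default)

    def set(self, key: str, value: T) -> None:
        self._data[key] = value


store = InMemoryStore[str]()
store.set("prompt-hash", "cached completion")
print(store.get("prompt-hash"))  # cached completion
```

A Redis- or disk-backed implementation would keep the same two methods and swap the dict for the external store.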

python
import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache, CHAT_CACHE_VALUE_TYPE
from autogen_ext.cache_store.diskcache import DiskCacheStore
from diskcache import Cache


async def main():
    with tempfile.TemporaryDirectory() as tmpdirname:
        openai_model_client = OpenAIChatCompletionClient(model="gpt-4o")

        cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdirname))
        cache_client = ChatCompletionCache(openai_model_client, cache_store)

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print response from OpenAI
        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print cached response


asyncio.run(main())


`ChatCompletionCache` is not yet supported by the declarative component config, see the [issue](https://github.com/microsoft/autogen/issues/5141) to track progress.

4924 by srjoglekar246

GraphRAG

This release adds support for GraphRAG as a tool agents can call. You can find a sample for how to use this integration [here](https://github.com/microsoft/autogen/tree/main/python/samples/agentchat_graphrag), and docs for [`LocalSearchTool`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.tools.graphrag.html#autogen_ext.tools.graphrag.LocalSearchTool) and [`GlobalSearchTool`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.tools.graphrag.html#autogen_ext.tools.graphrag.GlobalSearchTool).

python
import asyncio
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.ui import Console
from autogen_ext.tools.graphrag import GlobalSearchTool
from autogen_agentchat.agents import AssistantAgent


async def main():
    # Initialize the OpenAI client.
    openai_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    )

    # Set up the global search tool.
    global_tool = GlobalSearchTool.from_settings(settings_path="./settings.yaml")

    # Create an assistant agent with the global search tool.
    assistant_agent = AssistantAgent(
        name="search_assistant",
        tools=[global_tool],
        model_client=openai_client,
        system_message=(
            "You are a tool selector AI assistant using the GraphRAG framework. "
            "Your primary task is to determine the appropriate search tool to call based on the user's query. "
            "For broader, abstract questions requiring a comprehensive understanding of the dataset, call the 'global_search' function."
        ),
    )

    # Run a sample query.
    query = "What is the overall sentiment of the community reports?"
    await Console(assistant_agent.run_stream(task=query))


if __name__ == "__main__":
    asyncio.run(main())


4612 by lspinheiro

Semantic Kernel model adapter

Semantic Kernel has an [extensive collection of AI connectors](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai#readme). In this release we added support to adapt a Semantic Kernel AI Connector to an AutoGen ChatCompletionClient using the [`SKChatCompletionAdapter`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.semantic_kernel.html#autogen_ext.models.semantic_kernel.SKChatCompletionAdapter).

Currently this requires passing the kernel during create, and so cannot be used with `AssistantAgent` directly yet. This will be fixed in a future release (5144).

4851 by lspinheiro

AutoGen to Semantic Kernel tool adapter

We also added a tool adapter, but this time to allow AutoGen tools to be added to a Kernel, called [`KernelFunctionFromTool`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.tools.semantic_kernel.html#autogen_ext.tools.semantic_kernel.KernelFunctionFromTool).

4851 by lspinheiro

Jupyter Code Executor

This release also brings forward Jupyter code executor functionality that we had in 0.2, as the [`JupyterCodeExecutor`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.code_executors.jupyter.html#autogen_ext.code_executors.jupyter.JupyterCodeExecutor).

Please note that this currently only supports **local** execution and should be used with caution.

4885 by Leon0402

Memory

It's still early, but we merged the interface for agent memory in this release. This allows agents to enrich their context from a memory store and save information to it. The interface is defined in Core, and `AssistantAgent` in AgentChat now accepts a memory parameter. There is an initial example memory implementation which simply injects all memories as system messages for the agent. The intention is for the memory interface to be usable for both RAG and agent memory systems going forward.

- [Tutorial](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/memory.html)
- Core [`Memory`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.memory.html#autogen_core.memory.Memory) interface
- Existing [`AssistantAgent`](https://microsoft.github.io/autogen/stable/reference/python/autogen_agentchat.agents.html#autogen_agentchat.agents.AssistantAgent) with new memory parameter

4438 by victordibia, 5053 by ekzhu
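The inject-as-system-messages strategy described above can be sketched with plain dicts standing in for AutoGen's message types (a conceptual illustration, not the library's `Memory` API):

```python
def enrich_context(messages, memories):
    # Prepend each stored memory as a system message, mirroring the
    # initial example memory implementation (plain dicts stand in for
    # AutoGen's message and Memory types).
    memory_messages = [{"role": "system", "content": m} for m in memories]
    return memory_messages + list(messages)


memories = ["User prefers metric units.", "User lives in Paris."]
chat = [{"role": "user", "content": "What's the weather?"}]
context = enrich_context(chat, memories)
print([m["role"] for m in context])  # ['system', 'system', 'user']
```

A RAG-style implementation would retrieve only the memories relevant to the current task before injecting them, but the enrichment step has the same shape.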

Declarative config

We're continuing to expand support for declarative configs throughout the framework. In this release, we've added support for termination conditions and base chat agents. Once we're done with this, you'll be able to configure an entire team of agents with a single config file and have it work seamlessly with AutoGen Studio. Stay tuned!

4984, 5055 by victordibia

Other

* Add sources field to TextMentionTermination by Leon0402 in https://github.com/microsoft/autogen/pull/5106
* Update gpt-4o model version to 2024-08-06 by ekzhu in https://github.com/microsoft/autogen/pull/5117

Bug fixes
* Retry multiple times when M1 selects an invalid agent. Make agent sel… by afourney in https://github.com/microsoft/autogen/pull/5079
* fix: normalize finish reason in CreateResult response by ekzhu in https://github.com/microsoft/autogen/pull/5085
* Pass context between AssistantAgent for handoffs by ekzhu in https://github.com/microsoft/autogen/pull/5084
* fix: ensure proper handling of structured output in OpenAI client and improve test coverage for structured output by ekzhu in https://github.com/microsoft/autogen/pull/5116
* fix: use tool_calls field to detect tool calls in OpenAI client; add integration tests for OpenAI and Gemini by ekzhu in https://github.com/microsoft/autogen/pull/5122

Other changes

* Update website for 0.4.1 by jackgerrits in https://github.com/microsoft/autogen/pull/5031
* PoC AGS dev container by JohanForngren in https://github.com/microsoft/autogen/pull/5026
* Update studio dep by ekzhu in https://github.com/microsoft/autogen/pull/5062
* Update studio dep to use version bound by ekzhu in https://github.com/microsoft/autogen/pull/5063
* Update gpt-4o model version and add new model details by keenranger in https://github.com/microsoft/autogen/pull/5056
* Improve AGS Documentation by victordibia in https://github.com/microsoft/autogen/pull/5065
* Pin uv to 0.5.18 by jackgerrits in https://github.com/microsoft/autogen/pull/5067
* Update version to 0.4.3 pre-emptively by jackgerrits in https://github.com/microsoft/autogen/pull/5066
* fix: dotnet azure pipeline (uv sync installation) by bassmang in https://github.com/microsoft/autogen/pull/5042
* docs: .NET Documentation by lokitoth in https://github.com/microsoft/autogen/pull/5039
* [Documentation] Update tools.ipynb: use system messages in the tool_agent_caller_loop session by zysoong in https://github.com/microsoft/autogen/pull/5068
* docs: enhance agents.ipynb with parallel tool calls section by ekzhu in https://github.com/microsoft/autogen/pull/5088
* Use caching to run tests and report coverage by lspinheiro in https://github.com/microsoft/autogen/pull/5086
* fix: ESPR dotnet code signing by bassmang in https://github.com/microsoft/autogen/pull/5081
* Update AGS pyproject.toml by victordibia in https://github.com/microsoft/autogen/pull/5101
* docs: update AssistantAgent documentation with a new figure, attention and warning notes by ekzhu in https://github.com/microsoft/autogen/pull/5099
* Rysweet fix integration tests and xlang by rysweet in https://github.com/microsoft/autogen/pull/5107
* docs: enhance Swarm user guide with notes on tool calling by ekzhu in https://github.com/microsoft/autogen/pull/5103
* fix a small typo by marinator86 in https://github.com/microsoft/autogen/pull/5120

New Contributors
* lokitoth made their first contribution in https://github.com/microsoft/autogen/pull/5060
* keenranger made their first contribution in https://github.com/microsoft/autogen/pull/5056
* zysoong made their first contribution in https://github.com/microsoft/autogen/pull/5068
* marinator86 made their first contribution in https://github.com/microsoft/autogen/pull/5120

**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.1...v0.4.3

0.4.2

- Change async input strategy in order to remove unintentional and accidentally added GPL dependency (5060)

**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.1...v0.4.2

0.4.1

What's Important

* Fixed console user input bug that affects `m1` and other apps that use console user input. 4995
* Improved component config by allowing subclassing the `BaseComponent` class. 5017 To read more about how to create your own component config to support serializable components: https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/framework/component-config.html
* Fixed `stop_reason` related bug by making the stop reason setting more robust 5027
* Disable `Console` output statistics by default.
* Minor doc fixes.

0.4

To upgrade from v0.2, read the [migration guide](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/migration-guide.html). For a basic setup:

bash
pip install -U "autogen-agentchat" "autogen-ext[openai]"


You can refer to our updated [README](https://github.com/microsoft/autogen/tree/main/README.md) for more information about the new API.
