Autogen

Latest version: v0.7.3


0.4.4

What's New

Serializable Configuration for AgentChat

* Make FunctionTools Serializable (Declarative) by victordibia in https://github.com/microsoft/autogen/pull/5052
* Make AgentChat Team Config Serializable by victordibia in https://github.com/microsoft/autogen/pull/5071
* improve component config, add description support in dump_component by victordibia in https://github.com/microsoft/autogen/pull/5203

This new feature allows you to serialize an agent or a team to a JSON string, and deserialize them back into objects. Make sure to also read about `save_state` and `load_state`: https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/state.html.

**You now can serialize and deserialize both the configurations and the state of agents and teams.**

For example, create a `RoundRobinGroupChat`, and serialize its configuration and state.

```python
import asyncio
import json

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console
from autogen_agentchat.conditions import TextMentionTermination
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def dump_team_config() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    assistant = AssistantAgent(
        "assistant",
        model_client=model_client,
        system_message="You are a helpful assistant.",
    )
    critic = AssistantAgent(
        "critic",
        model_client=model_client,
        system_message="Provide feedback. Reply with 'APPROVE' if the feedback has been addressed.",
    )
    termination = TextMentionTermination("APPROVE", sources=["critic"])
    group_chat = RoundRobinGroupChat(
        [assistant, critic], termination_condition=termination
    )
    # Run the group chat.
    await Console(group_chat.run_stream(task="Write a short poem about winter."))
    # Dump the team configuration to a JSON file.
    config = group_chat.dump_component()
    with open("team_config.json", "w") as f:
        f.write(config.model_dump_json(indent=4))
    # Dump the team state to a JSON file.
    state = await group_chat.save_state()
    with open("team_state.json", "w") as f:
        f.write(json.dumps(state, indent=4))


asyncio.run(dump_team_config())
```


This produces the serialized team configuration and state, truncated here for illustration.

```json
{
    "provider": "autogen_agentchat.teams.RoundRobinGroupChat",
    "component_type": "team",
    "version": 1,
    "component_version": 1,
    "description": "A team that runs a group chat with participants taking turns in a round-robin fashion\n    to publish a message to all.",
    "label": "RoundRobinGroupChat",
    "config": {
        "participants": [
            {
                "provider": "autogen_agentchat.agents.AssistantAgent",
                "component_type": "agent",
                "version": 1,
                "component_version": 1,
                "description": "An agent that provides assistance with tool use.",
                "label": "AssistantAgent",
                "config": {
                    "name": "assistant",
                    "model_client": {
                        "provider": "autogen_ext.models.openai.OpenAIChatCompletionClient",
                        "component_type": "model",
                        "version": 1,
                        "component_version": 1,
                        "description": "Chat completion client for OpenAI hosted models.",
                        "label": "OpenAIChatCompletionClient",
                        "config": {
                            "model": "gpt-4o"
                        }
```


```json
{
    "type": "TeamState",
    "version": "1.0.0",
    "agent_states": {
        "group_chat_manager/25763eb1-78b2-4509-8607-7224ae383575": {
            "type": "RoundRobinManagerState",
            "version": "1.0.0",
            "message_thread": [
                {
                    "source": "user",
                    "models_usage": null,
                    "content": "Write a short poem about winter.",
                    "type": "TextMessage"
                },
                {
                    "source": "assistant",
                    "models_usage": {
                        "prompt_tokens": 25,
                        "completion_tokens": 150
                    },
                    "content": "Amidst the still and silent air, \nWhere frost adorns the branches bare, \nThe world transforms in shades of white, \nA wondrous, shimmering, quiet sight.\n\nThe whisper of the wind is low, \nAs snowflakes drift and dance and glow. \nEach crystal, delicate and bright, \nFalls gently through the silver night.\n\nThe earth is hushed in pure embrace, \nA tranquil, glistening, untouched space. \nYet warmth resides in hearts that roam, \nFinding solace in the hearth of home.\n\nIn winter\u2019s breath, a promise lies, \nBeneath the veil of cold, clear skies: \nThat spring will wake the sleeping land, \nAnd life will bloom where now we stand.",
                    "type": "TextMessage"
```


Load the configuration and state back into objects.

```python
import asyncio
import json

from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.base import Team


async def load_team_config() -> None:
    # Load the team configuration from a JSON file.
    with open("team_config.json", "r") as f:
        config = json.load(f)
    group_chat = Team.load_component(config)
    # Load the team state from a JSON file.
    with open("team_state.json", "r") as f:
        state = json.load(f)
    await group_chat.load_state(state)
    assert isinstance(group_chat, RoundRobinGroupChat)


asyncio.run(load_team_config())
```


This new feature allows you to manage persistent sessions in server-client user interactions.

Azure AI Client for Azure-Hosted Models

* Feature/azure ai inference client by lspinheiro and rohanthacker in https://github.com/microsoft/autogen/pull/5153

This allows you to use Azure and GitHub-hosted models, including Phi-4, Mistral models, and Cohere models.

```python
import asyncio
import os

from autogen_core.models import UserMessage
from autogen_ext.models.azure import AzureAIChatCompletionClient
from azure.core.credentials import AzureKeyCredential


async def main() -> None:
    client = AzureAIChatCompletionClient(
        model="Phi-4",
        endpoint="https://models.inference.ai.azure.com",
        # To authenticate with the model you will need to generate a personal access token (PAT) in your GitHub settings.
        # Create your PAT token by following instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens
        credential=AzureKeyCredential(os.environ["GITHUB_TOKEN"]),
        model_info={
            "json_output": False,
            "function_calling": False,
            "vision": False,
            "family": "unknown",
        },
    )
    result = await client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result)


asyncio.run(main())
```


Rich Console UI for Magentic One CLI

* RichConsole: Prettify m1 CLI console using rich 4806 by gziz in https://github.com/microsoft/autogen/pull/5123

You can now enable pretty-printed output for the `m1` command-line tool by adding the `--rich` argument.

```bash
m1 --rich "Find information about AutoGen"
```


<img width="1091" alt="Screenshot 2025-01-28 191752" src="https://github.com/user-attachments/assets/18a7fa9f-158e-4531-b449-b16c2f0c5c2b" />


Default In-Memory Cache for ChatCompletionCache

* Implement default in-memory store for ChatCompletionCache by srjoglekar246 in https://github.com/microsoft/autogen/pull/5188

This allows you to cache model client calls without specifying an external cache service.

```python
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache


async def main() -> None:
    # Create a model client.
    client = OpenAIChatCompletionClient(model="gpt-4o")

    # Create a cached wrapper around the model client.
    cached_client = ChatCompletionCache(client)

    # Call the cached client.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)

    # Call the cached client again; this time the result comes from the cache.
    result = await cached_client.create(
        [UserMessage(content="What is the capital of France?", source="user")]
    )
    print(result.content, result.cached)


asyncio.run(main())
```



```
The capital of France is Paris. False
The capital of France is Paris. True
```


Docs Update
* Update model client documentation add Ollama, Gemini, Azure AI models by ekzhu in https://github.com/microsoft/autogen/pull/5196
* Add Model Client Cache section to migration guide by ekzhu in https://github.com/microsoft/autogen/pull/5197
* docs: Enhance documentation for SingleThreadedAgentRuntime with usage examples and clarifications; undeprecate process_next by ekzhu in https://github.com/microsoft/autogen/pull/5230
* docs: Update user guide notebooks to enhance clarity and add structured output by ekzhu in https://github.com/microsoft/autogen/pull/5224
* docs: Core API doc update: split out model context from model clients; separate framework and components by ekzhu in https://github.com/microsoft/autogen/pull/5171
* docs: Add a helpful comment to swarm.ipynb by withsmilo in https://github.com/microsoft/autogen/pull/5145


Bug Fixes
* fix: update SK model adapter constructor by lspinheiro in https://github.com/microsoft/autogen/pull/5150. This allows the SK Model Client to be used inside an `AssistantAgent`.
* Fix function tool naming to avoid overriding the name input by Pierrolo in https://github.com/microsoft/autogen/pull/5165
* fix: Enhance OpenAI client to handle additional stop reasons and improve tool call validation in tests to address empty tool_calls list. by ekzhu in https://github.com/microsoft/autogen/pull/5223


Other Changes
* Make ChatAgent an ABC by jackgerrits in https://github.com/microsoft/autogen/pull/5129
* Update website for 0.4.3 by jackgerrits in https://github.com/microsoft/autogen/pull/5139
* Make Memory and Team an ABC by victordibia in https://github.com/microsoft/autogen/pull/5149
* Closes 5059 by fbpazos in https://github.com/microsoft/autogen/pull/5156
* Update proto to include remove sub, move to rpc based operations by jackgerrits in https://github.com/microsoft/autogen/pull/5168
* Add dependencies to distributed group chat example by MohMaz in https://github.com/microsoft/autogen/pull/5175
* Communicate client id via metadata in grpc runtime by jackgerrits in https://github.com/microsoft/autogen/pull/5185
* Fixed typo fixing issue 5186 by raimondasl in https://github.com/microsoft/autogen/pull/5187
* Improve grpc type checking by jackgerrits in https://github.com/microsoft/autogen/pull/5189
* Impl register and add sub RPC by jackgerrits in https://github.com/microsoft/autogen/pull/5191
* rysweet-unsubscribe-and-agent-tests-4744 by rysweet in https://github.com/microsoft/autogen/pull/4920
* make AssistantAgent and Handoff use BaseTool by victordibia in https://github.com/microsoft/autogen/pull/5193
* docs: s/Exisiting/Existing/g by bih in https://github.com/microsoft/autogen/pull/5202
* Rysweet 5201 refactor runtime interface by rysweet in https://github.com/microsoft/autogen/pull/5204
* Update model client documentation add Ollama, Gemini, Azure AI models by ekzhu in https://github.com/microsoft/autogen/pull/5196
* Rysweet 5207 net runtime interface to match python add registration to interface and inmemoryruntime by rysweet in https://github.com/microsoft/autogen/pull/5215
* Rysweet 5217 add send message by rysweet in https://github.com/microsoft/autogen/pull/5219
* Update literature-review.ipynb to fix possible copy-and-paste error by xtophs in https://github.com/microsoft/autogen/pull/5214
* Updated docs for _azure_ai_client.py by rohanthacker in https://github.com/microsoft/autogen/pull/5199
* Refactor Dotnet core to align with Python by jackgerrits in https://github.com/microsoft/autogen/pull/5225
* Remove channel based control plane APIs, cleanup proto by jackgerrits in https://github.com/microsoft/autogen/pull/5236
* update versions to 0.4.4 and m1 cli to 0.2.3 by ekzhu in https://github.com/microsoft/autogen/pull/5229
* feat: Enable queueing and step mode in InProcessRuntime by lokitoth in https://github.com/microsoft/autogen/pull/5239
* feat: Expose self-delivery for InProcessRuntime in AgentsApp by lokitoth in https://github.com/microsoft/autogen/pull/5240
* refactor: Reduce reflection calls when using HandlerInvoker by lokitoth in https://github.com/microsoft/autogen/pull/5241
* fix: Various fixes and cleanups to dotnet autogen core by bassmang in https://github.com/microsoft/autogen/pull/5242
* Start from just protos in core.grpc by jackgerrits in https://github.com/microsoft/autogen/pull/5243

New Contributors
* fbpazos made their first contribution in https://github.com/microsoft/autogen/pull/5156
* withsmilo made their first contribution in https://github.com/microsoft/autogen/pull/5145
* Pierrolo made their first contribution in https://github.com/microsoft/autogen/pull/5165
* raimondasl made their first contribution in https://github.com/microsoft/autogen/pull/5187
* bih made their first contribution in https://github.com/microsoft/autogen/pull/5202
* xtophs made their first contribution in https://github.com/microsoft/autogen/pull/5214

**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.3...v0.4.4

0.4.3

What's new

This is the first release since 0.4.0 with significant new features! We look forward to hearing feedback and suggestions from the community.

Chat completion model cache

One of the big missing features from 0.2 was the ability to seamlessly cache model client completions. This release adds [`ChatCompletionCache`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.cache.html#autogen_ext.models.cache.ChatCompletionCache) which can wrap **any** other [`ChatCompletionClient`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.models.html#autogen_core.models.ChatCompletionClient) and cache completions.

There is a [`CacheStore`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.html#autogen_core.CacheStore) interface to allow for easy implementation of new caching backends. The currently available implementations are:

- [`DiskCacheStore`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.cache_store.diskcache.html#autogen_ext.cache_store.diskcache.DiskCacheStore)
- [`RedisStore`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.cache_store.redis.html#autogen_ext.cache_store.redis.RedisStore)
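Conceptually, a cache backend only needs string-keyed `get`/`set` semantics. The following dict-backed sketch illustrates that shape in plain Python; it is a hypothetical illustration of the idea, not the shipped `CacheStore` API.

```python
from typing import Dict, Generic, Optional, TypeVar

T = TypeVar("T")


class InMemoryStore(Generic[T]):
    """Dict-backed store following the get/set shape of a cache backend."""

    def __init__(self) -> None:
        self._data: Dict[str, T] = {}

    def get(self, key: str, default: Optional[T] = None) -> Optional[T]:
        # Return the cached value, or the default when the key is missing.
        return self._data.get(key, default)

    def set(self, key: str, value: T) -> None:
        # Overwrite any existing entry for this key.
        self._data[key] = value


store = InMemoryStore[str]()
store.set("prompt-hash", "cached completion")
print(store.get("prompt-hash"))  # cached completion
print(store.get("missing", "fallback"))  # fallback
```

A real backend (disk, Redis) swaps the dict for durable storage while keeping the same two methods.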

```python
import asyncio
import tempfile

from autogen_core.models import UserMessage
from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_ext.models.cache import ChatCompletionCache, CHAT_CACHE_VALUE_TYPE
from autogen_ext.cache_store.diskcache import DiskCacheStore
from diskcache import Cache


async def main():
    with tempfile.TemporaryDirectory() as tmpdirname:
        openai_model_client = OpenAIChatCompletionClient(model="gpt-4o")

        cache_store = DiskCacheStore[CHAT_CACHE_VALUE_TYPE](Cache(tmpdirname))
        cache_client = ChatCompletionCache(openai_model_client, cache_store)

        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print response from OpenAI
        response = await cache_client.create([UserMessage(content="Hello, how are you?", source="user")])
        print(response)  # Should print cached response


asyncio.run(main())
```


`ChatCompletionCache` is not yet supported by the declarative component config, see the [issue](https://github.com/microsoft/autogen/issues/5141) to track progress.

4924 by srjoglekar246

GraphRAG

This release adds support for GraphRAG as a tool agents can call. You can find a sample for how to use this integration [here](https://github.com/microsoft/autogen/tree/main/python/samples/agentchat_graphrag), and docs for [`LocalSearchTool`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.tools.graphrag.html#autogen_ext.tools.graphrag.LocalSearchTool) and [`GlobalSearchTool`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.tools.graphrag.html#autogen_ext.tools.graphrag.GlobalSearchTool).

```python
import asyncio

from autogen_ext.models.openai import OpenAIChatCompletionClient
from autogen_agentchat.ui import Console
from autogen_ext.tools.graphrag import GlobalSearchTool
from autogen_agentchat.agents import AssistantAgent


async def main():
    # Initialize the OpenAI client
    openai_client = OpenAIChatCompletionClient(
        model="gpt-4o-mini",
    )

    # Set up the global search tool
    global_tool = GlobalSearchTool.from_settings(settings_path="./settings.yaml")

    # Create an assistant agent with the global search tool
    assistant_agent = AssistantAgent(
        name="search_assistant",
        tools=[global_tool],
        model_client=openai_client,
        system_message=(
            "You are a tool selector AI assistant using the GraphRAG framework. "
            "Your primary task is to determine the appropriate search tool to call based on the user's query. "
            "For broader, abstract questions requiring a comprehensive understanding of the dataset, call the 'global_search' function."
        ),
    )

    # Run a sample query
    query = "What is the overall sentiment of the community reports?"
    await Console(assistant_agent.run_stream(task=query))


if __name__ == "__main__":
    asyncio.run(main())
```


4612 by lspinheiro

Semantic Kernel model adapter

Semantic Kernel has an [extensive collection of AI connectors](https://github.com/microsoft/semantic-kernel/tree/main/python/semantic_kernel/connectors/ai#readme). In this release we added support to adapt a Semantic Kernel AI Connector to an AutoGen ChatCompletionClient using the [`SKChatCompletionAdapter`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.models.semantic_kernel.html#autogen_ext.models.semantic_kernel.SKChatCompletionAdapter).

Currently this requires passing the kernel during create, and so cannot be used with `AssistantAgent` directly yet. This will be fixed in a future release (5144).
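The adapter's job is to translate between two client shapes: AutoGen-style messages in, a connector-specific call out, and the result mapped back. Here is a minimal pure-Python sketch of that wrapping pattern; the names (`ConnectorAdapter`, `echo_connector`) are hypothetical stand-ins, not the Semantic Kernel or AutoGen APIs.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class UserMessage:
    # Simplified stand-in for an AutoGen-style chat message.
    content: str
    source: str


class ConnectorAdapter:
    """Wraps a backend 'connector' callable behind a create()-style interface."""

    def __init__(self, connector: Callable[[str], str]) -> None:
        self._connector = connector

    def create(self, messages: List[UserMessage]) -> str:
        # Flatten the chat messages into the prompt format the backend expects.
        prompt = "\n".join(f"{m.source}: {m.content}" for m in messages)
        return self._connector(prompt)


# A fake backend standing in for a Semantic Kernel AI connector.
def echo_connector(prompt: str) -> str:
    return f"echo:{prompt}"


adapter = ConnectorAdapter(echo_connector)
print(adapter.create([UserMessage(content="hi", source="user")]))  # echo:user: hi
```

The real `SKChatCompletionAdapter` additionally threads the kernel through the call, which is what the current limitation above is about.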

4851 by lspinheiro

AutoGen to Semantic Kernel tool adapter

We also added a tool adapter, but this time to allow AutoGen tools to be added to a Kernel, called [`KernelFunctionFromTool`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.tools.semantic_kernel.html#autogen_ext.tools.semantic_kernel.KernelFunctionFromTool).

4851 by lspinheiro

Jupyter Code Executor

This release also brings forward Jupyter code executor functionality that we had in 0.2, as the [`JupyterCodeExecutor`](https://microsoft.github.io/autogen/stable/reference/python/autogen_ext.code_executors.jupyter.html#autogen_ext.code_executors.jupyter.JupyterCodeExecutor).

Please note that this currently only supports **local** execution and should be used with caution.

4885 by Leon0402

Memory

It's still early, but this release merges the interface for agent memory. It allows agents to enrich their context from a memory store and save information back to it. The interface is defined in Core, and `AssistantAgent` in AgentChat now accepts a memory parameter. There is an initial example memory implementation which simply injects all memories as system messages for the agent. The intention is for the memory interface to serve both RAG and agent memory systems going forward.

- [Tutorial](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/memory.html)
- Core [`Memory`](https://microsoft.github.io/autogen/stable/reference/python/autogen_core.memory.html#autogen_core.memory.Memory) interface
- Existing [`AssistantAgent`](https://microsoft.github.io/autogen/stable/reference/python/autogen_agentchat.agents.html#autogen_agentchat.agents.AssistantAgent) with new memory parameter

4438 by victordibia, 5053 by ekzhu
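The example implementation's strategy of injecting memories as system messages can be sketched in plain Python; the function name and dict-based message shape below are hypothetical illustrations, not the `Memory` API itself.

```python
from typing import Dict, List


def inject_memories(
    memories: List[str], messages: List[Dict[str, str]]
) -> List[Dict[str, str]]:
    """Prepend each stored memory as a system message, keeping chat order intact."""
    memory_messages = [{"role": "system", "content": m} for m in memories]
    return memory_messages + messages


chat = [{"role": "user", "content": "What should I pack?"}]
enriched = inject_memories(["The user prefers warm destinations."], chat)
print(enriched[0]["content"])  # The user prefers warm destinations.
```

A RAG-style memory would retrieve only the memories relevant to the latest message instead of injecting all of them.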

Declarative config

We're continuing to expand support for declarative configs throughout the framework. In this release, we've added support for termination conditions and base chat agents. Once we're done with this, you'll be able to configure an entire team of agents with a single config file and have it work seamlessly with AutoGen Studio. Stay tuned!

4984, 5055 by victordibia

Other

* Add sources field to TextMentionTermination by Leon0402 in https://github.com/microsoft/autogen/pull/5106
* Update gpt-4o model version to 2024-08-06 by ekzhu in https://github.com/microsoft/autogen/pull/5117

Bug fixes
* Retry multiple times when M1 selects an invalid agent. Make agent sel… by afourney in https://github.com/microsoft/autogen/pull/5079
* fix: normalize finish reason in CreateResult response by ekzhu in https://github.com/microsoft/autogen/pull/5085
* Pass context between AssistantAgent for handoffs by ekzhu in https://github.com/microsoft/autogen/pull/5084
* fix: ensure proper handling of structured output in OpenAI client and improve test coverage for structured output by ekzhu in https://github.com/microsoft/autogen/pull/5116
* fix: use tool_calls field to detect tool calls in OpenAI client; add integration tests for OpenAI and Gemini by ekzhu in https://github.com/microsoft/autogen/pull/5122

Other changes

* Update website for 0.4.1 by jackgerrits in https://github.com/microsoft/autogen/pull/5031
* PoC AGS dev container by JohanForngren in https://github.com/microsoft/autogen/pull/5026
* Update studio dep by ekzhu in https://github.com/microsoft/autogen/pull/5062
* Update studio dep to use version bound by ekzhu in https://github.com/microsoft/autogen/pull/5063
* Update gpt-4o model version and add new model details by keenranger in https://github.com/microsoft/autogen/pull/5056
* Improve AGS Documentation by victordibia in https://github.com/microsoft/autogen/pull/5065
* Pin uv to 0.5.18 by jackgerrits in https://github.com/microsoft/autogen/pull/5067
* Update version to 0.4.3 pre-emptively by jackgerrits in https://github.com/microsoft/autogen/pull/5066
* fix: dotnet azure pipeline (uv sync installation) by bassmang in https://github.com/microsoft/autogen/pull/5042
* docs: .NET Documentation by lokitoth in https://github.com/microsoft/autogen/pull/5039
* [Documentation] Update tools.ipynb: use system messages in the tool_agent_caller_loop session by zysoong in https://github.com/microsoft/autogen/pull/5068
* docs: enhance agents.ipynb with parallel tool calls section by ekzhu in https://github.com/microsoft/autogen/pull/5088
* Use caching to run tests and report coverage by lspinheiro in https://github.com/microsoft/autogen/pull/5086
* fix: ESPR dotnet code signing by bassmang in https://github.com/microsoft/autogen/pull/5081
* Update AGS pyproject.toml by victordibia in https://github.com/microsoft/autogen/pull/5101
* docs: update AssistantAgent documentation with a new figure, attention and warning notes by ekzhu in https://github.com/microsoft/autogen/pull/5099
* Rysweet fix integration tests and xlang by rysweet in https://github.com/microsoft/autogen/pull/5107
* docs: enhance Swarm user guide with notes on tool calling by ekzhu in https://github.com/microsoft/autogen/pull/5103
* fix a small typo by marinator86 in https://github.com/microsoft/autogen/pull/5120

New Contributors
* lokitoth made their first contribution in https://github.com/microsoft/autogen/pull/5060
* keenranger made their first contribution in https://github.com/microsoft/autogen/pull/5056
* zysoong made their first contribution in https://github.com/microsoft/autogen/pull/5068
* marinator86 made their first contribution in https://github.com/microsoft/autogen/pull/5120

**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.1...v0.4.3

0.4.2

- Change async input strategy in order to remove unintentional and accidentally added GPL dependency (5060)

**Full Changelog**: https://github.com/microsoft/autogen/compare/v0.4.1...v0.4.2

0.4.1

What's Important

* Fixed console user input bug that affects `m1` and other apps that use console user input. 4995
* Improved component config by allowing subclassing the `BaseComponent` class. 5017 To read more about how to create your own component config to support serializable components: https://microsoft.github.io/autogen/stable/user-guide/core-user-guide/framework/component-config.html
* Fixed `stop_reason` related bug by making the stop reason setting more robust 5027
* Disable `Console` output statistics by default.
* Minor doc fixes.

0.4

To upgrade from v0.2, read the [migration guide](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/migration-guide.html). For a basic setup:

```bash
pip install -U "autogen-agentchat" "autogen-ext[openai]"
```


You can refer to our updated [README](https://github.com/microsoft/autogen/tree/main/README.md) for more information about the new API.

0.4.0

❤️ Big thanks to all the contributors since the first preview version was open sourced. ❤️
* lspinheiro made their first contribution in https://github.com/microsoft/autogen/pull/3652
* husseinmozannar made their first contribution in https://github.com/microsoft/autogen/pull/3714
* NiklasGustafsson made their first contribution in https://github.com/microsoft/autogen/pull/3727
* maxgolov made their first contribution in https://github.com/microsoft/autogen/pull/3758
* vikas434 made their first contribution in https://github.com/microsoft/autogen/pull/3770
* tarockey made their first contribution in https://github.com/microsoft/autogen/pull/3813
* zboyles made their first contribution in https://github.com/microsoft/autogen/pull/3855
* markdouthwaite made their first contribution in https://github.com/microsoft/autogen/pull/3871
* gziz made their first contribution in https://github.com/microsoft/autogen/pull/3876
* SeryioGonzalez made their first contribution in https://github.com/microsoft/autogen/pull/3901
* rohanthacker made their first contribution in https://github.com/microsoft/autogen/pull/3929
* Ucoming made their first contribution in https://github.com/microsoft/autogen/pull/3979
* auphof made their first contribution in https://github.com/microsoft/autogen/pull/3972
* ReubenBond made their first contribution in https://github.com/microsoft/autogen/pull/4034
* maheshpec made their first contribution in https://github.com/microsoft/autogen/pull/4070
* mbaneshi made their first contribution in https://github.com/microsoft/autogen/pull/4168
* hasamm90 made their first contribution in https://github.com/microsoft/autogen/pull/4170
* tsinggggg made their first contribution in https://github.com/microsoft/autogen/pull/4130
* genlin made their first contribution in https://github.com/microsoft/autogen/pull/4205
* kkasemos made their first contribution in https://github.com/microsoft/autogen/pull/4218
* JMLX42 made their first contribution in https://github.com/microsoft/autogen/pull/4201
* ksachdeva made their first contribution in https://github.com/microsoft/autogen/pull/4265
* MervinPraison made their first contribution in https://github.com/microsoft/autogen/pull/4280
* thainduy made their first contribution in https://github.com/microsoft/autogen/pull/4123
* goyalpramod made their first contribution in https://github.com/microsoft/autogen/pull/4149
* kartikx made their first contribution in https://github.com/microsoft/autogen/pull/4336
* wi-ski made their first contribution in https://github.com/microsoft/autogen/pull/4102
* timparka made their first contribution in https://github.com/microsoft/autogen/pull/4432
* vballoli made their first contribution in https://github.com/microsoft/autogen/pull/4548
* eranco74 made their first contribution in https://github.com/microsoft/autogen/pull/4639
* hsm207 made their first contribution in https://github.com/microsoft/autogen/pull/4655
* iamarunbrahma made their first contribution in https://github.com/microsoft/autogen/pull/4500
* inbal2l made their first contribution in https://github.com/microsoft/autogen/pull/4717
* r-bit-rry made their first contribution in https://github.com/microsoft/autogen/pull/4759
* jspv made their first contribution in https://github.com/microsoft/autogen/pull/4755
* akurniawan made their first contribution in https://github.com/microsoft/autogen/pull/4681
* lanbaoshen made their first contribution in https://github.com/microsoft/autogen/pull/4767
* srjoglekar246 made their first contribution in https://github.com/microsoft/autogen/pull/4801
* richard-gyiko made their first contribution in https://github.com/microsoft/autogen/pull/4826
* kimmywork made their first contribution in https://github.com/microsoft/autogen/pull/4732
* Leon0402 made their first contribution in https://github.com/microsoft/autogen/pull/4848
* w121211 made their first contribution in https://github.com/microsoft/autogen/pull/4874
* PratyushNag made their first contribution in https://github.com/microsoft/autogen/pull/4903
