Microsoft AutoGen was the framework that introduced "multi-agent conversation" as a category. v0.2 (2023) shaped the public conversation about agentic AI. v0.4 (2025) was a complete actor-model rewrite. And as of late 2025, AutoGen is in maintenance mode — Microsoft directs new users toward the Microsoft Agent Framework (MAF), "the enterprise-ready successor to AutoGen."
This is not a hidden detail. It's in the GitHub README. And it shapes the honest answer to "should I use AutoGen?" — which is: yes for research and prototyping, no for green-field production. The ideas in AutoGen are excellent and worth understanding even if you end up on MAF or another framework. This post explains what v0.4 actually is, why the rewrite was worth doing, and how the patterns translate.
The three layers
AutoGen v0.4 split the framework into three layers, and the split is genuinely useful:
- Core — "an event-driven programming framework for building scalable multi-agent AI systems." This is the actor-model runtime: typed messages, async dispatch, distributed-capable.
- AgentChat — "a programming framework for building conversational single and multi-agent applications. Built on Core." A simpler API for rapid prototyping.
- Extensions — model clients (e.g., OpenAIChatCompletionClient) and other third-party integrations.
Plus AutoGen Studio, a no-code GUI for prototyping. The README explicitly warns Studio is "not meant to be a production-ready app."
The mental model: agents are actors
AutoGen v0.4 adopted the Actor model — the same one that powers Erlang, Akka, Microsoft Orleans, and a long history of distributed systems. From the Microsoft Research blog: the runtime provides "asynchronous message exchange between agents" and "event-driven agents that perform computations in response to these messages," which "decouples how the messages are delivered between the agents from how the agents handle them."
Concretely: each agent is an independent actor. They send each other typed messages. A coordinator (round-robin, selector, swarm) picks who speaks next. The framework doesn't mandate a graph; it gives you a runtime with good message-passing primitives.
Where LangGraph gives you an explicit graph and CrewAI gives you roles, AutoGen gives you a message bus. Different ways of slicing the multi-agent problem.
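To make the actor idea concrete without pulling in the framework, here is a minimal sketch in plain Python — an illustration of the pattern, not AutoGen code. Each "agent" is an object with its own inbox and state, and message delivery (enqueueing) is decoupled from message handling (the agent's event loop), exactly the property the Microsoft Research blog describes.

```python
import asyncio
from dataclasses import dataclass


# A typed message: the only thing agents share.
@dataclass
class TextMessage:
    sender: str
    content: str


class EchoAgent:
    """A toy actor: holds its own state, computes only in response to messages."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.inbox: asyncio.Queue[TextMessage] = asyncio.Queue()
        self.seen: list[str] = []

    async def run(self) -> None:
        # Event-driven: wake up only when a message arrives.
        while True:
            msg = await self.inbox.get()
            self.seen.append(msg.content)
            self.inbox.task_done()


async def demo() -> list[str]:
    agent = EchoAgent("echo")
    worker = asyncio.create_task(agent.run())
    # Delivery is decoupled from handling: the sender just enqueues.
    await agent.inbox.put(TextMessage("user", "hello"))
    await agent.inbox.put(TextMessage("user", "world"))
    await agent.inbox.join()
    worker.cancel()
    return agent.seen


print(asyncio.run(demo()))  # ['hello', 'world']
```

AutoGen's runtime adds the parts this sketch omits — typed routing, agent identity, distribution — but the shape is the same: no shared graph, just actors exchanging messages.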
The core primitives
From autogen-agentchat:
- AssistantAgent — an LLM-powered agent with tools. The default building block.
- UserProxyAgent — represents the human in the loop.
- RoundRobinGroupChat — agents take turns in a fixed order.
- SelectorGroupChat — "centralized, customizable selector" — an LLM picks who speaks next on every turn.
- Swarm — "localized, tool-based selector" — agents hand off via tool calls.
- MagenticOneGroupChat — implementation of the Magentic-One pattern, a state-of-the-art multi-agent team for file- and web-related tasks.
- TextMentionTermination and other termination conditions — the loop stops when a specific phrase appears.
And from autogen-core (when you need finer-grained control):
- RoutedAgent with @message_handler decorators for typed message dispatch.
- SingleThreadedAgentRuntime for local execution.
A worked example: a primary agent with a critic
The canonical AutoGen pattern: two agents in a round-robin loop. One produces, the other critiques. The loop terminates when the critic signals approval. It's a clean self-improving pattern that works for writing, code review, ideation — anywhere "produce, critique, revise" is the right shape.
The code
```python
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(model="gpt-4o-2024-08-06")
    primary = AssistantAgent(
        "primary", model_client=model_client,
        system_message="You are a helpful AI assistant.")
    critic = AssistantAgent(
        "critic", model_client=model_client,
        system_message="Provide constructive feedback. Respond with 'APPROVE' when the work is good.")
    team = RoundRobinGroupChat(
        [primary, critic],
        termination_condition=TextMentionTermination("APPROVE"))
    result = await team.run(task="Write a short poem about fall.")
    print(result)


asyncio.run(main())
```

That's a complete two-agent self-improving loop. The primary writes; the critic reviews; the loop terminates when the critic says "APPROVE." Swap the system messages and the same pattern works for code review, marketing copy, technical writing — anywhere a produce-then-critique loop fits.
Multi-agent patterns: pick your coordinator
AutoGen v0.4's coordination is its strongest design contribution. Four built-in patterns:
- RoundRobinGroupChat — agents take turns in fixed order. Simplest, most predictable.
- SelectorGroupChat — an LLM picks who speaks next based on the conversation so far. Good for tasks where the right next speaker depends on the state.
- Swarm — agents hand off via tool calls. Closer to the OpenAI Agents SDK pattern.
- MagenticOneGroupChat — implementation of the Magentic-One pattern from Microsoft Research. A team of agents (orchestrator, file surfer, web surfer, coder, terminal) for complex web and file tasks.
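To see how the first two coordinators differ, here is the speaker-selection logic reduced to plain Python — an illustration of the idea, not AutoGen internals. Round-robin ignores the conversation entirely; a selector inspects it before choosing. A trivial keyword rule stands in for the LLM call that SelectorGroupChat actually makes:

```python
from itertools import cycle
from typing import Callable

Agents = list[str]
History = list[str]


def round_robin(agents: Agents) -> Callable[[History], str]:
    """Fixed order: the conversation state is never consulted."""
    order = cycle(agents)

    def pick(history: History) -> str:
        return next(order)

    return pick


def selector(agents: Agents,
             choose: Callable[[Agents, History], str]) -> Callable[[History], str]:
    """State-dependent: 'choose' stands in for the LLM selector prompt."""

    def pick(history: History) -> str:
        return choose(agents, history)

    return pick


# Stand-in rule: if the last message mentions code, hand to the coder.
def keyword_rule(agents: Agents, history: History) -> str:
    return "coder" if history and "code" in history[-1] else "writer"


rr = round_robin(["writer", "coder"])
print(rr([]), rr([]))  # writer coder

sel = selector(["writer", "coder"], keyword_rule)
print(sel(["please fix this code"]))  # coder
print(sel(["draft an intro"]))        # writer
```

Swarm is different again: there is no central pick function at all — the current agent's own tool call names the next speaker, which is why the docs call it a "localized" selector.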
v0.2 to v0.4: what the rewrite bought
v0.4 was a complete reimplementation. From the Microsoft Research blog, the user-driven goals were "greater modularity and the ability to reuse agents seamlessly" and "better support for debugging and scaling." The team chose to "question our assumptions and even possibly reimagine the platform" and landed on the actor model.
The wins are real: async messaging, modular layers (Core / AgentChat / Extensions), built-in observability, distributed scalability, and cross-language support (Python and .NET). v0.2 still works and has its own legacy docs, but every new project should start on v0.4.
The honest part: maintenance mode
The GitHub README is direct: AutoGen is in maintenance mode. Microsoft's active investment is going into the Microsoft Agent Framework (MAF), which the team frames as "the enterprise-ready successor to AutoGen." MAF inherits the actor-model architecture and adds enterprise tooling.
For green-field production work in 2026, MAF is the Microsoft-blessed path. AutoGen is still the right answer if:
- You're doing research where AutoGen patterns are well-established (especially Magentic-One).
- You're prototyping multi-agent shapes and want the simplest possible API.
- You have an existing AutoGen v0.4 deployment that's working — there's no reason to rewrite it on MAF unless you need MAF-specific features.
For everything else, evaluate LangGraph, CrewAI, OpenAI Agents SDK, or MAF first.
AutoGen Studio: prototype, don't deploy
AutoGen Studio is a no-code GUI for designing multi-agent teams. Drag-and-drop builder, real-time updates, flow visualizations. It's genuinely useful for showing non-developers what a multi-agent system looks like — and the docs explicitly say not to ship it. Use it as a whiteboard, then translate the design to code.
Where AutoGen fits next to the others
- AutoGen v0.4 — research and prototyping; existing deployments; multi-agent patterns from Microsoft Research (Magentic-One). New production work should evaluate MAF.
- LangGraph — most-used in production; durable execution; named customers at scale.
- CrewAI — fastest path to a role-based multi-agent app; strong enterprise platform (AMP).
- OpenAI Agents SDK — fastest path on OpenAI's stack; hosted tools and tracing.
Where on-device fits
AutoGen is server-side Python (or .NET, in the Microsoft port). Same hybrid pattern as the others: on-device for the user-facing surface, AutoGen behind it for the multi-agent reasoning. The actor runtime makes it easier than most frameworks to build agents that scale across a cluster — useful when the cloud-side work is genuinely heavy (overnight batch processing, multi-hour synthesis).
What to do with this
If you're learning multi-agent patterns from first principles, AutoGen v0.4 is one of the best teachers — the layered Core / AgentChat split makes the framework's ideas legible in a way most agent SDKs hide. Read the docs, build the round-robin example, then evaluate whether to ship on AutoGen or graduate to MAF / LangGraph / CrewAI based on production constraints.
If you're running AutoGen in production today, watch the MAF roadmap closely. Microsoft has been clear that AutoGen is in maintenance mode; new features will land on MAF. Plan a migration path on a year-or-two horizon, not an emergency one.
Further reading
- AutoGen v0.4 documentation — Core, AgentChat, Studio.
- microsoft/autogen on GitHub — framework source. README links to MAF.
- AutoGen — Microsoft Research — project page with research context.
- Magentic-One announcement — the multi-agent pattern that AutoGen exposes via MagenticOneGroupChat.
