
AutoGen vs LangGraph: Which Agent Framework to Use?

AutoGen by Microsoft models agents as ConversableAgents that chat with each other. LangGraph is LangChain's stateful workflow framework — a graph of nodes (functions) connected by edges with shared state. Here is how they compare — paradigm, ecosystem, and the use cases each one is actually built for.

By the numbers

AutoGen

  • GitHub Stars: 56.7k
  • Forks: 8.5k
  • Language: Python
  • License: CC-BY-4.0
  • Created: 2023-08-18
  • Created by: Microsoft Research

github.com/microsoft/autogen

LangGraph

  • GitHub Stars: 18.9k
  • Forks: 3.4k
  • Language: Python
  • License: MIT
  • Created: 2024-01-17
  • Created by: LangChain Inc (Harrison Chase)
  • Backed by: Sequoia Capital, Benchmark
  • Funding: Part of LangChain Inc ($50M raised across Series A and B)
  • Weekly downloads: 8.2M
  • Cloud/SaaS: LangGraph Platform (hosted), LangSmith (observability)
  • Production ready: Yes
  • Used by: Replit, Klarna, Elastic

github.com/langchain-ai/langgraph

GitHub stats as of April 2026. Stars indicate community interest, not necessarily quality or fit for your use case.

| Concept | AutoGen | LangGraph |
| --- | --- | --- |
| Agent | `ConversableAgent` with `system_message`, `llm_config` | A `StateGraph` with nodes, edges, and a typed `State` channel |
| Tools | `register_for_llm()` and `register_for_execution()` | `ToolNode(tools)` paired with a conditional edge for routing |
| Conversation | Two-agent chat with `initiate_chat()`, message history | — |
| Multi-Agent | `GroupChat` with `GroupChatManager`, speaker selection | — |
| Nested Chats | `register_nested_chats()` for sub-task handling | — |
| Termination | `is_termination_msg` callback, `max_consecutive_auto_reply` | — |
| Loop | — | `add_conditional_edges` from a node back to itself until an `END` condition |
| State | — | Typed `State` channels with reducers (`Annotated[list, add_messages]`) |
| Checkpointing | — | `MemorySaver` / `PostgresSaver` persists state per `thread_id` |
| Human-in-loop | — | `interrupt_before` / `interrupt_after` pauses execution for review |
| Parallel fanout | — | Multiple edges from one node + reducers merge results |

AutoGen vs LangGraph, head to head

Paradigm

AutoGen models agents as ConversableAgent instances that chat with each other — a GroupChat plus GroupChatManager picks the next speaker, and register_nested_chats() spawns sub-conversations. LangGraph models agents as a StateGraph of nodes and edges over a typed State channel, with conditional edges and reducers like Annotated[list, add_messages].

One thinks in dialogue, the other in state machines. AutoGen's primitive is the message; LangGraph's primitive is the node transition.
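Stripped of all framework detail, the two primitives can be sketched in a few lines of plain Python. This is a hedged illustration of the paradigms, not either library's actual API; every name here is made up:

```python
# Framework-free sketch of the two paradigms. All names are illustrative,
# not AutoGen's or LangGraph's real APIs.

# AutoGen-style: the primitive is a message. Agents reply to the last
# message, and the conversation history *is* the program state.
def author(msg): return msg + " draft"
def critic(msg): return "revise" if "draft" in msg else "approve"

history = ["write intro"]
history.append(author(history[-1]))   # author replies to the last message
history.append(critic(history[-1]))   # critic replies in turn

# LangGraph-style: the primitive is a node transition. Each node takes the
# shared state and returns an updated state plus the next node to run.
state = {"messages": ["write intro"], "next": "author"}

def author_node(s): return {**s, "messages": s["messages"] + ["draft"], "next": "critic"}
def critic_node(s): return {**s, "messages": s["messages"] + ["revise"], "next": "END"}

nodes = {"author": author_node, "critic": critic_node}
while state["next"] != "END":          # conditional edges pick the next node
    state = nodes[state["next"]](state)
```

Same author/critic pair, two different programs: in the first, control flow is implicit in who replied last; in the second, it is explicit in the `next` field.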

Ecosystem

AutoGen ships from Microsoft Research (CC-BY-4.0, ~57k stars) with a code execution sandbox and v0.4 rewrite for scale. LangGraph ships from LangChain Inc (MIT, ~19k stars) and plugs into the LangChain tool/memory layer plus LangSmith tracing and the hosted LangGraph Platform.

LangGraph has the deeper production story: PostgresSaver checkpoints, time-travel debugging, and listed users like Replit and Klarna. AutoGen's story is research-flavored multi-agent patterns with enterprise backing.
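The idea behind those PostgresSaver checkpoints is simple enough to sketch: keyed by `thread_id`, the latest state snapshot is persisted so a crashed or paused run can resume. This is a conceptual sketch only (an in-memory dict standing in for Postgres, none of it LangGraph's real API):

```python
# Per-thread checkpointing, sketched: thread_id -> latest state snapshot.
# In production this is a Postgres table; here a dict stands in.
checkpoints = {}

def save(thread_id, state):
    checkpoints[thread_id] = dict(state)       # persist a copy of the state

def resume(thread_id):
    return dict(checkpoints.get(thread_id, {"step": 0}))

state = resume("thread-42")
for _ in range(3):                             # run a few steps, saving each
    state["step"] += 1
    save("thread-42", state)

# A fresh process picks up from the last checkpoint, not from zero:
restored = resume("thread-42")
```

The non-trivial part the framework adds is doing this atomically per node transition, which is exactly what makes "bolt it on later" painful.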

Use case

Reach for AutoGen when the interesting part is agents debating — author/reviewer loops, planner/executor pairs, dynamic speaker selection where you don't know the order in advance. Reach for LangGraph when the interesting part is the workflow shape — explicit branches, parallel fanout, checkpointed pause/resume, interrupt_before for human approval.

A two-agent critique loop is awkward in LangGraph (you'd model speakers as nodes). A long-running approval pipeline with retries is awkward in AutoGen (no first-class checkpointing). Pick the one whose primitive matches your problem shape.
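To make the "speakers as nodes" awkwardness concrete, here is what a critique loop looks like forced into state-machine shape. A hedged, framework-free sketch; all names are illustrative:

```python
# A two-agent critique loop as a state machine: each speaker becomes a
# node, and a conditional edge bounces control between them until the
# critic approves. Illustrative only; not LangGraph's actual API.
def author(state):
    state["draft"] = f"draft v{state['round']}"
    return "critic"                      # edge: author always hands to critic

def critic(state):
    state["round"] += 1
    return "END" if state["round"] >= 3 else "author"  # approve on round 3

nodes = {"author": author, "critic": critic}
state, current = {"round": 1, "draft": ""}, "author"
while current != "END":
    current = nodes[current](state)
```

It works, but the dialogue is gone: what AutoGen expresses as two agents talking becomes edge-routing bookkeeping.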

Pick AutoGen if

Pick AutoGen if your project lives or dies on multiple agents talking to each other in patterns you can't pre-script.

  • Dynamic speaker selection: GroupChatManager with LLM-based routing handles "who should respond?" decisions that would be tedious to hand-roll across a 5-agent debate.
  • Nested sub-conversations: register_nested_chats() lets one agent spin up a side conversation to resolve a subtask, then return — useful for planner/executor or author/critic loops.
  • Code-writing agents: AutoGen's built-in execution sandbox is a real time-saver if your agents need to write, run, and iterate on code as part of the conversation.
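The speaker-selection job the GroupChatManager does can be sketched as a routing function over the conversation. In AutoGen an LLM makes this choice; here a keyword rule stands in, and every name is hypothetical rather than AutoGen's API:

```python
# Sketch of GroupChat-style dynamic speaker selection: a manager function
# inspects the last message and picks who responds next. AutoGen's
# GroupChatManager makes this choice with an LLM; a keyword rule stands in.
agents = {
    "planner":  lambda msg: "plan: research then summarize",
    "research": lambda msg: "findings: three relevant sources",
    "writer":   lambda msg: "summary complete. TERMINATE",
}

def choose_speaker(last_msg):                # the manager's "who's next?" call
    if "findings" in last_msg: return "writer"
    if "plan" in last_msg:     return "research"
    return "planner"

history = ["summarize recent work on agent frameworks"]
while "TERMINATE" not in history[-1]:        # is_termination_msg analogue
    speaker = choose_speaker(history[-1])
    history.append(agents[speaker](history[-1]))
```

The point of the framework is that `choose_speaker` gets replaced by an LLM decision you don't have to hand-roll across five agents.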
Full AutoGen comparison →

Pick LangGraph if

Pick LangGraph if your agent is a workflow with explicit branches, persistence, or human gates — not just a tool-calling loop.

  • Checkpointed pause/resume: MemorySaver and PostgresSaver per thread_id survive crashes and long async waits — non-trivial to bolt on later.
  • Human-in-the-loop review: interrupt_before and interrupt_after give you approval gates with state inspection; LangSmith shows node-by-node diffs.
  • Parallel fanout with merge: multiple edges from one node plus reducers handle research-style "search 5 sources in parallel, judge, decide" flows where you'd otherwise wire asyncio.gather and merge logic by hand.
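The fanout-and-merge pattern from that last bullet, sketched without the framework: `asyncio.gather` does the fanout and a list-append reducer does the merge, which is the job `Annotated[list, add_messages]` does declaratively in LangGraph. All names here are illustrative:

```python
import asyncio

# Framework-free fanout + merge: N branches run concurrently, and a
# reducer folds their results into one state channel.
async def search(source):
    await asyncio.sleep(0)               # stand-in for a real network call
    return f"result from {source}"

async def fanout(sources):
    results = await asyncio.gather(*(search(s) for s in sources))
    state = {"messages": []}
    for r in results:                    # reducer: append each branch's output
        state["messages"] = state["messages"] + [r]
    return state

state = asyncio.run(fanout(["arxiv", "github", "docs"]))
```

This is the wiring LangGraph saves you, plus the parts the sketch skips: partial failures, retries, and checkpointing mid-fanout.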
Full LangGraph comparison →

What both add

Both frameworks bring real conceptual surface area. AutoGen wants you to think in ConversableAgent, initiate_chat, is_termination_msg, and speaker selection policies. LangGraph wants you to think in nodes, edges, typed state channels, reducers, and checkpointers. That's a week of ramp-up before you ship anything that wasn't already a few hundred lines of code.

Both also pull in transitive dependencies and lock you into a runtime model that's hard to back out of once your tools, memory, and tracing are wired through it. Worth paying when the abstraction matches your problem; expensive when it doesn't.

Or build your own in 60 lines

Both AutoGen and LangGraph implement the same 8 patterns. An agent is a function. Tools are a dict. The loop is a while loop. The whole thing composes in ~60 lines of Python.

No framework. No dependencies. No opinions. Just the code.
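As a taste, here is a hedged, minimal version of that loop with a stubbed model standing in for a real LLM call. Everything here is illustrative; swap `model` for your provider's API:

```python
# Minimal agent, no framework: the agent is a function, tools are a dict,
# and the loop is a while/for loop. `model` is a stub in place of an LLM.
tools = {
    "add":   lambda a, b: a + b,
    "upper": lambda s: s.upper(),
}

def model(history):
    # Stub "LLM": issues one tool call, then finishes. A real model would
    # decide this from the conversation instead of this hard-coded rule.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": (2, 3)}
    return {"final": f"the answer is {history[-1]['content']}"}

def agent(task, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):           # the loop
        action = model(history)
        if "final" in action:            # termination condition
            return action["final"]
        result = tools[action["tool"]](*action["args"])
        history.append({"role": "tool", "content": result})
    return "max steps reached"

answer = agent("what is 2 + 3?")
```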

Build it from scratch →