
AutoGen vs Rasa: Which Agent Framework to Use?

AutoGen, by Microsoft, models agents as `ConversableAgent`s that chat with each other. Rasa is an open-source framework for building conversational AI — chatbots and virtual assistants. Here is how they compare — and what the same patterns look like in plain Python.

By the numbers

AutoGen

GitHub Stars

56.7k

Forks

8.5k

Language

Python

License

MIT

Created

2023-08-18

Created by

Microsoft Research

github.com/microsoft/autogen

Rasa

GitHub Stars

21.1k

Forks

4.9k

Language

Python

License

Apache-2.0

Created

2016-10-14

Created by

Rasa Technologies

Cloud/SaaS

Rasa Pro / Rasa Cloud

Production ready

Yes

github.com/RasaHQ/rasa

GitHub stats as of April 2026. Stars indicate community interest, not necessarily quality or fit for your use case.

| Concept | AutoGen | Rasa | Plain Python |
|---|---|---|---|
| Agent | `ConversableAgent` with `system_message`, `llm_config` | Rasa agent with NLU pipeline, dialogue policies, and action server | A function with a system prompt that POSTs to the LLM API |
| Tools | `register_for_llm()` and `register_for_execution()` | Custom actions running on a separate action server via HTTP | A dict of callables + JSON schema descriptions |
| Conversation | Two-agent chat with `initiate_chat()`, message history | — | A `messages` array that grows with each turn |
| Multi-Agent | `GroupChat` with `GroupChatManager`, speaker selection | — | Multiple agent functions called in sequence on shared `messages` |
| Nested Chats | `register_nested_chats()` for sub-task handling | — | A task queue (BFS) — agent schedules follow-ups via a tool |
| Termination | `is_termination_msg` callback, `max_consecutive_auto_reply` | — | The `while` loop exits when no `tool_calls` or `max_turns` reached |
| NLU | — | NLU pipeline: tokenizer, featurizer, intent classifier, entity extractor | An LLM call with a prompt: `"Classify this message's intent: {message}"` |
| Dialogue | — | Stories/Rules YAML + dialogue policies for conversation flow | A state machine: `if intent == 'greet': state = 'greeting'; respond()` |
| Slots | — | Typed slots for tracking entities and state across turns | A dict updated during conversation: `slots = {"order_id": "123"}` |
| CALM | — | LLM for understanding + deterministic `Flows` for business logic | LLM parses user intent, `if`/`else` routes to the right handler function |
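The "dict of callables + JSON schema descriptions" pattern in the Plain Python column can be sketched roughly like this — the `get_weather` tool and its schema are made-up examples, not part of either framework:

```python
# A tool registry: callables keyed by name, plus JSON-schema descriptions
# that get sent to the LLM so it knows what it can call.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub standing in for a real lookup

TOOLS = {"get_weather": get_weather}

TOOL_SCHEMAS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(name: str, args: dict) -> str:
    """Route a model-requested tool call to the matching Python function."""
    return TOOLS[name](**args)
```

The schemas go to the model; the dict stays on your side and resolves whatever tool call comes back.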

What both do in plain Python

Every concept in the table above — agent, tools, loop, memory, state — maps to a handful of Python primitives: a function, a dict, a list, and a while loop. Both AutoGen and Rasa wrap these primitives in their own class hierarchies and APIs. The underlying pattern is the same ~60 lines of code. The difference is how much ceremony each framework adds on top.
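As a rough sketch of that mapping — with a stubbed `llm()` standing in for the real HTTP call to your provider, and all names illustrative:

```python
def llm(messages, tools=None):
    # Placeholder for a POST to your LLM provider's chat endpoint.
    return {"role": "assistant", "content": "done", "tool_calls": None}

def run_agent(user_input, system_prompt, tools, max_turns=10):
    messages = [{"role": "system", "content": system_prompt},
                {"role": "user", "content": user_input}]
    for _ in range(max_turns):               # the loop
        reply = llm(messages, tools)
        messages.append(reply)               # the list is the memory
        if not reply.get("tool_calls"):      # termination condition
            return reply["content"]
        for call in reply["tool_calls"]:     # tools are a dict lookup
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})
    return messages[-1]["content"]
```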

When to use AutoGen

AutoGen excels at complex multi-agent workflows where agents need to debate or collaborate. For single-agent use cases or simple tool-calling agents, the plain Python version is significantly simpler.

What AutoGen does

AutoGen's core abstraction is the `ConversableAgent` — an agent that can send and receive messages. Two agents chat by alternating turns on a shared message history. `GroupChat` extends this to N agents, with a `GroupChatManager` that selects the next speaker (round-robin, random, or LLM-based selection). **Nested chats** allow an agent to spin up a sub-conversation to handle a complex subtask before returning to the main thread. AutoGen also provides code execution sandboxes, letting agents write and run code as part of their conversation. The framework thinks in terms of **conversations, not chains or graphs**. This makes it natural for workflows where agents need to debate, critique, or iteratively refine outputs together.

The plain Python equivalent

A `ConversableAgent` is a function that takes a `messages` array, calls the LLM with a system prompt, and returns the assistant message. Two-agent chat is a `while` loop where you alternate between calling `agent_a(messages)` and `agent_b(messages)`, appending each response. `GroupChat` is the same loop but with a **speaker selection step** — either rotate through a list or ask the LLM "who should speak next?" and call that agent function. Nested chats are a function call within the loop: pause the main conversation, run a sub-loop with different agents, and inject the result back. Tool registration is adding functions to a `tools` dict with their JSON schemas. The conversation-as-primitive model is **just `messages` arrays passed between functions**.
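A minimal sketch of that loop, assuming round-robin speaker selection and a stubbed LLM call (agent names and the `TERMINATE` convention are illustrative):

```python
def make_agent(name, system_prompt):
    """Each 'agent' is a closure over its own system prompt."""
    def agent(messages):
        # Placeholder for an LLM call with this agent's system prompt
        # prepended to the shared history.
        return {"role": "assistant", "name": name,
                "content": f"{name} responds"}
    return agent

def group_chat(agents, task, max_rounds=6):
    """Round-robin speaker selection over a shared messages list."""
    messages = [{"role": "user", "content": task}]
    for i in range(max_rounds):
        speaker = agents[i % len(agents)]  # or ask the LLM who goes next
        reply = speaker(messages)
        messages.append(reply)
        if "TERMINATE" in reply["content"]:  # termination check
            break
    return messages
```

Swapping the `i % len(agents)` line for an LLM call that picks the next speaker gives you LLM-based selection; a nested chat is just `group_chat` called recursively with different agents and the result appended back.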

Full AutoGen comparison →

When to use Rasa

Rasa is purpose-built for production conversational AI with enterprise requirements — on-premise deployment, regulatory compliance, deterministic business logic. For general-purpose agents or simple chatbots, an LLM with a system prompt and a few tools is faster to build and more flexible.

What Rasa does

Rasa provides a **complete framework for building conversational AI systems**. The traditional stack includes:

- an **NLU pipeline** (intent classification and entity extraction)
- **dialogue management** (stories and rules that define conversation flows)
- an **action server** for custom business logic

The newer **CALM architecture** separates language understanding (handled by LLMs) from business logic (handled by deterministic `Flows`), giving you LLM fluency without sacrificing reliability. Rasa focuses on enterprise requirements: on-premise deployment, data privacy, regulatory compliance, and deterministic behavior for critical business flows. You define your domain in YAML — intents, entities, slots, responses, actions — and Rasa trains a model that handles the conversation lifecycle. The framework is **battle-tested in production** across banking, telecom, and healthcare.


The plain Python equivalent

Intent classification is **one LLM call**: send the user's message with a prompt asking for the intent and entities, parse the JSON response. Dialogue management is a state machine — a dict tracking the current state and a series of `if`/`else` branches routing to the next step. Custom actions are functions you call based on the classified intent. Slot filling is updating a dict as entities are extracted. The entire conversational agent — intent handling, state tracking, tool dispatch, response generation — fits in about **60 lines**. The LLM handles the language understanding that Rasa's NLU pipeline was trained for, and your `if`/`else` logic handles the flows that Rasa's dialogue policies managed. **No YAML domain files, no training pipeline, no action server.**
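A minimal sketch of that pattern — the keyword check in `classify` is a stand-in for the real LLM call, and the intents, entities, and responses are made up for illustration:

```python
def classify(message):
    # Placeholder for one LLM call:
    #   "Classify this message's intent and extract entities: {message}"
    # A keyword stub stands in for parsing the model's JSON response.
    if "order" in message:
        return {"intent": "check_order", "entities": {"order_id": "123"}}
    return {"intent": "greet", "entities": {}}

def handle_turn(message, slots):
    """Dialogue management: slot filling plus if/else routing."""
    result = classify(message)
    slots.update(result["entities"])        # slot filling is a dict update
    if result["intent"] == "greet":
        return "Hello! How can I help?"
    if result["intent"] == "check_order":   # deterministic business logic
        return f"Order {slots['order_id']} ships tomorrow."
    return "Sorry, I didn't catch that."
```

The `slots` dict plays the role of Rasa's slot tracker; the `if`/`else` chain plays the role of its dialogue policies.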

Full Rasa comparison →

Or build your own in 60 lines

Both AutoGen and Rasa implement the same 8 patterns. An agent is a function. Tools are a dict. The loop is a while loop. The whole thing composes in ~60 lines of Python.

No framework. No dependencies. No opinions. Just the code.

Build it from scratch →