Agno vs LangChain: Which Agent Framework to Use?
Agno (formerly Phidata) is a lightweight Python framework for building agents. LangChain is the most popular agent framework. Here is how they compare, and what the same patterns look like in plain Python.
By the numbers

| | Agno | LangChain |
|---|---|---|
| GitHub stars | 39.2k | 132.3k |
| Forks | 5.2k | 21.8k |
| Language | Python | Python |
| License | Apache-2.0 | MIT |
| Created | 2022-05-04 | 2022-10-17 |
| Creator | — | Harrison Chase |
| Backers | — | Sequoia Capital, Benchmark |
| Funding | — | $25M Series A (2023), $25M Series B (2024) |
| Ecosystem | — | LangSmith (observability), LangServe (deployment) |
| Used by | — | Notion, Elastic, Instacart |

github.com/langchain-ai/langchain

GitHub stats as of April 2026. Stars indicate community interest, not necessarily quality or fit for your use case.
| Concept | Agno | LangChain | Plain Python |
|---|---|---|---|
| Agent | Agent(model=OpenAIChat(), instructions=[...]) class with run() method | AgentExecutor with LLMChain, PromptTemplate, OutputParser | A function that POSTs to /chat/completions and returns the response |
| Tools | Function tools via @tool decorator or built-in toolkits (web search, SQL, etc.) | @tool decorator, StructuredTool, BaseTool class hierarchy | A dict of callables: tools = {"search": search_web, "sql": run_query} |
| Agent Loop | Agent.run() handles tool dispatch internally, configurable via show_tool_calls | AgentExecutor.invoke() with internal iteration | A while loop: call LLM, check for tool_calls, execute, repeat |
| Memory / Knowledge | Knowledge bases (PDF, URL, vector DB) injected via knowledge param + built-in memory | — | A list of relevant chunks injected into the system prompt via a retrieval function |
| Multi-Agent (Teams) | Team class with agents list, mode (sequential, parallel, coordinate), and shared memory | — | A function that calls agent functions in sequence or parallel, passing results between them |
| Storage | SqlAgentStorage, PostgresAgentStorage for persisting sessions and state | — | json.dump() / json.load() to a file, or a simple DB insert |
| Conversation | — | ConversationBufferMemory, ConversationSummaryMemory | A messages list that persists outside the function |
| State | — | LangGraph state channels with typed reducers | A dict updated inside the loop: state["turns"] += 1 |
| Memory | — | VectorStoreRetrieverMemory, ConversationEntityMemory | A dict injected into the system prompt, saved via a remember() tool |
| Guardrails | — | OutputParser, PydanticOutputParser, custom validators | Two lists of lambda rules checked before and after the LLM call |
What both do in plain Python
Every concept in the table above — agent, tools, loop, memory, state — maps to a handful of Python primitives: a function, a dict, a list, and a while loop. Both Agno and LangChain wrap these primitives in their own class hierarchies and APIs. The underlying pattern is the same ~60 lines of code. The difference is how much ceremony each framework adds on top.
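As a concrete sketch of that loop, here is the pattern with the LLM call stubbed out so the shape is visible (`fake_llm`, the tool names, and the message format are illustrative stand-ins; a real agent would POST to a chat-completions endpoint where the stub sits):

```python
import json

# Tools are a dict of plain callables.
def search_web(query: str) -> str:
    return f"results for {query!r}"

tools = {"search": search_web}

# Stand-in for the LLM API: first call requests a tool, second call answers.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_calls": [{"id": "1", "name": "search",
                                "arguments": {"query": "agno vs langchain"}}]}
    return {"content": "Both wrap the same loop."}

# The agent loop: call the LLM, dispatch tool calls, append results, repeat.
def run_agent(user_input: str, llm=fake_llm) -> str:
    messages = [{"role": "system", "content": "You are a helpful agent."},
                {"role": "user", "content": user_input}]
    while True:
        reply = llm(messages)
        if "tool_calls" not in reply:
            return reply["content"]
        for call in reply["tool_calls"]:
            result = tools[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": json.dumps(result)})
```

Swapping `fake_llm` for a real HTTP call is the only change needed to make this a working agent; everything else is the dict, the list, and the while loop.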
When to use Agno
Agno adds value when you want a batteries-included agent with minimal boilerplate — especially for multi-modal agents or team orchestration. But each of its abstractions maps to a small piece of plain Python. If your agent is straightforward, writing it directly gives you full control with zero framework overhead.
What Agno does
Agno gives you a single Agent class that wires together an LLM, tools, instructions, knowledge bases, and storage. You configure an agent declaratively — pass in a model, a list of tools, and optional knowledge sources — and call agent.run(). It handles the tool-calling loop, injects knowledge into context, and persists conversation state. Agno also supports multi-modal agents (vision, audio) and team-based orchestration where multiple agents coordinate on tasks. The framework ships with built-in toolkits for common tasks: web search, SQL queries, file operations. Compared to LangChain, it's lighter — fewer abstractions, less indirection. The tradeoff is a smaller ecosystem and fewer third-party integrations.
The plain Python equivalent
Every Agno abstraction maps to plain Python. The Agent class is a function that POSTs to the LLM API, checks for tool_calls, dispatches them from a dict, and loops. Knowledge bases are a retrieval function that fetches relevant chunks and injects them into the system prompt. Memory is a messages list. Storage is json.dump(). Teams are a function that calls multiple agent functions and combines their outputs. The entire agent — with tools, knowledge retrieval, memory, and multi-agent coordination — fits in about 60 lines. No base classes, no decorators. When something breaks, you debug your function, not a framework's internals.
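A sketch of the knowledge and team pieces described above (the corpus, the word-overlap scoring, and the agent functions are illustrative stand-ins; a real knowledge base would use an embedding search where the scoring function sits):

```python
# Knowledge base: a retrieval function that picks relevant chunks and
# injects them into the system prompt.
CHUNKS = [
    "Agno agents are configured declaratively.",
    "Plain Python agents are a function and a loop.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    words = set(query.lower().split())
    score = lambda chunk: len(words & set(chunk.lower().split()))
    return sorted(CHUNKS, key=score, reverse=True)[:k]

def build_system_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Use this context to answer:\n{context}"

# A team: agent functions called in sequence, passing results along.
def researcher(task: str) -> str:
    return f"notes on {task}"

def writer(notes: str) -> str:
    return f"report from {notes}"

def run_team(task: str) -> str:
    return writer(researcher(task))
```

The `Team` class's sequential mode reduces to `run_team`; a parallel mode would be the same functions dispatched via `concurrent.futures` and their outputs merged.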
When to use LangChain
LangChain adds value when you need production integrations (vector stores, specific LLM providers, deployment tooling). But if you want to understand what's happening — or your use case is straightforward — the plain Python version is easier to debug, modify, and reason about.
What LangChain does
LangChain provides a unifying interface across LLM providers, a class hierarchy for tools and memory, and orchestration via AgentExecutor and LangGraph. The core value proposition is interchangeable components: swap OpenAI for Anthropic by changing one class, plug in a vector store for retrieval, add memory without rewriting your loop. It also ships with dozens of integrations — document loaders, text splitters, embedding models, vector stores — that save you from writing boilerplate HTTP calls. For teams that need to compose many integrations quickly, this catalog is genuinely useful. The tradeoff is that you inherit a large dependency tree and a set of abstractions that sit between you and the actual API calls.
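What the interchangeable-components idea boils down to in plain Python is a lookup table plus one request builder, sketched below (the endpoint URLs and model names are illustrative; real provider APIs also differ in auth headers and payload shape, which is part of what a framework normalizes):

```python
# Provider swap without a framework: change one dict key, not one class.
PROVIDERS = {
    "openai": {"url": "https://api.openai.com/v1/chat/completions",
               "model": "gpt-4o-mini"},
    "anthropic": {"url": "https://api.anthropic.com/v1/messages",
                  "model": "claude-3-5-haiku-latest"},
}

def build_request(provider: str, messages: list[dict]) -> dict:
    cfg = PROVIDERS[provider]
    return {"url": cfg["url"],
            "json": {"model": cfg["model"], "messages": messages}}
```

The honest version of this sketch grows a per-provider adapter function once payloads diverge; that growth is exactly the boilerplate LangChain's catalog absorbs for you.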
The plain Python equivalent
Every LangChain abstraction maps to a small piece of plain Python. AgentExecutor is a while loop that calls the LLM, checks for tool_calls in the response, executes the matching function from a tools dict, appends the result to a messages array, and repeats. Memory is a dict you inject into the system prompt. Output parsing is a function that validates the LLM's response before returning it. The entire agent — tool dispatch, conversation history, state tracking, guardrails — fits in about 60 lines of Python. No base classes, no decorators, no chain composition. Just a function, a dict, a list, and a loop. When something breaks, you read your 60 lines instead of navigating a class hierarchy.
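The guardrail and memory rows from the table, sketched concretely (the specific rules and the stubbed model reply are illustrative):

```python
# Guardrails: two lists of rules checked before and after the LLM call.
pre_rules = [lambda text: len(text) < 2000,
             lambda text: "DROP TABLE" not in text]
post_rules = [lambda text: bool(text.strip())]

# Memory: a messages list that persists outside the function.
messages: list[dict] = []

def guarded_call(user_input: str, llm) -> str:
    if not all(rule(user_input) for rule in pre_rules):
        raise ValueError("input rejected")
    messages.append({"role": "user", "content": user_input})
    reply = llm(messages)
    if not all(rule(reply) for rule in post_rules):
        raise ValueError("output rejected")
    messages.append({"role": "assistant", "content": reply})
    return reply
```

An `OutputParser` that validates structure slots in as one more entry in `post_rules`; nothing else changes.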
Or build your own in 60 lines
Both Agno and LangChain implement the same 8 patterns. An agent is a function. Tools are a dict. The loop is a while loop. The whole thing composes in ~60 lines of Python.
No framework. No dependencies. No opinions. Just the code.
Build it from scratch →