Anthropic Agent SDK vs LangChain: Which Agent Framework to Use?

The Anthropic Agent SDK packages Claude Code's agent loop as a library. LangChain is the most popular agent framework. Here is how they compare, and what the same patterns look like in plain Python.

By the numbers

Anthropic Agent SDK

GitHub Stars: 3.1k
Forks: 582
Language: Python
License: MIT
Created: 2023-01-17
Created by: Anthropic
Backed by: Google, Spark Capital
Production ready: Yes

github.com/anthropics/anthropic-sdk-python

LangChain

GitHub Stars: 132.3k
Forks: 21.8k
Language: Python
License: MIT
Created: 2022-10-17
Created by: Harrison Chase
Backed by: Sequoia Capital, Benchmark
Funding: $25M Series A (2023), $25M Series B (2024)
Weekly downloads: 3.5M
Cloud/SaaS: LangSmith (observability), LangServe (deployment)
Production ready: Yes
Used by: Notion, Elastic, Instacart

github.com/langchain-ai/langchain

GitHub stats as of April 2026. Stars indicate community interest, not necessarily quality or fit for your use case.

Concept | Anthropic Agent SDK | LangChain | Plain Python
Agent | Claude agent with built-in tools, MCP servers, and system prompt | AgentExecutor with LLMChain, PromptTemplate, OutputParser | A function that POSTs to /messages and returns the response
Tools | Built-in tools (bash, file read/write, web) + MCP server connections | @tool decorator, StructuredTool, BaseTool class hierarchy | A dict of callables: tools = {"bash": run_command, "read": read_file}
Agent Loop | SDK's internal agentic loop with automatic tool dispatch | AgentExecutor.invoke() with internal iteration | A while loop: call LLM, check for tool_use blocks, execute, repeat
Sub-Agents | Agents invoke other agents as tools via the SDK | | A function that calls another function: result = research_agent(query)
Lifecycle Hooks | 18 hook events: pre/post tool call, message, error, etc. | | if/else checks inside your loop: if should_log: log(event)
MCP Integration | One-line MCP server config for Playwright, Slack, GitHub, etc. | | HTTP client calls to each service: requests.post(slack_url, payload)
Conversation | | ConversationBufferMemory, ConversationSummaryMemory | A messages list that persists outside the function
State | | LangGraph state channels with typed reducers | A dict updated inside the loop: state["turns"] += 1
Memory | | VectorStoreRetrieverMemory, ConversationEntityMemory | A dict injected into the system prompt, saved via a remember() tool
Guardrails | | OutputParser, PydanticOutputParser, custom validators | Two lists of lambda rules checked before and after the LLM call

What both do in plain Python

Every concept in the table above — agent, tools, loop, memory, state — maps to a handful of Python primitives: a function, a dict, a list, and a while loop. Both Anthropic Agent SDK and LangChain wrap these primitives in their own class hierarchies and APIs. The underlying pattern is the same ~60 lines of code. The difference is how much ceremony each framework adds on top.
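As a concrete reference, here is a minimal sketch of that loop against the Anthropic Messages API. The single bash tool, the model id, and the 1024-token cap are placeholder choices for the example, not part of either framework.

```python
import os
import subprocess
import requests

# Placeholder tool implementation; any callable works here.
def run_command(command: str) -> str:
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout + result.stderr

tools = {"bash": run_command}

# JSON Schema descriptions the API uses to decide when to call a tool.
tool_schemas = [{
    "name": "bash",
    "description": "Run a shell command and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def agent(prompt: str, model: str = "claude-sonnet-4-5") -> str:  # model id is a placeholder
    messages = [{"role": "user", "content": prompt}]
    while True:
        resp = requests.post(
            "https://api.anthropic.com/v1/messages",
            headers={
                "x-api-key": os.environ["ANTHROPIC_API_KEY"],
                "anthropic-version": "2023-06-01",
                "content-type": "application/json",
            },
            json={"model": model, "max_tokens": 1024,
                  "messages": messages, "tools": tool_schemas},
        ).json()
        messages.append({"role": "assistant", "content": resp["content"]})
        if resp["stop_reason"] != "tool_use":
            # No more tool calls: return the text blocks.
            return "".join(b["text"] for b in resp["content"] if b["type"] == "text")
        # Execute every requested tool and send the results back.
        results = [
            {"type": "tool_result", "tool_use_id": b["id"],
             "content": tools[b["name"]](**b["input"])}
            for b in resp["content"] if b["type"] == "tool_use"
        ]
        messages.append({"role": "user", "content": results})
```

Calling agent("List the files in the current directory") round-trips through the bash tool until the model stops requesting tools; everything either framework adds sits on top of this shape.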

When to use Anthropic Agent SDK

The Anthropic Agent SDK's real value is packaging Claude Code's battle-tested agent loop with built-in tools and MCP integration. If you want a production agent that reads files, runs commands, and connects to services, it saves significant plumbing. For understanding how agents work, the plain version is more instructive.

What the Anthropic Agent SDK does

The Anthropic Agent SDK takes Claude Code — the coding agent used by hundreds of thousands of developers — and ships it as a Python and TypeScript library. You get the same agent loop, built-in tools (bash execution, file read/write, web search), and context management that Claude Code uses internally. The standout feature is MCP (Model Context Protocol) integration: connect Playwright, Slack, GitHub, databases, and hundreds of other servers with a single config line. The SDK also provides 18 lifecycle hooks that let you intercept tool calls, messages, errors, and other events. This gives you fine-grained control over agent behavior without modifying the core loop. It's less a framework and more a productized agent runtime.

The plain Python equivalent

The agent loop is a while loop that POSTs to the /messages API, checks for tool_use blocks in the response, executes the matching function from a tools dict, appends the result to messages, and repeats. Built-in tools are just functions: bash is subprocess.run(), file reading is open().read(), web search is an HTTP call to a search API. MCP integration is HTTP client calls to each service — there's nothing magical about connecting to Slack or GitHub beyond knowing their API endpoints. Lifecycle hooks are if/else checks at specific points in your loop. The entire agent — tool dispatch, sub-agent delegation, logging — fits in about 60 lines. The SDK's value isn't in the pattern (which is simple) but in the pre-built tool implementations and MCP plumbing.
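For instance, the built-in tools and lifecycle hooks could be sketched as below. The hook names pre_tool and post_tool are made up for this example and do not mirror the SDK's actual event names.

```python
import subprocess

# Built-in tools as plain functions.
def bash(command: str) -> str:
    out = subprocess.run(command, shell=True, capture_output=True, text=True)
    return out.stdout + out.stderr

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def write_file(path: str, content: str) -> str:
    with open(path, "w") as f:
        f.write(content)
    return f"wrote {len(content)} bytes to {path}"

tools = {"bash": bash, "read": read_file, "write": write_file}

# "Lifecycle hooks" are just callables fired at fixed points in the loop.
hooks = {
    "pre_tool": [lambda name, args: print(f"-> {name}({args})")],
    "post_tool": [lambda name, result: print(f"<- {name}: {result[:80]}")],
}

def dispatch(name: str, args: dict) -> str:
    for hook in hooks["pre_tool"]:
        hook(name, args)
    result = tools[name](**args)
    for hook in hooks["post_tool"]:
        hook(name, result)
    return result
```

Inside the agent loop, dispatch() replaces the direct tools[name](**args) call, which is all a hook system amounts to.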

Full Anthropic Agent SDK comparison →

When to use LangChain

LangChain adds value when you need production integrations (vector stores, specific LLM providers, deployment tooling). But if you want to understand what's happening — or your use case is straightforward — the plain Python version is easier to debug, modify, and reason about.

What LangChain does

LangChain provides a unifying interface across LLM providers, a class hierarchy for tools and memory, and orchestration via AgentExecutor and LangGraph. The core value proposition is interchangeable components: swap OpenAI for Anthropic by changing one class, plug in a vector store for retrieval, add memory without rewriting your loop. It also ships with dozens of integrations — document loaders, text splitters, embedding models, vector stores — that save you from writing boilerplate HTTP calls. For teams that need to compose many integrations quickly, this catalog is genuinely useful. The tradeoff is that you inherit a large dependency tree and a set of abstractions that sit between you and the actual API calls.
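As an illustration of that interchangeability, here is a sketch based on LangChain's documented tool-calling agent pattern. Package layout and imports shift between versions, and the model id is a placeholder, so treat this as indicative rather than exact.

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_anthropic import ChatAnthropic  # swap for langchain_openai.ChatOpenAI to change providers
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool

@tool
def read_file(path: str) -> str:
    """Read a text file and return its contents."""
    with open(path) as f:
        return f.read()

llm = ChatAnthropic(model="claude-sonnet-4-5")  # model id is a placeholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # where the executor inserts tool calls and results
])

agent = create_tool_calling_agent(llm, [read_file], prompt)
executor = AgentExecutor(agent=agent, tools=[read_file])

result = executor.invoke({"input": "Summarize pyproject.toml"})
print(result["output"])
```

Swapping providers means changing the ChatAnthropic import and constructor; the tool, prompt, and executor stay the same.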

The plain Python equivalent

Every LangChain abstraction maps to a small piece of plain Python. AgentExecutor is a while loop that calls the LLM, checks for tool_calls in the response, executes the matching function from a tools dict, appends the result to a messages array, and repeats. Memory is a dict you inject into the system prompt. Output parsing is a function that validates the LLM's response before returning it. The entire agent — tool dispatch, conversation history, state tracking, guardrails — fits in about 60 lines of Python. No base classes, no decorators, no chain composition. Just a function, a dict, a list, and a loop. When something breaks, you read your 60 lines instead of navigating a class hierarchy.
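To make the memory and guardrail rows from the table concrete, here is a sketch where the stored facts and the rules are placeholder examples; call_llm stands in for whatever function wraps your /messages request.

```python
memory = {"user_name": "Alex", "prefers": "short answers"}

def system_prompt() -> str:
    # Memory is injected into the system prompt on every call.
    facts = "\n".join(f"- {k}: {v}" for k, v in memory.items())
    return f"You are a helpful assistant.\n\nKnown facts about the user:\n{facts}"

def remember(key: str, value: str) -> str:
    """A tool the model can call to persist a fact across turns."""
    memory[key] = value
    return f"remembered {key}"

# Guardrails: rules checked before the call (on user input) and after (on model output).
input_rules = [
    lambda text: "ignore previous instructions" not in text.lower(),
]
output_rules = [
    lambda text: len(text) < 4000,
    lambda text: "BEGIN PRIVATE KEY" not in text,
]

def guarded_call(user_input: str, call_llm) -> str:
    if not all(rule(user_input) for rule in input_rules):
        return "Input rejected by guardrail."
    output = call_llm(system_prompt(), user_input)
    if not all(rule(output) for rule in output_rules):
        return "Output blocked by guardrail."
    return output
```

The same shape covers state tracking: a dict updated inside the loop, read wherever you need it.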

Full LangChain comparison →

Or build your own in 60 lines

Both Anthropic Agent SDK and LangChain implement the same 8 patterns. An agent is a function. Tools are a dict. The loop is a while loop. The whole thing composes in ~60 lines of Python.

No framework. No dependencies. No opinions. Just the code.

Build it from scratch →