
LangChain vs Smolagents: Which Agent Framework to Use?

LangChain is the most popular agent framework. Smolagents is Hugging Face's minimalist agent library. Here is how they compare, and what the same patterns look like in plain Python.

By the numbers

LangChain

GitHub Stars: 132.3k
Forks: 21.8k
Language: Python
License: MIT
Created: 2022-10-17
Created by: Harrison Chase
Backed by: Sequoia Capital, Benchmark
Funding: $25M Series A (2023), $25M Series B (2024)
Weekly downloads: 3.5M
Cloud/SaaS: LangSmith (observability), LangServe (deployment)
Production ready: Yes
Used by: Notion, Elastic, Instacart

github.com/langchain-ai/langchain

Smolagents

GitHub Stars: 26.4k
Forks: 2.4k
Language: Python
License: Apache-2.0
Created: 2024-12-05
Created by: Hugging Face

github.com/huggingface/smolagents

GitHub stats as of April 2026. Stars indicate community interest, not necessarily quality or fit for your use case.

| Concept | LangChain | Smolagents | Plain Python |
| --- | --- | --- | --- |
| Agent | AgentExecutor with LLMChain, PromptTemplate, OutputParser | CodeAgent or ToolCallingAgent with model and tools list | A function that POSTs to /chat/completions and returns the response |
| Tools | @tool decorator, StructuredTool, BaseTool class hierarchy | @tool decorator or Tool class with name, description, and callable | A dict of callables: tools = {"add": lambda a, b: a + b} |
| Agent Loop | AgentExecutor.invoke() with internal iteration | Internal loop: think (LLM reasons), act (code/tool call), observe (result) | A while loop: call LLM, check for tool_calls, execute, repeat |
| Conversation | ConversationBufferMemory, ConversationSummaryMemory | | A messages list that persists outside the function |
| State | LangGraph state channels with typed reducers | | A dict updated inside the loop: state["turns"] += 1 |
| Memory | VectorStoreRetrieverMemory, ConversationEntityMemory | | A dict injected into the system prompt, saved via a remember() tool |
| Guardrails | OutputParser, PydanticOutputParser, custom validators | | Two lists of lambda rules checked before and after the LLM call |
| Code Actions | | CodeAgent writes Python code as its action, executed in sandbox | LLM returns code string, you run exec(code, {"tools": tools}) |
| Sandbox | | E2B, Docker, Modal, or Pyodide sandbox for safe code execution | subprocess.run() in a Docker container, or restricted exec() with limited globals |
| Model Support | | HuggingFace Hub models, OpenAI, Anthropic, local via LiteLLM | An HTTP POST to whichever provider's API you choose |

What both do in plain Python

Every concept in the table above — agent, tools, loop, memory, state — maps to a handful of Python primitives: a function, a dict, a list, and a while loop. Both LangChain and Smolagents wrap these primitives in their own class hierarchies and APIs. The underlying pattern is the same ~60 lines of code. The difference is how much ceremony each framework adds on top.
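
To make that concrete, here is a minimal sketch of the pattern: tools are a dict, the conversation is a list, and the agent is a function around a while loop. It assumes an OpenAI-compatible /chat/completions endpoint, an OPENAI_API_KEY environment variable, and an illustrative model name; swap in your provider's equivalents.

```python
import json
import os

import requests

# Tools are a dict of plain callables.
tools = {"add": lambda a, b: a + b}

# The matching schema the LLM sees (OpenAI-style function definitions).
tool_schemas = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two numbers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    },
}]

def agent(user_msg: str) -> str:
    """An agent: a function wrapping a messages list and a while loop."""
    messages = [{"role": "user", "content": user_msg}]
    while True:  # the agent loop
        reply = requests.post(
            "https://api.openai.com/v1/chat/completions",  # assumed endpoint
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={"model": "gpt-4o-mini", "messages": messages,
                  "tools": tool_schemas},
            timeout=60,
        ).json()["choices"][0]["message"]
        messages.append(reply)
        if not reply.get("tool_calls"):   # no tool call: final answer
            return reply["content"]
        for call in reply["tool_calls"]:  # act: dispatch via the dict
            args = json.loads(call["function"]["arguments"])
            result = tools[call["function"]["name"]](**args)
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": str(result)})
```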

When to use LangChain

LangChain adds value when you need production integrations (vector stores, specific LLM providers, deployment tooling). But if you want to understand what's happening — or your use case is straightforward — the plain Python version is easier to debug, modify, and reason about.

What LangChain does

LangChain provides a unifying interface across LLM providers, a class hierarchy for tools and memory, and orchestration via AgentExecutor and LangGraph. The core value proposition is interchangeable components: swap OpenAI for Anthropic by changing one class, plug in a vector store for retrieval, add memory without rewriting your loop. It also ships with dozens of integrations — document loaders, text splitters, embedding models, vector stores — that save you from writing boilerplate HTTP calls. For teams that need to compose many integrations quickly, this catalog is genuinely useful. The tradeoff is that you inherit a large dependency tree and a set of abstractions that sit between you and the actual API calls.
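
A hedged sketch of that interchangeability, assuming the current langchain-openai and langchain-anthropic provider packages (model names are illustrative):

```python
# pip install langchain-openai langchain-anthropic
from langchain_anthropic import ChatAnthropic
from langchain_openai import ChatOpenAI

# Swapping providers means swapping one class; .invoke() stays the same.
llm = ChatOpenAI(model="gpt-4o-mini")
# llm = ChatAnthropic(model="claude-3-5-sonnet-latest")

reply = llm.invoke("Summarize what an agent loop does in one sentence.")
print(reply.content)  # both classes return a message object with .content
```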

The plain Python equivalent

Every LangChain abstraction maps to a small piece of plain Python. AgentExecutor is a while loop that calls the LLM, checks for tool_calls in the response, executes the matching function from a tools dict, appends the result to a messages array, and repeats. Memory is a dict you inject into the system prompt. Output parsing is a function that validates the LLM's response before returning it. The entire agent — tool dispatch, conversation history, state tracking, guardrails — fits in about 60 lines of Python. No base classes, no decorators, no chain composition. Just a function, a dict, a list, and a loop. When something breaks, you read your 60 lines instead of navigating a class hierarchy.
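
For instance, the memory and output-parsing pieces reduce to a dict and a plain function. The helper names below are hypothetical:

```python
memory = {"user_name": "Ada"}  # "memory" is a plain dict

def system_prompt() -> str:
    # Inject remembered facts into the system prompt before each LLM call.
    facts = "\n".join(f"- {k}: {v}" for k, v in memory.items())
    return f"You are a helpful assistant. Known facts:\n{facts}"

def validate(reply: str) -> str:
    # Output "parsing" is a function that checks the reply before returning it.
    if not reply.strip():
        raise ValueError("empty LLM response")
    return reply
```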

Full LangChain comparison →

When to use Smolagents

Smolagents lives up to its name — it's genuinely minimal and the code-agent approach is a real innovation that reduces LLM calls by ~30%. If you want a lightweight agent library with HuggingFace ecosystem access, it's excellent. For understanding the fundamentals, the plain version is even simpler.

What Smolagents does

Smolagents takes a distinct approach: instead of having the LLM emit structured JSON tool calls, the CodeAgent has the LLM write Python code that performs the action directly. The model reasons about the task, writes code that calls available tools, and the framework executes that code in a sandboxed environment. This reduces the number of LLM calls by about 30% compared to traditional tool-calling agents, because the model can chain multiple operations in a single code block. The framework provides two agent types: CodeAgent for the code-writing approach and ToolCallingAgent for traditional structured tool calls. Sandbox options include E2B, Docker, Modal, and Pyodide for secure execution. The core library is deliberately minimal — about 1,000 lines of logic with few abstractions over raw Python.
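
A minimal usage sketch, assuming class names from the smolagents docs at the time of writing (InferenceClientModel replaced the earlier HfApiModel, so the exact import may vary by version):

```python
# pip install smolagents
from smolagents import CodeAgent, InferenceClientModel, tool

@tool
def add(a: int, b: int) -> int:
    """Add two integers.

    Args:
        a: The first number.
        b: The second number.
    """
    return a + b

# CodeAgent has the model write Python that calls add() directly,
# chaining several steps in one code block instead of one call per step.
agent = CodeAgent(tools=[add], model=InferenceClientModel())
print(agent.run("What is (2 + 3) + 4?"))
```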

The plain Python equivalent

A code agent is an LLM call that returns a Python code string, which you execute with exec() in a restricted namespace. You pass available tools as the namespace globals, the LLM writes code that calls them, and you capture the output. The sandbox is a Docker container or a restricted exec() with limited builtins. The agent loop is identical to every other framework: call the LLM, execute the action (code or tool call), append the result as an observation, repeat until done. Tool definitions are functions in a dict. Model support is an HTTP POST to an API endpoint. The entire code-agent pattern — including sandbox setup — fits in about 70 lines. Smolagents wraps this cleanly, but the underlying mechanic is straightforward exec() with safety guards.
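
Here is a sketch of that mechanic with a hypothetical helper. Note that a restricted exec() is a speed bump, not a real sandbox; use a container for untrusted code.

```python
import contextlib
import io

def run_code_action(code: str, tools: dict) -> str:
    """Hypothetical helper: run LLM-written code, return captured stdout."""
    namespace = {
        "__builtins__": {"print": print, "range": range, "len": len},
        **tools,  # the LLM's code can call these by name
    }
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(code, namespace)  # the "code action"
    return buf.getvalue()      # captured output becomes the observation

# The LLM would return a string like this as its action:
print(run_code_action("print(add(2, 3) + 4)", {"add": lambda a, b: a + b}))
```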

Full Smolagents comparison →

Or build your own in 60 lines

Both LangChain and Smolagents implement the same 8 patterns. An agent is a function. Tools are a dict. The loop is a while loop. The whole thing composes in ~60 lines of Python.

No framework. No dependencies. No opinions. Just the code.

Build it from scratch →