# LangChain vs Building from Scratch
LangChain is the most widely used agent framework. It provides AgentExecutor, tool decorators, memory classes, and output parsers. But each of these abstractions maps to a few lines of plain Python. Here's what each one actually does.
| Concept | LangChain | Plain Python |
|---|---|---|
| Agent | AgentExecutor with LLMChain, PromptTemplate, OutputParser | A function that POSTs to /chat/completions and returns the response |
| Tools | @tool decorator, StructuredTool, BaseTool class hierarchy | A dict of callables: tools = {"add": lambda a, b: a + b} |
| Agent Loop | AgentExecutor.invoke() with internal iteration | A while loop: call LLM, check for tool_calls, execute, repeat |
| Conversation | ConversationBufferMemory, ConversationSummaryMemory | A messages list that persists outside the function |
| State | LangGraph state channels with typed reducers | A dict updated inside the loop: state["turns"] += 1 |
| Memory | VectorStoreRetrieverMemory, ConversationEntityMemory | A dict injected into the system prompt, saved via a remember() tool |
| Guardrails | OutputParser, PydanticOutputParser, custom validators | Two lists of lambda rules checked before and after the LLM call |
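To make the right-hand column concrete, here is a minimal sketch of the plain-Python agent loop from the table: a dict of callables for tools, a messages list for conversation history, and a while loop that executes tool calls until the model returns a final answer. The names `run_agent` and `call_llm`, and the exact response shape, are assumptions loosely modeled on an OpenAI-style tool-calling response, not any particular provider's API; in production `call_llm` would POST the messages to /chat/completions.

```python
import json

# Tools are just a dict of callables -- no decorator or class hierarchy.
tools = {"add": lambda a, b: a + b}

def run_agent(user_input, call_llm):
    """Minimal agent loop. `call_llm` takes a messages list and returns
    an assistant message dict (hypothetical OpenAI-like shape)."""
    messages = [{"role": "user", "content": user_input}]
    while True:
        reply = call_llm(messages)
        messages.append(reply)
        tool_calls = reply.get("tool_calls")
        if not tool_calls:
            # No tool requests: the model's content is the final answer.
            return reply["content"], messages
        for call in tool_calls:
            # Look up the callable and run it with the model's arguments.
            fn = tools[call["name"]]
            result = fn(**json.loads(call["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": str(result)})
```

Because `messages` persists and is returned, multi-turn conversation is just feeding the same list back in on the next call — the table's "Conversation" row. Per-run state (the "State" row) would be an ordinary dict updated inside the same loop.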
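The "Memory" row, sketched: a plain dict injected into the system prompt each turn, with a tool the model can call to write to it. The names `remember` and `build_system_prompt` are hypothetical, introduced only for this illustration.

```python
# Long-term memory is just a dict that outlives any single conversation.
memory = {}

def remember(key, value):
    """A tool the model can call to persist a fact across sessions."""
    memory[key] = value
    return f"remembered {key}"

def build_system_prompt():
    """Inject everything remembered so far into the system prompt."""
    facts = "\n".join(f"- {k}: {v}" for k, v in memory.items())
    return "You are a helpful assistant.\nKnown facts:\n" + facts
```

Expose `remember` in the tools dict alongside the others, and rebuild the system prompt at the top of each turn — that is the whole mechanism.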
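And the "Guardrails" row, sketched: two lists of (predicate, message) pairs, one checked before the LLM call and one after. All names here (`input_rules`, `guarded_call`, the specific rules) are hypothetical examples, not a prescribed rule set.

```python
# Each rule is a predicate plus the message to return if it fails.
input_rules = [
    (lambda text: len(text) < 2000, "input too long"),
    (lambda text: "DROP TABLE" not in text, "input looks like SQL injection"),
]
output_rules = [
    (lambda text: bool(text.strip()), "empty response"),
]

def check(rules, text):
    """Return the first violated rule's message, or None if all pass."""
    for ok, message in rules:
        if not ok(text):
            return message
    return None

def guarded_call(user_input, call_llm):
    """Run input rules, call the model, then run output rules."""
    if (err := check(input_rules, user_input)):
        return f"rejected: {err}"
    reply = call_llm(user_input)
    if (err := check(output_rules, reply)):
        return f"blocked: {err}"
    return reply
```

Adding a rule is appending a lambda to a list; debugging one is reading it — no parser class or validator framework in between.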
## The verdict
LangChain adds value when you need production integrations (vector stores, specific LLM providers, deployment tooling). But if you want to understand what's happening — or your use case is straightforward — the plain Python version is easier to debug, modify, and reason about.