A Tour of Agents

LangChain vs Building from Scratch

LangChain is the most popular agent framework. It provides AgentExecutor, tool decorators, memory classes, and output parsers. But every one of these maps to a few lines of plain Python. Here's what each abstraction actually does.
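For the first row of the mapping below — the agent itself — a sketch of "a function that POSTs to /chat/completions" might look like this. The endpoint URL and model name are assumptions (any OpenAI-style API works), and the `transport` parameter is injected only so the function can be exercised offline with a fake response:

```python
import io
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"  # assumed OpenAI-style endpoint

def chat(messages, api_key, model="gpt-4o-mini", transport=urllib.request.urlopen):
    """One LLM call: POST the messages list, return the assistant message."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"model": model, "messages": messages}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with transport(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]

# A fake transport for offline testing: returns a canned API response.
def fake_transport(req):
    return io.BytesIO(json.dumps(
        {"choices": [{"message": {"role": "assistant", "content": "hi"}}]}
    ).encode("utf-8"))
```

That's the whole "agent": roughly fifteen lines, most of them HTTP plumbing.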

| Concept | LangChain | Plain Python |
| --- | --- | --- |
| Agent | `AgentExecutor` with `LLMChain`, `PromptTemplate`, `OutputParser` | A function that POSTs to `/chat/completions` and returns the response |
| Tools | `@tool` decorator, `StructuredTool`, `BaseTool` class hierarchy | A dict of callables: `tools = {"add": lambda a, b: a + b}` |
| Agent loop | `AgentExecutor.invoke()` with internal iteration | A `while` loop: call the LLM, check for `tool_calls`, execute, repeat |
| Conversation | `ConversationBufferMemory`, `ConversationSummaryMemory` | A messages list that persists outside the function |
| State | LangGraph state channels with typed reducers | A dict updated inside the loop: `state["turns"] += 1` |
| Memory | `VectorStoreRetrieverMemory`, `ConversationEntityMemory` | A dict injected into the system prompt, saved via a `remember()` tool |
| Guardrails | `OutputParser`, `PydanticOutputParser`, custom validators | Two lists of lambda rules checked before and after the LLM call |
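The right-hand column composes into one short program. Below is a sketch of the whole thing — tools dict, `while` loop, messages list, state dict, and the two guardrail lists — with the LLM stubbed out so the example runs offline. The message shapes (`tool_calls` as a list of `{"name", "args"}` dicts) are simplified assumptions, not any provider's exact wire format:

```python
tools = {"add": lambda a, b: a + b}              # Tools: a dict of callables

pre_rules = [lambda msgs: len(msgs) < 50]        # Guardrails: checked before the call...
post_rules = [lambda text: "ERROR" not in text]  # ...and after it

def run_agent(llm, user_input, max_turns=5):
    messages = [{"role": "user", "content": user_input}]  # Conversation
    state = {"turns": 0}                                  # State
    while state["turns"] < max_turns:                     # Agent loop
        assert all(rule(messages) for rule in pre_rules)
        msg = llm(messages)
        messages.append(msg)
        state["turns"] += 1
        if "tool_calls" in msg:                           # execute requested tools
            for call in msg["tool_calls"]:
                result = tools[call["name"]](*call["args"])
                messages.append({"role": "tool", "content": str(result)})
            continue                                      # let the model see the results
        assert all(rule(msg["content"]) for rule in post_rules)
        return msg["content"]
    return "max turns exceeded"

# Stub LLM: first requests a tool call, then answers using the tool result.
def stub_llm(messages):
    if messages[-1]["role"] == "tool":
        return {"role": "assistant",
                "content": f"The sum is {messages[-1]['content']}"}
    return {"role": "assistant",
            "tool_calls": [{"name": "add", "args": (2, 3)}]}
```

Swap `stub_llm` for a real API call (such as the `chat` function above the table) and this is a working agent. Every abstraction in the left-hand column is visible here as a plain variable or a plain loop.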

The verdict

LangChain adds value when you need production integrations (vector stores, specific LLM providers, deployment tooling). But if you want to understand what's happening — or your use case is straightforward — the plain Python version is easier to debug, modify, and reason about.