Build an AI Agent from Scratch
9 lessons that teach you how LangChain, CrewAI, and AutoGen work under the hood — by building the same thing in ~60 lines of Python.
The Agent Function
An AI agent is just a function that POSTs to an LLM API. Build one from scratch in Python and see what LangChain AgentExecutor, CrewAI, and AutoGen abstract away.
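A minimal sketch of that function, assuming an OpenAI-style chat completions endpoint (the URL, model name, and message format here are illustrative assumptions, not the lesson's exact code):

```python
import json
import urllib.request

# Assumed OpenAI-style endpoint; any chat-completions-compatible API works.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt, api_key, model="gpt-4o-mini"):
    """Assemble the POST an agent sends: a JSON body of messages."""
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def call_llm(prompt, api_key):
    """The whole 'agent': POST the prompt, return the model's reply text."""
    with urllib.request.urlopen(build_request(prompt, api_key)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Everything a framework adds sits on top of that one POST.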
Tools = Dict
LLM tool calling is a dictionary lookup: tools[name](**args). Build function calling from scratch in Python and see what LangChain's @tool decorator hides.
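The lookup in one sketch (the tool functions here are toy stand-ins):

```python
# Tool calling is a name->function lookup plus argument unpacking.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real weather API call

def add(a: float, b: float) -> float:
    return a + b

tools = {"get_weather": get_weather, "add": add}

def run_tool(name, args):
    """What a framework does when the LLM emits a tool call."""
    return tools[name](**args)
```

When the model replies with `{"name": "add", "args": {"a": 2, "b": 3}}`, dispatching it is one line: `run_tool("add", {"a": 2, "b": 3})`.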
The Agent Loop
The agent loop is a while loop: call the LLM, execute tools, repeat until done. Build the core loop behind LangChain AgentExecutor and OpenAI Agents SDK.
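A sketch of that loop, using a scripted fake LLM so it runs offline (the reply format is an assumption; real APIs return tool calls in a structured field):

```python
# A scripted stand-in for the real API call: ask for one tool, then finish.
def fake_llm(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 2, "b": 3}}
    return {"answer": "2 + 3 = 5"}

tools = {"add": lambda a, b: a + b}

def agent(prompt, llm=fake_llm):
    messages = [{"role": "user", "content": prompt}]
    while True:                     # the entire agent loop
        reply = llm(messages)
        if "answer" in reply:       # model is done: return final text
            return reply["answer"]
        result = tools[reply["tool"]](**reply["args"])  # execute the tool
        messages.append({"role": "tool", "content": str(result)})
```

Swap `fake_llm` for a real API call and this is the core of AgentExecutor.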
Conversation = Messages Array
ChatGPT remembers context because the conversation is a messages array. Build multi-turn chat from scratch and see how context windows actually work.
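A sketch of the idea: each turn appends to one list, and that whole list is what gets sent to the model (the `reply_fn` parameter and the 4-chars-per-token estimate are assumptions for illustration):

```python
# Multi-turn chat is just an ever-growing list of messages.
history = [{"role": "system", "content": "You are helpful."}]

def chat_turn(user_text, reply_fn):
    """Append the user turn, get a reply, append it too — that's 'memory'."""
    history.append({"role": "user", "content": user_text})
    reply = reply_fn(history)       # a real client would POST history here
    history.append({"role": "assistant", "content": reply})
    return reply

def context_chars(history):
    """The context window fills with the whole history, not just the last turn."""
    return sum(len(m["content"]) for m in history)
```

The model itself remembers nothing; the client re-sends the full history every turn, which is why long chats eventually hit the context window.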
State = Dict
Track structured data alongside LLM conversations with a plain Python dict. Build what LangGraph state channels do — no framework required.
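A sketch of the pattern (the keys here are invented for illustration):

```python
# Structured state lives in a plain dict next to the messages.
state = {"messages": [], "order_id": None, "steps": 0}

def update(state, key, value):
    """What a framework's 'state channel' amounts to: a keyed write."""
    state[key] = value
    return state

# The loop reads and writes state like any other dict.
update(state, "order_id", "A-1001")
state["steps"] += 1
```

Frameworks add merge rules and checkpointing on top, but the data structure underneath is this dict.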
Memory Across Runs
ChatGPT Memory persists facts across chats using key-value storage. Build persistent agent memory from scratch — what Mem0, Zep, and LangChain do under the hood.
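A minimal sketch of persistent key-value memory, using a JSON file (the path is arbitrary; real systems layer embeddings and recency scoring on top):

```python
import json
import os
import tempfile

# Arbitrary location for the memory file (assumption for this sketch).
MEMORY_PATH = os.path.join(tempfile.gettempdir(), "agent_memory.json")

def load_memory(path=MEMORY_PATH):
    """Read the fact store; an empty dict if no file exists yet."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {}

def remember(key, value, path=MEMORY_PATH):
    """Write one fact and persist the whole store back to disk."""
    memory = load_memory(path)
    memory[key] = value
    with open(path, "w") as f:
        json.dump(memory, f)

remember("favorite_language", "Python")
```

Load the file at the start of each run, inject the facts into the prompt, and the agent "remembers" across sessions.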
Policy = Guardrails
ChatGPT refuses harmful requests using input and output gates. Build AI guardrails in a few lines of Python — what Guardrails AI and NeMo Guardrails abstract away.
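The two gates in a sketch (the blocked-term list is a toy policy, not a real safety filter):

```python
# Guardrails are two gate functions wrapped around the model call.
BLOCKED = {"credit card number", "password"}   # toy policy (assumption)

def input_gate(prompt):
    """Refuse before spending a model call."""
    if any(term in prompt.lower() for term in BLOCKED):
        return False, "I can't help with that."
    return True, prompt

def output_gate(reply):
    """Redact policy-violating content before the user sees it."""
    for term in BLOCKED:
        reply = reply.replace(term, "[redacted]")
    return reply

def guarded_call(prompt, llm):
    ok, text = input_gate(prompt)
    if not ok:
        return text
    return output_gate(llm(text))
```

Production guardrails swap the keyword check for classifiers, but the architecture is the same: gate in, gate out.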
Self-Scheduling
ChatGPT deep research spawns sub-tasks autonomously. Build a self-scheduling agent with a task queue and budget — what CrewAI task delegation does.
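A sketch of the queue-plus-budget pattern (`toy_plan` is a scripted stand-in for an LLM planner):

```python
from collections import deque

def run_agent(root_task, plan, budget=5):
    """plan(task) -> (result, subtasks). Budget caps total steps."""
    queue = deque([root_task])
    results = []
    while queue and budget > 0:
        task = queue.popleft()
        budget -= 1                  # every step spends budget
        result, subtasks = plan(task)
        results.append(result)
        queue.extend(subtasks)       # the agent schedules its own follow-ups
    return results

def toy_plan(task):
    # A real planner would call the LLM; this spawns two fixed sub-tasks.
    if task == "research":
        return "outline", ["read A", "read B"]
    return f"done: {task}", []
```

The budget is what keeps an autonomous agent from scheduling itself forever.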
The Whole Thing
Build a complete AI agent in ~60 lines of Python: tools, memory, guardrails, self-scheduling. Everything LangChain, CrewAI, and AutoGen do — no framework needed.
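A compact end-to-end sketch of how the pieces fit: tools, memory, a guardrail, and a budgeted loop, with a scripted fake LLM so it runs offline (all names and the reply format are illustrative assumptions, not the course's exact code):

```python
tools = {"add": lambda a, b: a + b}
memory = {}                                  # persists across run() calls

def fake_llm(messages):
    # Stand-in for the real API: request one tool call, then finish.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "add", "args": {"a": 40, "b": 2}}
    return {"answer": "The total is 42."}

def input_gate(prompt):
    return "password" not in prompt.lower()  # toy guardrail

def run(prompt, llm=fake_llm, max_steps=5):
    if not input_gate(prompt):               # guardrail: refuse bad input
        return "I can't help with that."
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):               # budgeted agent loop
        reply = llm(messages)
        if "answer" in reply:
            memory["last_answer"] = reply["answer"]   # persist a fact
            return reply["answer"]
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    return "Step budget exhausted."
```

Replace `fake_llm` with a real API call and this skeleton is the whole agent the lessons build.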
Prefer interactive?
These lessons also run as interactive exercises — write and execute Python in your browser with no setup.
Start the interactive course