A Tour of Agents / Lesson 5 of 9
State = Dict
How Claude shows "Searched 5 files" — structured tracking alongside the chat.
Framework parallel: LangGraph state channels, Redux store — structured data alongside the conversation.
You know how Claude shows "Searched 5 files" or ChatGPT shows "Analyzed data" with a little summary? That's not from the messages — it's state tracked alongside the conversation.
The messages array is the raw tape. But you often need structured answers: *which tools ran? how many turns? what were the results?* That's state — a dict updated inside the loop, returned alongside the answer.
Framework parallel: LangGraph calls these "state channels" with typed reducers. Strip the abstraction: it's a dict updated in a loop.
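To see that "channels with reducers" really is just a dict updated in a loop, here's a minimal sketch of the pattern — the names (`reducers`, `apply_update`) are illustrative, not LangGraph's actual API:

```python
import operator

# A "channel" is just a key in a dict; a "reducer" says how updates merge.
reducers = {
    "turns": operator.add,          # counters accumulate
    "tool_calls": operator.concat,  # lists append
}

def apply_update(state, update):
    """Merge an update dict into state using each key's reducer."""
    out = dict(state)
    for key, value in update.items():
        reduce = reducers.get(key, lambda old, new: new)  # default: overwrite
        out[key] = reduce(out[key], value) if key in out else value
    return out

state = {"turns": 0, "tool_calls": []}
state = apply_update(state, {"turns": 1, "tool_calls": [{"tool": "add"}]})
state = apply_update(state, {"turns": 1, "tool_calls": [{"tool": "upper"}]})
# state["turns"] is now 2, and both tool calls are recorded in order.
```

The agent below skips even this indirection and mutates one dict directly — same idea, fewer moving parts.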
Step 1: Tools + ask_llm
Same ingredients as Lesson 3. No changes.
tools = {"add": lambda a, b: a + b, "upper": lambda text: text.upper()}

TOOL_DEFS = [
    {"type": "function", "function": {"name": "add", "description": "Add two numbers",
     "parameters": {"type": "object",
                    "properties": {"a": {"type": "number"}, "b": {"type": "number"}}}}},
    {"type": "function", "function": {"name": "upper", "description": "Uppercase text",
     "parameters": {"type": "object",
                    "properties": {"text": {"type": "string"}}}}},
]
async def ask_llm(messages):
    resp = await pyfetch(f"{LLM_BASE_URL}/chat/completions",
        method="POST",
        headers={"Authorization": f"Bearer {LLM_API_KEY}",
                 "Content-Type": "application/json"},
        body=json.dumps({"model": LLM_MODEL, "messages": messages, "tools": TOOL_DEFS}))
    return json.loads(await resp.string())["choices"][0]["message"]

Step 2: The loop with state tracking
Same loop as Lesson 3. One addition: a state dict that records every tool call and result as the loop runs. The agent returns state instead of just the answer string.
This gives you a structured audit trail — not just "the answer was 15", but *which tools ran, with what args, producing what results, in how many turns.*
async def agent(task, max_turns=5):
    state = {"turns": 0, "tool_calls": [], "results": []}
    messages = [
        {"role": "system", "content": "Use tools to answer. Be concise."},
        {"role": "user", "content": task},
    ]
    for turn in range(max_turns):
        state["turns"] += 1
        trace("llm_call", f"Turn {state['turns']}")
        msg = await ask_llm(messages)
        if not msg.get("tool_calls"):
            state["answer"] = msg.get("content", "")
            trace("agent_end", f"Done in {state['turns']} turns")
            return state
        messages.append(msg)
        for tc in msg["tool_calls"]:
            name = tc["function"]["name"]
            args = json.loads(tc["function"]["arguments"])
            result = tools[name](**args)
            state["tool_calls"].append({"tool": name, "args": args})
            state["results"].append(result)
            trace("tool_result", f"{name}({args}) → {result}")
            messages.append({"role": "tool", "tool_call_id": tc["id"], "content": str(result)})
    state["answer"] = "Max turns reached"
    return state

Try it
Try *"add 10 and 5, then uppercase hello"*. You'll see the full state: which tools ran, what they returned, how many turns it took. This is observability — you can log it, store it, debug with it.
result = await agent(USER_INPUT)
print(f">> {result['answer']}")
print(f"Tools used: {result['tool_calls']}")
print(f"Results: {result['results']}")
print(f"Turns: {result['turns']}")
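Because state is plain data, persisting it is one json.dumps away. A minimal sketch of an append-only audit log — the filename, the timestamp field, and both helper names are illustrative additions, and it assumes everything in state (tool results included) is JSON-serializable:

```python
import json, time

def log_run(state, path="agent_runs.jsonl"):
    """Append one agent run's state as a single JSON line."""
    record = {"ts": time.time(), **state}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def load_runs(path="agent_runs.jsonl"):
    """Read the log back for debugging or analysis."""
    with open(path) as f:
        return [json.loads(line) for line in f]
```

One line per run means you can grep it, tail it, or load it into a dataframe later — the simplest possible observability pipeline.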