A Tour of Agents / Lesson 1 of 9

The Agent Function

Every ChatGPT message is one HTTP POST. That's all an agent is.

[Diagram: the agent function, an HTTP POST carrying the system prompt and messages]

Framework parallel: LangChain's AgentExecutor, CrewAI's Agent, AutoGen's ConversableAgent — wrappers around one function.

An Agent is a Function

Every time you send a message in ChatGPT or Claude, here's what actually happens: your browser sends an HTTP POST to an API, and a response comes back. That's it. The fancy UI, the streaming text, the typing indicator — all cosmetics around one function call.

Strip away LangChain's AgentExecutor, CrewAI's Agent, AutoGen's ConversableAgent. At the bottom of every one: a function that sends an HTTP POST and returns the response.

That's what you'll build now.

Step 1: POST to the LLM

This is the raw call every SDK wraps, and exactly what happens when you hit Enter in ChatGPT. Three things to notice:

  • messages is an array: system sets behavior (like ChatGPT's "Custom Instructions"), user carries your input
  • The reply lives at choices[0].message.content
  • Everything else is HTTP boilerplate

    import json
    from pyodide.http import pyfetch  # the browser's fetch(), exposed to Python by Pyodide

    # trace(), LLM_BASE_URL, LLM_API_KEY, and LLM_MODEL are provided by the
    # lesson environment.
    SYSTEM = "You are a concise expert. Answer in 1-2 sentences max."

    async def ask_llm(message):
        trace("llm_call", f"Asking: {message}")
        resp = await pyfetch(f"{LLM_BASE_URL}/chat/completions",
            method="POST",
            headers={"Authorization": f"Bearer {LLM_API_KEY}",
                     "Content-Type": "application/json"},
            body=json.dumps({
                "model": LLM_MODEL,
                "messages": [
                    {"role": "system", "content": SYSTEM},
                    {"role": "user", "content": message},
                ]
            }))
        data = json.loads(await resp.string())
        return data["choices"][0]["message"]["content"]
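Outside the browser sandbox, the same POST can be built with nothing but the standard library. A minimal sketch, not lesson code: the URL, key, and model name below are placeholders, and the request is only constructed, never sent.

```python
import json
import urllib.request

def build_request(base_url, api_key, model, system, user):
    """Build (but do not send) the same POST that ask_llm() makes."""
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_request("https://api.example.com/v1", "sk-placeholder",
                    "some-model", "You are a concise expert.", "Hi")
print(req.get_method())  # POST

# The response comes back in this shape; the reply is always at
# choices[0].message.content.
sample = {"choices": [{"message": {"role": "assistant", "content": "Hello!"}}]}
print(sample["choices"][0]["message"]["content"])  # Hello!
```

Actually sending it is one more line (`urllib.request.urlopen(req)`); everything an SDK adds sits on top of this request.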

Step 2: Wrap it

The agent is the thinnest possible wrapper. String in, string out. Change the system prompt and the same input produces different behavior. This is why ChatGPT and Claude behave differently: different system prompts. That's all "prompt engineering" is.

    async def agent(message):
        trace("agent_start", f"Input: {message}")
        response = await ask_llm(message)
        trace("agent_end", f"Output: {response}")
        return response
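To see why the system prompt alone changes behavior, the wrapper can be parameterized by it. A sketch with assumed names (make_agent and the stub fake_llm are illustrations, not part of the lesson code):

```python
import asyncio

def make_agent(system, llm):
    """Return an agent() bound to one system prompt."""
    async def agent(message):
        return await llm(system, message)
    return agent

# Stub LLM so the sketch runs offline: it just echoes which persona answered.
async def fake_llm(system, message):
    return f"[{system}] {message}"

pirate = make_agent("You are a pirate.", fake_llm)
tutor = make_agent("You are a patient tutor.", fake_llm)

print(asyncio.run(pirate("hello")))  # [You are a pirate.] hello
print(asyncio.run(tutor("hello")))   # [You are a patient tutor.] hello
```

Same input, same code path, different output: the only variable is the system prompt.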

Try it

Send anything. Watch the diagram above: your message flows through agent() to the API and back. One function, one HTTP POST, one response.

    print(f">> {await agent(USER_INPUT)}")