Google ADK vs LangChain: Which Agent Framework to Use?

Google's Agent Development Kit (ADK) is an open-source framework for building multi-agent systems. LangChain is the most popular agent framework. Here is how they compare, and what the same patterns look like in plain Python.

By the numbers

Google ADK (github.com/google/adk-python)

GitHub Stars: 18.7k
Forks: 3.2k
Language: Python
License: Apache-2.0
Created: 2025-04-01
Created by: Google
Backed by: Google/Alphabet
Cloud/SaaS: Vertex AI
Production ready: Yes

LangChain (github.com/langchain-ai/langchain)

GitHub Stars: 132.3k
Forks: 21.8k
Language: Python
License: MIT
Created: 2022-10-17
Created by: Harrison Chase
Backed by: Sequoia Capital, Benchmark
Funding: $25M Series A (2023), $25M Series B (2024)
Weekly downloads: 3.5M
Cloud/SaaS: LangSmith (observability), LangServe (deployment)
Production ready: Yes
Used by: Notion, Elastic, Instacart

GitHub stats as of April 2026. Stars indicate community interest, not necessarily quality or fit for your use case.

| Concept | Google ADK | LangChain | Plain Python |
| --- | --- | --- | --- |
| Agent | LlmAgent class with model, instructions, and sub_agents list | AgentExecutor with LLMChain, PromptTemplate, OutputParser | A function that POSTs to /chat/completions and returns the response |
| Tools | FunctionTool, built-in tools (Search, Code Exec), third-party integrations | @tool decorator, StructuredTool, BaseTool class hierarchy | A dict of callables: tools = {"search": search_web} |
| Agent Loop | Runner.run() with automatic tool dispatch and sub-agent delegation | AgentExecutor.invoke() with internal iteration | A while loop: call LLM, check for tool_calls, execute, repeat |
| Multi-Agent | Hierarchical agent tree with root agent delegating to specialized sub-agents | | Functions calling other functions: research = agent(prompt, tools=research_tools) |
| Workflows | SequentialAgent, ParallelAgent, LoopAgent workflow primitives | | Sequential: call functions in order. Parallel: asyncio.gather(). Loop: while condition |
| Session | Session and State service with typed channels and persistence | | A dict passed between function calls: state = {"turns": 0, "context": []} |
| Conversation | | ConversationBufferMemory, ConversationSummaryMemory | A messages list that persists outside the function |
| State | | LangGraph state channels with typed reducers | A dict updated inside the loop: state["turns"] += 1 |
| Memory | | VectorStoreRetrieverMemory, ConversationEntityMemory | A dict injected into the system prompt, saved via a remember() tool |
| Guardrails | | OutputParser, PydanticOutputParser, custom validators | Two lists of lambda rules checked before and after the LLM call |

What both do in plain Python

Every concept in the table above — agent, tools, loop, memory, state — maps to a handful of Python primitives: a function, a dict, a list, and a while loop. Both Google ADK and LangChain wrap these primitives in their own class hierarchies and APIs. The underlying pattern is the same ~60 lines of code. The difference is how much ceremony each framework adds on top.
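For instance, the "memory" and "state" rows reduce to two dicts and two small functions. A minimal sketch (the names, the stored facts, and the prompt wording are all illustrative, not either framework's API):

```python
# "Memory" and "state" without a framework: two dicts and two functions.
memory = {"user_name": "Ada"}   # long-term memory: injected into the prompt
state = {"turns": 0}            # per-session state: updated inside the loop

def remember(key, value):
    # Exposed to the model as a tool so it can persist new facts.
    memory[key] = value
    return f"stored {key}={value}"

def system_prompt():
    # Inject memory into the system prompt on every turn.
    facts = "; ".join(f"{k}: {v}" for k, v in memory.items())
    return f"You are a helpful assistant. Known facts: {facts}"

state["turns"] += 1
remember("favorite_color", "green")
print(system_prompt())
```

Persistence is one more line: json.dump(memory, open(...)) at the end of a session.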

When to use Google ADK

ADK earns its complexity when you need multi-agent orchestration on Google Cloud with Vertex AI deployment. If you're using Gemini and need production-grade agent infrastructure, it's well-designed. For single-agent use cases or non-Google stacks, plain Python keeps things simpler.

What Google ADK does

Google ADK provides a code-first framework for building agents that can delegate work to other agents in a hierarchy. You define an LlmAgent with a model, instructions, tools, and optionally a list of sub-agents. The root agent decides when to hand off tasks to specialized children. ADK ships with workflow primitives — SequentialAgent runs steps in order, ParallelAgent fans out concurrently, LoopAgent repeats until a condition is met. The framework handles session management, state persistence, and streaming out of the box. It's optimized for Gemini models and Vertex AI but works with other providers. For teams already on Google Cloud, the deployment story is seamless: containerize your agent and deploy to Vertex AI Agent Engine or Cloud Run.

The plain Python equivalent

A hierarchical agent is just functions calling other functions. Your root agent calls the LLM, and if the response indicates a sub-task, you call a different function with its own system prompt and tool set. Workflow orchestration is equally straightforward: sequential is calling functions in order, parallel is asyncio.gather(), and looping is a while loop with a condition check. Session state is a dict you pass between calls and optionally serialize to disk or a database. The entire pattern — root agent, sub-agents, workflows, state — fits in about 80 lines of Python. No class hierarchies, no Runner abstraction, no Agent Engine. When your agent misbehaves, you read your functions instead of tracing through framework internals.
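A minimal sketch of that pattern, with the model call stubbed out (the prompts, agent names, and routing are illustrative, not ADK's API):

```python
import asyncio

def llm(system, prompt):
    # Stub: a real implementation would POST to a chat completions endpoint.
    return f"[{system}] {prompt}"

def make_agent(system_prompt):
    # Each "agent" is just a function closing over its own system prompt.
    def agent(task):
        return llm(system_prompt, task)
    return agent

researcher = make_agent("You research topics.")
writer = make_agent("You write summaries.")

def root(task):
    # Root agent delegating to sub-agents: a sequential workflow is
    # nothing more than calling functions in order.
    notes = researcher(task)
    return writer(notes)

async def parallel(tasks):
    # Parallel workflow: fan the same plain functions out with gather().
    return await asyncio.gather(
        *(asyncio.to_thread(researcher, t) for t in tasks)
    )

print(root("quantum error correction"))
print(asyncio.run(parallel(["topic a", "topic b"])))
```

A loop workflow is the same idea with a while loop around one of these calls and a condition check on the result.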

Full Google ADK comparison →

When to use LangChain

LangChain adds value when you need production integrations (vector stores, specific LLM providers, deployment tooling). But if you want to understand what's happening — or your use case is straightforward — the plain Python version is easier to debug, modify, and reason about.

What LangChain does

LangChain provides a unifying interface across LLM providers, a class hierarchy for tools and memory, and orchestration via AgentExecutor and LangGraph. The core value proposition is interchangeable components: swap OpenAI for Anthropic by changing one class, plug in a vector store for retrieval, add memory without rewriting your loop. It also ships with dozens of integrations — document loaders, text splitters, embedding models, vector stores — that save you from writing boilerplate HTTP calls. For teams that need to compose many integrations quickly, this catalog is genuinely useful. The tradeoff is that you inherit a large dependency tree and a set of abstractions that sit between you and the actual API calls.
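In plain Python, "swap providers by changing one class" reduces to a configurable base URL, since most providers expose an OpenAI-compatible endpoint. A sketch under that assumption (the URLs and model names are illustrative; the network call is separated from the pure request builder):

```python
import json
import urllib.request

def build_request(messages, model, base_url, api_key):
    # Pure request builder: swapping providers means changing the
    # base_url and model strings, not a class hierarchy.
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def chat(messages, model, base_url, api_key):
    # Network call; assumes an OpenAI-compatible response shape.
    req = build_request(messages, model, base_url, api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Same function, different provider: only the config strings change.
req = build_request([{"role": "user", "content": "hi"}],
                    "gpt-4o-mini", "https://api.openai.com/v1", "sk-...")
```

What this wrapper does not give you is LangChain's integration catalog (loaders, splitters, vector stores); that is the real trade being made.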

The plain Python equivalent

Every LangChain abstraction maps to a small piece of plain Python. AgentExecutor is a while loop that calls the LLM, checks for tool_calls in the response, executes the matching function from a tools dict, appends the result to a messages array, and repeats. Memory is a dict you inject into the system prompt. Output parsing is a function that validates the LLM's response before returning it. The entire agent — tool dispatch, conversation history, state tracking, guardrails — fits in about 60 lines of Python. No base classes, no decorators, no chain composition. Just a function, a dict, a list, and a loop. When something breaks, you read your 60 lines instead of navigating a class hierarchy.
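Sketched out with the model call stubbed (the response shape mimics tool_calls-style output; the tool, the guardrail rules, and the two-turn script are illustrative):

```python
# Plain-Python AgentExecutor equivalent: a loop, a dict, and two rule lists.

def llm(messages):
    # Stub standing in for a chat-completions call: first turn requests a
    # tool, second turn (after a tool result is present) answers.
    if any(m["role"] == "tool" for m in messages):
        return {"content": "Paris", "tool_calls": []}
    return {"content": None,
            "tool_calls": [{"name": "search", "args": {"q": "capital of France"}}]}

tools = {"search": lambda q: f"top hit for {q!r}: Paris"}

# Guardrails: two lists of rules checked before and after the LLM call.
pre_checks = [lambda msgs: len(msgs) < 50]             # cap conversation length
post_checks = [lambda text: "DROP TABLE" not in text]  # naive output filter

def run(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        assert all(check(messages) for check in pre_checks), "pre-check failed"
        reply = llm(messages)
        if not reply["tool_calls"]:
            text = reply["content"]
            assert all(check(text) for check in post_checks), "post-check failed"
            return text
        for call in reply["tool_calls"]:
            # Tool dispatch: look up the callable in the dict and run it.
            result = tools[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": result})

print(run("What is the capital of France?"))  # -> Paris
```

Swap the stub for a real HTTP call and this is the whole executor; everything else in the section above (memory injection, output parsing) slots into the same loop.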

Full LangChain comparison →

Or build your own in 60 lines

Both Google ADK and LangChain implement the same 8 patterns. An agent is a function. Tools are a dict. The loop is a while loop. The whole thing composes in ~60 lines of Python.

No framework. No dependencies. No opinions. Just the code.

Build it from scratch →