LangChain vs Semantic Kernel: Which Agent Framework to Use?
LangChain is the most popular agent framework. Semantic Kernel is Microsoft's enterprise SDK for building AI agents. Here is how they compare — and what the same patterns look like in plain Python.
By the numbers

| | LangChain | Semantic Kernel |
|---|---|---|
| GitHub stars | 132.3k | 27.6k |
| Forks | 21.8k | 4.5k |
| Primary language | Python | C# |
| License | MIT | MIT |
| Created | 2022-10-17 | 2023-02-27 |
| Creator | Harrison Chase | Microsoft |
| Investors | Sequoia Capital, Benchmark | — |
| Funding | $25M Series A (2023), $25M Series B (2024) | — |
| Downloads | 3.5M | — |
| Ecosystem | LangSmith (observability), LangServe (deployment) | — |
| Used by | Notion, Elastic, Instacart | — |
| Repository | github.com/langchain-ai/langchain | — |

GitHub stats as of April 2026. Stars indicate community interest, not necessarily quality or fit for your use case.
| Concept | LangChain | Semantic Kernel | Plain Python |
|---|---|---|---|
| Agent | AgentExecutor with LLMChain, PromptTemplate, OutputParser | ChatCompletionAgent with Kernel, instructions, and service config | A function that POSTs to /chat/completions and returns the response |
| Tools / Plugins | @tool decorator, StructuredTool, BaseTool class hierarchy | KernelPlugin with @kernel_function decorators, typed parameters | A dict of callables: tools = {"add": lambda a, b: a + b} |
| Agent Loop / Orchestration | AgentExecutor.invoke() with internal iteration | Kernel.invoke() with plugin resolution and filter pipeline | A while loop: call LLM, check for tool_calls, execute, repeat |
| Conversation | ConversationBufferMemory, ConversationSummaryMemory | — | A messages list that persists outside the function |
| State | LangGraph state channels with typed reducers | — | A dict updated inside the loop: state["turns"] += 1 |
| Memory | VectorStoreRetrieverMemory, ConversationEntityMemory | SemanticTextMemory with embeddings and vector stores | A dict injected into the system prompt, saved via a remember() tool |
| Guardrails | OutputParser, PydanticOutputParser, custom validators | — | Two lists of lambda rules checked before and after the LLM call |
| Planning | — | StepwisePlanner, HandlebarsPlanner for multi-step decomposition | A system prompt that says 'break this into steps' — the LLM plans natively |
| Multi-Language | — | C#, Python, Java SDKs with shared abstractions | The HTTP API is the same in every language — just POST JSON |
What both do in plain Python
Every concept in the table above — agent, tools, loop, memory, state — maps to a handful of Python primitives: a function, a dict, a list, and a while loop. Both LangChain and Semantic Kernel wrap these primitives in their own class hierarchies and APIs. The underlying pattern is the same ~60 lines of code. The difference is how much ceremony each framework adds on top.
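As a concrete sketch of that mapping (every name below is illustrative, not taken from either framework's API):

```python
# The primitives both frameworks wrap.
tools = {"add": lambda a, b: a + b}          # "Tools" / "Plugins": a dict of callables
messages = [{"role": "system", "content": "You are helpful."}]  # "Conversation": a list
state = {"turns": 0}                         # "State": a dict updated inside the loop
memory = {"user_name": "Ada"}                # "Memory": a dict injected into the prompt

# "Guardrails": two lists of rules, checked before and after the LLM call
input_rules = [lambda text: len(text) < 4_000]
output_rules = [lambda text: "DROP TABLE" not in text]
```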
When to use LangChain
LangChain adds value when you need production integrations (vector stores, specific LLM providers, deployment tooling). But if you want to understand what's happening — or your use case is straightforward — the plain Python version is easier to debug, modify, and reason about.
What LangChain does
LangChain provides a unifying interface across LLM providers, a class hierarchy for tools and memory, and orchestration via AgentExecutor and LangGraph. The core value proposition is interchangeable components: swap OpenAI for Anthropic by changing one class, plug in a vector store for retrieval, add memory without rewriting your loop. It also ships with dozens of integrations — document loaders, text splitters, embedding models, vector stores — that save you from writing boilerplate HTTP calls. For teams that need to compose many integrations quickly, this catalog is genuinely useful. The tradeoff is that you inherit a large dependency tree and a set of abstractions that sit between you and the actual API calls.
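A minimal sketch of that interchangeable-component claim, assuming the langchain-openai and langchain-anthropic packages are installed (the model names are illustrative):

```python
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

# Swapping providers means changing one class; the calling code is unchanged.
llm = ChatOpenAI(model="gpt-4o-mini")
# llm = ChatAnthropic(model="claude-3-5-sonnet-latest")  # drop-in replacement

response = llm.invoke("Summarize the agent loop in one sentence.")
print(response.content)
```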
The plain Python equivalent
Every LangChain abstraction maps to a small piece of plain Python. AgentExecutor is a while loop that calls the LLM, checks for tool_calls in the response, executes the matching function from a tools dict, appends the result to a messages array, and repeats. Memory is a dict you inject into the system prompt. Output parsing is a function that validates the LLM's response before returning it. The entire agent — tool dispatch, conversation history, state tracking, guardrails — fits in about 60 lines of Python. No base classes, no decorators, no chain composition. Just a function, a dict, a list, and a loop. When something breaks, you read your 60 lines instead of navigating a class hierarchy.
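A runnable sketch of that loop, assuming an OpenAI-compatible /chat/completions endpoint and an OPENAI_API_KEY environment variable (the tool and function names are illustrative):

```python
import json
import os

import requests

API_URL = "https://api.openai.com/v1/chat/completions"  # any OpenAI-compatible endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

tools = {"add": lambda a, b: a + b}  # tool dispatch is a dict lookup
tool_specs = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two numbers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    },
}]

def run_agent(user_input, messages):
    messages.append({"role": "user", "content": user_input})
    while True:  # the whole "AgentExecutor"
        resp = requests.post(API_URL, headers=HEADERS, json={
            "model": "gpt-4o-mini", "messages": messages, "tools": tool_specs,
        }).json()
        msg = resp["choices"][0]["message"]
        messages.append(msg)
        if not msg.get("tool_calls"):        # no tools requested: we're done
            return msg["content"]
        for call in msg["tool_calls"]:       # execute each requested tool
            args = json.loads(call["function"]["arguments"])
            result = tools[call["function"]["name"]](**args)
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": str(result)})

messages = [{"role": "system", "content": "You are a helpful assistant."}]
print(run_agent("What is 2 + 3?", messages))
```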
When to use Semantic Kernel
Semantic Kernel earns its complexity in enterprise environments with Azure OpenAI, .NET backends, and existing Microsoft infrastructure. But the core agent pattern — LLM call, tool dispatch, loop — is identical to what you can build in 60 lines of Python.
What Semantic Kernel does
Semantic Kernel is Microsoft's SDK for building AI-powered applications. The central object is the Kernel — it holds your AI service connections, plugins, and configuration. Plugins are collections of KernelFunctions (decorated Python/C# methods) that the LLM can call as tools. Planners like StepwisePlanner break complex goals into multi-step plans, choosing which plugins to invoke at each step. The SDK provides deep integration with Azure OpenAI, including managed identity auth, content filtering, and deployment management. It also ships memory connectors for vector stores (Azure AI Search, Qdrant, Pinecone) and supports filters — middleware that runs before and after each function invocation. For teams already on Azure with .NET backends, it fits naturally into the existing stack.
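A minimal sketch of the plugin pattern, following the Python SDK's 1.x releases as best understood — decorator and method names have shifted across versions, and MathPlugin is illustrative:

```python
from semantic_kernel import Kernel
from semantic_kernel.functions import kernel_function

class MathPlugin:
    """A plugin is a class whose decorated methods become LLM-callable tools."""

    @kernel_function(description="Add two numbers.")
    def add(self, a: float, b: float) -> float:
        return a + b

# The Kernel holds service connections and a registry of plugins.
kernel = Kernel()
kernel.add_plugin(MathPlugin(), plugin_name="math")
```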
The plain Python equivalent
The Kernel is a config object that holds your API key and a dict of tools. A KernelFunction is a regular function in that dict. The Planner is a system prompt instruction — tell the LLM to break the task into steps and it will, no planner class needed. Memory is a list of strings you embed and search, or just a dict you inject into the prompt. Orchestration is the same while loop every agent uses: call the LLM, check if the response has tool_calls, look up the function in your tools dict, call it, append the result, repeat. The filter pipeline is a try/except around your function calls. The entire agent — including plugin dispatch, planning, and memory — is about 60 lines. No Kernel object, no plugin registry, no planner hierarchy.
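A sketch of two of those equivalences — the planner as a prompt, the filter pipeline as a try/except — assuming the tools dict and loop from the LangChain section above (all names illustrative):

```python
# "Planner": a system prompt instruction. The LLM decomposes the task natively.
PLANNER_PROMPT = (
    "Break the user's goal into numbered steps before acting. "
    "Use one tool call per step, and revise the plan if a step fails."
)

# "Filter pipeline": a try/except around each tool dispatch.
def invoke_tool(tools, name, args):
    try:
        # pre-invocation "filter": validate the call before running it
        if name not in tools:
            return f"Unknown tool: {name}"
        result = tools[name](**args)
        # post-invocation "filter": shape the result before it reaches the LLM
        return str(result)
    except Exception as exc:
        return f"Tool {name} failed: {exc}"
```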
Or build your own in 60 lines
Both LangChain and Semantic Kernel implement the same 8 patterns. An agent is a function. Tools are a dict. The loop is a while loop. The whole thing composes in ~60 lines of Python.
No framework. No dependencies. No opinions. Just the code.
Build it from scratch →