Vercel AI SDK vs Mastra vs LangChain.js: Which TypeScript AI Framework Should You Use
Monday 23/03/2026 · 11 min read

You're about to build an AI feature in TypeScript and there are at least three serious frameworks competing for your attention. Vercel AI SDK has the momentum, LangChain.js has the ecosystem, and Mastra is the new kid promising better DX for agent workflows. Picking the wrong one means either migrating later or fighting the framework when your requirements get real.
The only way to actually compare these is to build the same thing in all three. So that's what we'll do: a tool-calling agent that searches a knowledge base, decides which tool to use, and streams the response back. Same feature, three frameworks, honest tradeoffs.
The setup: what we're building
A simple agent that has two tools: one to search a knowledge base (simulated vector search) and one to get the current weather. The agent decides which tool to call based on the user's question, executes it, and streams a final answer.
This covers the core framework surface area: model integration, tool definitions, agent loops, and streaming.
All three examples use Claude as the LLM. Note that none of them imports @anthropic-ai/sdk directly — each framework brings its own Anthropic provider package — so the only shared prerequisite is an ANTHROPIC_API_KEY environment variable, which each provider reads automatically.
Vercel AI SDK
pnpm install ai @ai-sdk/anthropic zod
The AI SDK uses a provider pattern — you pick a model from a provider package and pass it to framework functions. Tool definitions use Zod schemas directly, which means your tool parameters are type-safe at compile time.
// src/agents/vercel-agent.ts
import { generateText, tool } from 'ai'
import { anthropic } from '@ai-sdk/anthropic'
import { z } from 'zod'

const searchKnowledgeBase = tool({
  description: 'Search the knowledge base for relevant information',
  parameters: z.object({
    query: z.string().describe('The search query'),
  }),
  execute: async ({ query }) => {
    // Replace with your actual vector search
    const results = [
      { title: 'Deployment Guide', content: `Steps for deploying: ${query}` },
      { title: 'FAQ', content: `Common questions about: ${query}` },
    ]
    return JSON.stringify(results)
  },
})

const getWeather = tool({
  description: 'Get current weather for a location',
  parameters: z.object({
    location: z.string().describe('City name'),
  }),
  execute: async ({ location }) => {
    return JSON.stringify({ location, temp: 22, condition: 'sunny' })
  },
})

async function runAgent(userMessage: string) {
  const result = await generateText({
    model: anthropic('claude-sonnet-4-20250514'),
    tools: { searchKnowledgeBase, getWeather },
    maxSteps: 5,
    system: 'You are a helpful assistant. Use tools when needed to answer questions.',
    prompt: userMessage,
  })

  console.log(result.text)
  console.log('Steps:', result.steps.length)
  console.log('Tool calls:', result.steps.flatMap(s => s.toolCalls).map(t => t.toolName))
}

runAgent('What are the deployment steps for our app?')
The key thing here is maxSteps. The AI SDK handles the agent loop for you — it calls the model, executes any tool calls, feeds results back, and repeats until the model responds with text or hits the step limit. You don't write the loop yourself.
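To make the step mechanics concrete, here's a hand-rolled sketch of the loop the SDK runs for you, with a stubbed model and tool. This is the idea, not the SDK's actual implementation — `callModel` and `executeTool` are fakes standing in for the real API calls:

```typescript
type ToolCall = { toolName: string; args: Record<string, unknown> }
type ModelReply = { text?: string; toolCalls?: ToolCall[] }

// Stubbed model: asks for a tool first, then answers once it has results.
function callModel(history: string[]): ModelReply {
  if (!history.some(m => m.startsWith('tool:'))) {
    return { toolCalls: [{ toolName: 'searchKnowledgeBase', args: { query: 'deploy' } }] }
  }
  return { text: 'Here are the deployment steps.' }
}

// Stubbed tool execution.
function executeTool(call: ToolCall): string {
  return `results for ${JSON.stringify(call.args)}`
}

// The shape of the loop generateText runs when tools are present:
// call the model, execute tool calls, feed results back, repeat.
function runLoop(prompt: string, maxSteps: number): { text: string; steps: number } {
  const history = [`user:${prompt}`]
  for (let step = 1; step <= maxSteps; step++) {
    const reply = callModel(history)
    if (reply.text !== undefined) return { text: reply.text, steps: step }
    for (const call of reply.toolCalls ?? []) {
      history.push(`tool:${executeTool(call)}`) // results go back into context
    }
  }
  return { text: '', steps: maxSteps } // hit the step limit without a final answer
}

console.log(runLoop('What are the deployment steps for our app?', 5))
// → { text: 'Here are the deployment steps.', steps: 2 }
```

One model call to request the tool, one to produce text: two steps. maxSteps is the cap on that iteration count, which is why 5 is plenty for a two-tool agent.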
For streaming, swap generateText for streamText and you get a ReadableStream you can pipe directly to a Response in a Next.js route handler. The framework was designed for this.
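What streaming boils down to is worth seeing once without the framework: a ReadableStream of encoded text chunks handed to a Response. streamText produces that stream for you; in this sketch the chunks are faked so the plumbing is visible:

```typescript
// Fake "model output" as a stream of encoded text chunks — the same
// shape a streaming LLM response has on the wire.
function fakeModelStream(chunks: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder()
  return new ReadableStream({
    start(controller) {
      for (const c of chunks) controller.enqueue(encoder.encode(c))
      controller.close()
    },
  })
}

// In a route handler you'd return this Response directly; the browser
// receives the chunks as they arrive rather than waiting for the whole text.
const res = new Response(fakeModelStream(['Deploy ', 'with ', 'pnpm.']), {
  headers: { 'Content-Type': 'text/plain; charset=utf-8' },
})

res.text().then(text => console.log(text)) // → Deploy with pnpm.
```

The framework's value is everything around this: tool-call events interleaved with text, backpressure, and the client hooks that reassemble the chunks into UI state.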
What's good: Minimal boilerplate. The Zod-based tool definitions are clean. Streaming to React is first-class — useChat and useCompletion hooks handle the client side. If you're in Next.js, this is the path of least resistance.
What's not: The abstraction is opinionated. If you need custom retry logic, token counting mid-stream, or want to inspect raw API responses, you'll be fighting the framework. Provider support depends on community packages — if your model doesn't have an @ai-sdk/* package, you're writing an adapter.
Mastra
pnpm install @mastra/core @ai-sdk/anthropic zod
Mastra positions itself as a framework for building AI agents and workflows. It has a different mental model: you define agents as persistent objects with tools, memory, and workflow capabilities.
// src/agents/mastra-agent.ts
import { Agent, createTool } from '@mastra/core'
import { anthropic } from '@ai-sdk/anthropic'
import { z } from 'zod'

const searchKnowledgeBase = createTool({
  id: 'search-knowledge-base',
  description: 'Search the knowledge base for relevant information',
  inputSchema: z.object({
    query: z.string().describe('The search query'),
  }),
  outputSchema: z.object({
    results: z.array(z.object({
      title: z.string(),
      content: z.string(),
    })),
  }),
  execute: async ({ context }) => {
    const results = [
      { title: 'Deployment Guide', content: `Steps for deploying: ${context.query}` },
      { title: 'FAQ', content: `Common questions about: ${context.query}` },
    ]
    return { results }
  },
})

const getWeather = createTool({
  id: 'get-weather',
  description: 'Get current weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City name'),
  }),
  outputSchema: z.object({
    location: z.string(),
    temp: z.number(),
    condition: z.string(),
  }),
  execute: async ({ context }) => {
    return { location: context.location, temp: 22, condition: 'sunny' }
  },
})

const agent = new Agent({
  name: 'support-agent',
  instructions: 'You are a helpful assistant. Use tools when needed to answer questions.',
  model: anthropic('claude-sonnet-4-20250514'),
  tools: { searchKnowledgeBase, getWeather },
})

async function runAgent(userMessage: string) {
  const response = await agent.generate(userMessage)
  console.log(response.text)
}

runAgent('What are the deployment steps for our app?')
Mastra's tools have both input and output schemas, which means the framework validates what your tool returns — not just what it receives. That's a nice touch for production code where a tool might return unexpected shapes.
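To see what that buys you in miniature, here's a hand-rolled validator standing in for the check Mastra generates from outputSchema — a malformed tool result fails loudly instead of silently confusing the model:

```typescript
// The shape the getWeather output schema promises.
type WeatherOutput = { location: string; temp: number; condition: string }

// Hand-rolled stand-in for schema validation of a tool's return value.
function validateWeatherOutput(value: unknown): WeatherOutput {
  const v = value as Partial<WeatherOutput>
  if (
    typeof v?.location !== 'string' ||
    typeof v?.temp !== 'number' ||
    typeof v?.condition !== 'string'
  ) {
    throw new Error('tool returned an unexpected shape')
  }
  return v as WeatherOutput
}

console.log(validateWeatherOutput({ location: 'Oslo', temp: 22, condition: 'sunny' }))
// A tool that drops a field is caught before the agent sees the result:
// validateWeatherOutput({ location: 'Oslo' }) // throws
```

With Zod schemas you get this check for free, plus the inferred TypeScript types on both sides of the tool boundary.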
The agent is a persistent object you can reuse across requests. Mastra also has a workflow system for multi-step pipelines where you need conditional logic, branching, or human-in-the-loop approvals beyond what a simple agent loop provides.
What's good: Output schemas on tools catch bugs early. The workflow engine is genuinely useful for complex multi-step processes — think "if the search returns nothing, try a different strategy, then summarize." Built-in support for agent memory and evaluation. The framework is thinking about production concerns that others skip.
What's not: The ecosystem is young. Documentation has gaps, and you'll be reading source code when the docs don't cover your use case. The abstraction layers add concepts you need to learn (syncs, workflows, evaluations) even if you just want a simple agent. It also uses @ai-sdk/anthropic under the hood for model providers, so you're taking a dependency on the Vercel AI SDK anyway.
LangChain.js
pnpm install @langchain/core @langchain/anthropic @langchain/langgraph zod
LangChain is the most established framework here. It has the largest ecosystem of integrations — vector stores, document loaders, retrievers, output parsers — but it also carries the most historical baggage.
// src/agents/langchain-agent.ts
import { ChatAnthropic } from '@langchain/anthropic'
import { DynamicStructuredTool } from '@langchain/core/tools'
import { createReactAgent } from '@langchain/langgraph/prebuilt'
import { z } from 'zod'

const searchKnowledgeBase = new DynamicStructuredTool({
  name: 'search_knowledge_base',
  description: 'Search the knowledge base for relevant information',
  schema: z.object({
    query: z.string().describe('The search query'),
  }),
  func: async ({ query }) => {
    const results = [
      { title: 'Deployment Guide', content: `Steps for deploying: ${query}` },
      { title: 'FAQ', content: `Common questions about: ${query}` },
    ]
    return JSON.stringify(results)
  },
})

const getWeather = new DynamicStructuredTool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  schema: z.object({
    location: z.string().describe('City name'),
  }),
  func: async ({ location }) => {
    return JSON.stringify({ location, temp: 22, condition: 'sunny' })
  },
})

async function runAgent(userMessage: string) {
  const model = new ChatAnthropic({
    model: 'claude-sonnet-4-20250514',
  })

  const agent = createReactAgent({
    llm: model,
    tools: [searchKnowledgeBase, getWeather],
  })

  const result = await agent.invoke({
    messages: [{ role: 'user', content: userMessage }],
  })

  const lastMessage = result.messages[result.messages.length - 1]
  console.log(lastMessage.content)
}

runAgent('What are the deployment steps for our app?')
LangChain recently shifted its agent story to LangGraph, which uses a graph-based execution model. The createReactAgent function is a pre-built graph that implements the standard ReAct pattern. If you need something custom, you build your own graph with nodes and edges.
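The graph mental model is easy to internalize with a toy version: named nodes transform shared state, and conditional edges decide where control goes next. This is just the idea, not LangGraph's API — node names and the state shape here are invented for illustration:

```typescript
// Shared state threaded through the graph.
type State = { messages: string[]; done: boolean }
type Node = (s: State) => State

// Two nodes: the "agent" decides it's done once tool results are in
// scope; the "tools" node appends a result. Both are pure stubs.
const nodes: Record<string, Node> = {
  agent: s => ({ ...s, messages: [...s.messages, 'model output'], done: s.messages.length > 1 }),
  tools: s => ({ ...s, messages: [...s.messages, 'tool result'] }),
}

// Conditional edge: after 'agent', stop if done, else route to 'tools';
// 'tools' always loops back to 'agent'. This is the ReAct cycle as a graph.
function nextNode(current: string, s: State): string | null {
  if (current === 'agent') return s.done ? null : 'tools'
  return 'agent'
}

// Execute nodes until an edge returns null.
function runGraph(entry: string, state: State): State {
  let node: string | null = entry
  while (node) {
    state = nodes[node](state)
    node = nextNode(node, state)
  }
  return state
}

console.log(runGraph('agent', { messages: ['user question'], done: false }))
```

Custom LangGraph graphs generalize this pattern: more nodes, edges that branch on state, and checkpoints so a run can pause for human approval and resume later.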
What's good: The integration ecosystem is unmatched. Need to load PDFs, connect to Pinecone, parse HTML, or use a specific embedding model? There's probably a LangChain integration for it. LangGraph's graph model is powerful for complex agent architectures — conditional routing, parallel tool execution, sub-agents. LangSmith provides tracing and evaluation out of the box.
What's not: The abstraction overhead is real. You're importing from @langchain/core, @langchain/anthropic, and @langchain/langgraph for a basic agent. The class hierarchy is deep — understanding what DynamicStructuredTool does vs StructuredTool vs Tool requires reading docs that aren't always clear. Bundle size is significantly larger than the other two options. And the API has changed enough times that Stack Overflow answers from six months ago might not work.
The comparison that matters
Here's what actually differs when you're making a decision:
Developer experience
Vercel AI SDK wins for simplicity. Tool definitions are functions, not classes. The streaming primitives are elegant. If you're building a Next.js app with a chat interface, you can go from zero to streaming in under 50 lines.
Mastra is in the middle. More structure than the AI SDK, but that structure pays off when your agent gets complex. The workflow engine is something the others don't have at the framework level.
LangChain.js has the steepest learning curve. The class-based API and deep import paths feel heavy compared to the functional style of the other two. But once you learn the patterns, the ecosystem makes complex integrations faster.
Streaming
Vercel AI SDK was built for streaming. streamText, streamObject, the React hooks — it's the core use case. Streaming to a Next.js UI is seamless.
Mastra supports streaming through the AI SDK's primitives (since it uses them internally). It works, but it's not as polished on the client integration side.
LangChain.js supports streaming via LangGraph's .stream() and .streamEvents() methods. It works but requires more manual handling on the frontend. There's no built-in React integration — you're wiring up EventSource or ReadableStream yourself.
Production readiness
Vercel AI SDK is solid for request-response AI features. It's less opinionated about observability, caching, or evaluation — you bring your own solutions.
Mastra is explicitly targeting production use cases. Built-in evaluation, agent memory, and workflow persistence are things you'd otherwise build yourself or bolt on from separate libraries.
LangChain.js has LangSmith for tracing and evaluation, which is genuinely good. The downside is it's a separate paid service. The framework itself is production-tested at scale by many companies, but the frequent API changes mean upgrades require attention.
Bundle size and dependencies
This matters more than people think, especially for serverless deployments:
Vercel AI SDK: Minimal. The core package is small, and you only install the provider you need.
Mastra: Medium. It pulls in more dependencies for its workflow and memory systems, even if you don't use them.
LangChain.js: Large. A basic agent setup pulls in several packages, and transitive dependencies add up. This can matter for cold start times in serverless environments.
When to use which
Choose Vercel AI SDK when:
- You're building a Next.js app with streaming AI features
- You want the simplest possible setup for tool-calling agents
- Your team values small bundle size and minimal abstractions
- You don't need complex multi-step workflows
Choose Mastra when:
- You're building agents with complex workflows (conditional logic, branching, human-in-the-loop)
- You want built-in evaluation and memory without bolting on separate services
- You're comfortable with a newer framework that's still filling in documentation gaps
- Your agent needs to persist state across conversations
Choose LangChain.js when:
- You need integrations with specific vector stores, document loaders, or retrieval systems
- You're building a complex agent architecture with sub-agents and custom graph execution
- You want LangSmith's tracing and evaluation ecosystem
- Your team already knows LangChain from Python and wants a familiar API
The honest take
If you're building a typical AI feature in a Next.js app — chatbot, summarization, structured extraction — Vercel AI SDK is the right default. It's the simplest, the smallest, and it handles 80% of use cases with minimal code.
If your use case is genuinely complex — multi-agent orchestration, stateful workflows, production evaluation — look at Mastra or LangChain.js depending on whether you prefer a newer opinionated framework or a mature ecosystem with more integration options.
The worst choice is picking a framework "just in case" you'll need its advanced features. Start simple. You can always add complexity. You can rarely remove it.
One thing I'd avoid: mixing frameworks. I've seen codebases that use LangChain for RAG and the AI SDK for streaming in the same app. It works, but you end up with two sets of abstractions, two ways to define tools, and double the API surface your team needs to understand. Pick one and commit.
What's next
If you want to see what's possible when AI runs entirely client-side — no API calls, no server, no cost per request — check out our upcoming post on running AI models directly in the browser with WebLLM and WebGPU. It's a different tradeoff entirely, but worth understanding as WebGPU support expands.