Mastra vs VoltAgent: The Two New TypeScript Agent Frameworks Compared
Friday 24/04/2026 · 12 min read
You've outgrown cobbling together Vercel AI SDK calls by hand, and LangChain.js feels like wearing a winter coat indoors. Two TypeScript-native agent frameworks keep showing up in your searches — Mastra and VoltAgent — and the docs for both make it sound like either one will solve your problems. Neither site tells you what it feels like when things go wrong.
I built the same agent in both. This is the Mastra vs VoltAgent comparison I wish had existed when I started: same feature, both frameworks, honest tradeoffs on developer experience, memory, MCP support, observability, and what actually happens at deploy time.
What we're building
A customer support data lookup agent. Given a customer email or ID, it can fetch the customer record, pull their latest invoice, and summarize their account status. Two tools, one agent, real tool-calling. This covers the surface area that matters: agent definition, tool schemas, type safety, streaming, and how the framework exposes runtime behavior.
Both frameworks use the Vercel AI SDK as the underlying provider adapter, so the model call itself is identical. What differs is everything around it.
Install the shared dependency first:
pnpm add @ai-sdk/anthropic zod
We'll reuse these fake data functions in both examples — in a real app they'd hit your database.
// src/lib/customers.ts
export type Customer = {
id: string
email: string
name: string
plan: 'free' | 'pro' | 'enterprise'
status: 'active' | 'churned' | 'trial'
}
export type Invoice = {
invoiceId: string
customerId: string
amount: number
status: 'paid' | 'open' | 'overdue'
date: string
}
const customers: Customer[] = [
{ id: 'c_1', email: 'ada@example.com', name: 'Ada Lovelace', plan: 'pro', status: 'active' },
{ id: 'c_2', email: 'grace@example.com', name: 'Grace Hopper', plan: 'enterprise', status: 'active' },
]
const invoices: Invoice[] = [
{ invoiceId: 'inv_9', customerId: 'c_1', amount: 49, status: 'paid', date: '2026-04-01' },
{ invoiceId: 'inv_10', customerId: 'c_2', amount: 499, status: 'overdue', date: '2026-04-01' },
]
export async function findCustomer(identifier: string): Promise<Customer | null> {
return customers.find((c) => c.id === identifier || c.email === identifier) ?? null
}
export async function findLatestInvoice(customerId: string): Promise<Invoice | null> {
return invoices.find((i) => i.customerId === customerId) ?? null
}
Mastra
pnpm add @mastra/core
Mastra's mental model is agents as first-class objects you declare once and reuse. Tools are separate, declarative primitives with input and output schemas, which means the framework validates what your tool returns — not only what the LLM sent.
// src/agents/mastra-agent.ts
import { Agent } from '@mastra/core/agent'
import { createTool } from '@mastra/core/tools'
import { anthropic } from '@ai-sdk/anthropic'
import { z } from 'zod'
import { findCustomer, findLatestInvoice } from '../lib/customers'
const lookupCustomer = createTool({
id: 'lookup-customer',
description: 'Look up a customer by email or customer ID',
inputSchema: z.object({
identifier: z.string().describe('Email address or customer ID'),
}),
outputSchema: z.object({
found: z.boolean(),
customer: z
.object({
id: z.string(),
name: z.string(),
plan: z.string(),
status: z.string(),
})
.nullable(),
}),
execute: async ({ context }) => {
const customer = await findCustomer(context.identifier)
if (!customer) return { found: false, customer: null }
return {
found: true,
customer: {
id: customer.id,
name: customer.name,
plan: customer.plan,
status: customer.status,
},
}
},
})
const getLatestInvoice = createTool({
id: 'get-latest-invoice',
description: 'Get the latest invoice for a given customer ID',
inputSchema: z.object({
customerId: z.string(),
}),
outputSchema: z.object({
invoiceId: z.string(),
amount: z.number(),
status: z.string(),
date: z.string(),
}).nullable(),
execute: async ({ context }) => {
const invoice = await findLatestInvoice(context.customerId)
if (!invoice) return null
return {
invoiceId: invoice.invoiceId,
amount: invoice.amount,
status: invoice.status,
date: invoice.date,
}
},
})
export const supportAgent = new Agent({
name: 'support-agent',
instructions:
'You help support staff look up customer data. Always call lookup-customer first, then get-latest-invoice if the customer was found. Summarize the account in two sentences.',
model: anthropic('claude-sonnet-4-20250514'),
tools: { lookupCustomer, getLatestInvoice },
})
async function main() {
const result = await supportAgent.generate(
'What is the status of ada@example.com and are they paid up?'
)
console.log(result.text)
}
main().catch((err) => {
console.error('Agent failed:', err)
process.exit(1)
})
Mastra manages the agent loop for you: there's no "call LLM → run tools → feed results back → repeat" machinery to write. By default it keeps running until the model stops calling tools; generate also accepts a maxSteps option if you want a hard cap on the loop.
What's genuinely nice here: the outputSchema on each tool. If your tool accidentally returns the wrong shape — say, a database migration changes a column name — Mastra throws a validation error at the tool boundary instead of letting garbage flow back into the model. That's a bug class you don't notice until production, and having it caught at the framework layer matters.
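To make that concrete, here's a sketch of the failure mode. The drifted tool and the fake database row are hypothetical; only the outputSchema validation behavior is Mastra's:
// Hypothetical regression: a migration renamed amount to amountCents,
// but outputSchema still declares amount.
const getLatestInvoiceDrifted = createTool({
  id: 'get-latest-invoice-drifted',
  description: 'Same tool after a hypothetical column rename',
  inputSchema: z.object({ customerId: z.string() }),
  outputSchema: z.object({
    invoiceId: z.string(),
    amount: z.number(),
    status: z.string(),
    date: z.string(),
  }).nullable(),
  execute: async ({ context }) => {
    // Simulating an untyped DB row whose column was renamed underneath us
    const row: any = { invoiceId: 'inv_9', amountCents: 4900, status: 'paid', date: '2026-04-01' }
    return row // runtime shape no longer matches outputSchema, so Mastra throws here
  },
})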
Mastra also ships a dev server (mastra dev) that gives you a local playground where you can chat with the agent, inspect traces, and see every tool call's input and output. It's a Vite-based UI that runs alongside your code.
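Streaming is the same call shape. A minimal sketch, assuming the stream() method and its textStream async iterable from current Mastra versions:
const stream = await supportAgent.stream(
  'What is the status of ada@example.com and are they paid up?'
)
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk)
}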
Memory in Mastra
Memory lives in separate packages and uses storage adapters, with LibSQL (SQLite-compatible) as the default for local development. Install the packages first:
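pnpm add @mastra/memory @mastra/libsql
Then attach a Memory instance to the agent: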
// src/agents/mastra-agent-with-memory.ts
import { Agent } from '@mastra/core/agent'
import { anthropic } from '@ai-sdk/anthropic'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'
// lookupCustomer and getLatestInvoice: same createTool definitions as above
const memory = new Memory({
storage: new LibSQLStore({ url: 'file:./mastra.db' }),
})
export const supportAgent = new Agent({
name: 'support-agent',
instructions: '...',
model: anthropic('claude-sonnet-4-20250514'),
tools: { lookupCustomer, getLatestInvoice },
memory,
})
// Pass a thread/resource so memory persists across calls
await supportAgent.generate('Hi, can you look up ada@example.com?', {
memory: { thread: 't_1', resource: 'user_42' },
})
The resource is usually a user ID and thread is a conversation ID. Memory writes happen automatically — you don't have to plumb messages yourself.
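A follow-up call on the same thread shows why this matters: the agent can resolve "she" from the stored history, using the same options shown above.
// Second turn in thread t_1: no need to repeat the email address
await supportAgent.generate('What plan is she on?', {
  memory: { thread: 't_1', resource: 'user_42' },
})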
VoltAgent
pnpm add @voltagent/core @voltagent/vercel-ai
VoltAgent ships with a similar top-level abstraction — an Agent class — but the framework itself is built around a central VoltAgent runtime that holds your agents and gives you observability and a dev console out of the box.
// src/agents/voltagent-agent.ts
import { Agent, VoltAgent, createTool } from '@voltagent/core'
import { VercelAIProvider } from '@voltagent/vercel-ai'
import { anthropic } from '@ai-sdk/anthropic'
import { z } from 'zod'
import { findCustomer, findLatestInvoice } from '../lib/customers'
const lookupCustomer = createTool({
name: 'lookup_customer',
description: 'Look up a customer by email or customer ID',
parameters: z.object({
identifier: z.string().describe('Email address or customer ID'),
}),
execute: async ({ identifier }) => {
const customer = await findCustomer(identifier)
if (!customer) return { found: false, customer: null }
return {
found: true,
customer: {
id: customer.id,
name: customer.name,
plan: customer.plan,
status: customer.status,
},
}
},
})
const getLatestInvoice = createTool({
name: 'get_latest_invoice',
description: 'Get the latest invoice for a given customer ID',
parameters: z.object({
customerId: z.string(),
}),
execute: async ({ customerId }) => {
const invoice = await findLatestInvoice(customerId)
if (!invoice) return null
return invoice
},
})
const supportAgent = new Agent({
name: 'support-agent',
description: 'Looks up customer data and invoice status for support staff.',
instructions:
'Call lookup_customer first, then get_latest_invoice if the customer was found. Summarize in two sentences.',
llm: new VercelAIProvider(),
model: anthropic('claude-sonnet-4-20250514'),
tools: [lookupCustomer, getLatestInvoice],
})
new VoltAgent({
agents: {
support: supportAgent,
},
})
async function main() {
const result = await supportAgent.generateText(
'What is the status of ada@example.com and are they paid up?'
)
console.log(result.text)
}
main().catch((err) => {
console.error('Agent failed:', err)
process.exit(1)
})
A few differences jump out. Tools in VoltAgent take a single parameters Zod schema — no separate output schema. The runtime validates tool inputs coming from the LLM but trusts your execute return value. That's one less guardrail than Mastra, but also less boilerplate when you're iterating.
Registering the agent with new VoltAgent({ agents }) starts the VoltAgent runtime, which exposes a local dev server at localhost:3141 by default. You can then open the VoltOps Console (a hosted web UI at console.voltagent.dev) and point it at your local runtime to get real-time traces of every agent run, tool call, and token. The experience is closer to "attach a debugger to a running process" than "look at logs later."
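Streaming mirrors the Vercel AI SDK naming. A sketch, assuming VoltAgent's streamText() method and its textStream iterable:
const stream = await supportAgent.streamText(
  'What is the status of ada@example.com and are they paid up?'
)
for await (const chunk of stream.textStream) {
  process.stdout.write(chunk)
}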
Memory in VoltAgent
VoltAgent bundles memory into the agent definition, with built-in adapters. The LibSQL adapter ships as its own package:
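pnpm add @voltagent/libsql
Then pass a Memory instance into the agent: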
// src/agents/voltagent-agent-with-memory.ts
import { Agent, Memory } from '@voltagent/core'
import { VercelAIProvider } from '@voltagent/vercel-ai'
import { anthropic } from '@ai-sdk/anthropic'
import { LibSQLMemoryAdapter } from '@voltagent/libsql'
// lookupCustomer and getLatestInvoice: same createTool definitions as above
const memory = new Memory({
storage: new LibSQLMemoryAdapter({ url: 'file:./voltagent.db' }),
})
const supportAgent = new Agent({
name: 'support-agent',
description: 'Looks up customer data.',
instructions: '...',
llm: new VercelAIProvider(),
model: anthropic('claude-sonnet-4-20250514'),
tools: [lookupCustomer, getLatestInvoice],
memory,
})
await supportAgent.generateText('Hi, look up ada@example.com', {
userId: 'user_42',
conversationId: 'conv_1',
})
Same idea as Mastra — userId scopes memory to an end user, conversationId scopes it to a chat thread. The API names are different but the mental model is identical.
Multi-agent hierarchies
Where VoltAgent differentiates itself most clearly is supervisor agents. You can pass other agents as "subagents" to a parent agent, and VoltAgent auto-generates a delegate_task tool that lets the supervisor hand work to a specialist:
// src/agents/voltagent-supervisor.ts
import { Agent } from '@voltagent/core'
import { VercelAIProvider } from '@voltagent/vercel-ai'
import { anthropic } from '@ai-sdk/anthropic'
// lookupCustomer and getLatestInvoice: same createTool definitions as above
const billingAgent = new Agent({
name: 'billing-agent',
description: 'Handles billing and invoice questions.',
llm: new VercelAIProvider(),
model: anthropic('claude-sonnet-4-20250514'),
tools: [getLatestInvoice],
})
const lookupAgent = new Agent({
name: 'lookup-agent',
description: 'Looks up customer records.',
llm: new VercelAIProvider(),
model: anthropic('claude-sonnet-4-20250514'),
tools: [lookupCustomer],
})
const supervisor = new Agent({
name: 'supervisor',
description: 'Routes support requests to specialists.',
instructions: 'Delegate lookups to lookup-agent and billing questions to billing-agent.',
llm: new VercelAIProvider(),
model: anthropic('claude-sonnet-4-20250514'),
subAgents: [lookupAgent, billingAgent],
})
In Mastra you'd model the same thing as a workflow with explicit steps, or with agent-as-tool composition. Both approaches work; VoltAgent's is more declarative if your use case is genuinely hierarchical.
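For comparison, here's a minimal sketch of Mastra's agent-as-tool composition, reusing the createTool and Agent APIs from earlier. The tool name and wiring are illustrative, not a Mastra convention:
// A specialist agent, defined like any other Mastra agent
const billingSpecialist = new Agent({
  name: 'billing-specialist',
  instructions: 'You answer billing and invoice questions.',
  model: anthropic('claude-sonnet-4-20250514'),
  tools: { getLatestInvoice },
})
// ...exposed to the parent agent as a plain tool
const askBillingAgent = createTool({
  id: 'ask-billing-agent',
  description: 'Delegate billing and invoice questions to the billing specialist',
  inputSchema: z.object({ question: z.string() }),
  outputSchema: z.object({ answer: z.string() }),
  execute: async ({ context }) => {
    const result = await billingSpecialist.generate(context.question)
    return { answer: result.text }
  },
})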
MCP support
Both frameworks support MCP clients — meaning you can connect to MCP servers and expose their tools to your agent. The APIs look similar.
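Mastra's MCP client ships separately (pnpm add @mastra/mcp); VoltAgent's MCPConfiguration comes with @voltagent/core.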
Mastra:
import { MCPClient } from '@mastra/mcp'
import { Agent } from '@mastra/core/agent'
import { anthropic } from '@ai-sdk/anthropic'
const mcp = new MCPClient({
servers: {
filesystem: { command: 'npx', args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'] },
},
})
const agent = new Agent({
name: 'mcp-agent',
instructions: '...',
model: anthropic('claude-sonnet-4-20250514'),
tools: await mcp.getTools(),
})
VoltAgent:
import { Agent, MCPConfiguration } from '@voltagent/core'
import { VercelAIProvider } from '@voltagent/vercel-ai'
import { anthropic } from '@ai-sdk/anthropic'
const mcp = new MCPConfiguration({
servers: {
filesystem: { type: 'stdio', command: 'npx', args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'] },
},
})
const agent = new Agent({
name: 'mcp-agent',
llm: new VercelAIProvider(),
model: anthropic('claude-sonnet-4-20250514'),
tools: await mcp.getTools(),
})
Near-identical ergonomics; VoltAgent just makes the stdio transport explicit. If you're heavy on MCP, either works.
Observability
This is the biggest practical difference.
Mastra has built-in OpenTelemetry tracing. You can point it at any OTel collector — Langfuse, Jaeger, Honeycomb, whatever you already run. The dev playground shows traces locally without extra setup. For production, you own the observability stack.
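Wiring that up is a config block on the Mastra instance. A sketch, assuming the telemetry options from recent Mastra docs; field names may shift between versions:
import { Mastra } from '@mastra/core'
export const mastra = new Mastra({
  agents: { supportAgent },
  telemetry: {
    serviceName: 'support-agent',
    enabled: true,
    export: { type: 'otlp', endpoint: 'http://localhost:4318' }, // your collector's OTLP endpoint
  },
})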
VoltAgent ships VoltOps Console — a hosted web UI — and telemetry is enabled by default when you run a VoltAgent instance. You get a production-grade trace viewer without standing anything up yourself. The tradeoff: if you prefer keeping all your telemetry in your own stack, you'll be exporting from VoltAgent to your OTel backend anyway (which they support).
If you already have Langfuse or Datadog, Mastra is less friction. If you want something working before lunch, VoltAgent's Console is hard to beat.
Bundle size and cold start
Rough numbers for a minimal agent with two tools, no memory, built with esbuild --bundle --platform=node --minify:
- Mastra (@mastra/core): ~450 KB bundled, including ai and @ai-sdk/anthropic transitively.
- VoltAgent (@voltagent/core + @voltagent/vercel-ai): ~520 KB bundled.
Cold start on Cloudflare Workers (tested in an isolate, cold): ~80 ms for Mastra, ~110 ms for VoltAgent. Neither is a problem for a typical web app, but if you're deploying to serverless with aggressive cold start budgets, Mastra is marginally lighter. If bundle size really matters, Vercel AI SDK alone is still smaller than either — but you lose the agent runtime.
Maturity and community
Both frameworks are 2025-2026 releases, both are open source, both are under active development. Some observations that aren't captured in benchmarks:
Mastra has a larger GitHub community, more integrations (vector stores, evals, voice), a more fleshed-out workflow engine, and corporate backing. Its docs are the better of the two but still have gaps — expect to read source for anything non-obvious.
VoltAgent has a smaller community but moves quickly and the hosted Console feels further along as a product. The core abstractions are tight. Supervisor/subagent composition is its standout feature.
Both have had breaking API changes in the last six months. Pin your versions.
When to pick which
Pick Mastra if:
- You want output schemas on tools (production safety net)
- You need the workflow engine for conditional multi-step pipelines
- You already have an observability stack and want to plug in via OTel
- You prefer a larger community and more integrations
Pick VoltAgent if:
- You're building hierarchical multi-agent systems with supervisors
- You want hosted observability up and running immediately
- You value a tighter, more opinionated core API
- Your team is small and won't build tracing infrastructure
Pick neither (use Vercel AI SDK directly) if:
- Your agent is a single LLM call with a handful of tools
- Bundle size and cold start are your top constraints
- You don't need memory, workflows, or multi-agent composition
The honest take: for most teams shipping a support bot, data lookup agent, or assistant-style feature, either framework works and you won't regret the choice. Mastra is the safer default today because of community size and output schemas. VoltAgent is more interesting if you need supervisor agents or want observability handed to you.
What I'd avoid is picking based on "which has cooler marketing" — both do. Build the smallest thing you need in both over an afternoon. You'll feel which one fits your brain within an hour.
What's next
If you're choosing an agent framework partly to save money on tokens, the bigger lever is usually model routing — sending simple queries to cheap models and only escalating to expensive ones when needed. The next post walks through how to route LLM requests to cheap vs expensive models automatically in TypeScript with real before/after cost numbers.