Build an MCP Client in TypeScript That Connects to Multiple Tool Servers
Monday 20/04/2026
9 min read

You've built an MCP server. Maybe two. You can plug them into Claude Desktop and they work. But now you want to build your own app that talks to those servers — a custom agent, a CLI, a backend service — and the client side of MCP is where the tutorials go silent. Every guide out there shows you how to build a server; almost none show you how to write the thing that actually uses it.
It gets worse when you need more than one server. A real agent needs database access and file system access and web search. That means three separate MCP connections, managed concurrently, with tools merged into a single interface the LLM can understand. Here's how to build an MCP client in TypeScript that connects to multiple tool servers and exposes all of their capabilities in one conversation.
If you haven't built an MCP server yet, start with How to Build an MCP Server in TypeScript from Scratch — this post assumes you already have working servers to connect to.
Why build your own MCP client
Claude Desktop is a great MCP client for personal use, but it's a desktop app. If you want MCP servers to power a backend agent, a CI workflow, a Slack bot, or any custom TypeScript app, you need your own client.
Writing one is not hard. The MCP protocol is well-specified and the official @modelcontextprotocol/sdk package handles the JSON-RPC transport for you. What is hard is the plumbing around it: connection lifecycle, capability discovery across multiple servers, tool name collisions, routing tool calls back to the right server, and cleanup when things go wrong.
That's what we're building.
What we're connecting to
Three MCP servers running locally:
- db-server — exposes tools for querying a PostgreSQL database (query, schema)
- fs-server — exposes tools for reading/writing files (read, write, list)
- search-server — exposes a search tool backed by Brave or SerpAPI
We'll use the stdio transport because it's the simplest for local servers. The SDK also supports Streamable HTTP if your servers are remote.
Install the SDK
pnpm add @modelcontextprotocol/sdk @anthropic-ai/sdk zod
The client wrapper: one server, one class
Start small. Here's a thin wrapper around a single MCP connection that we'll compose later.
// src/mcp/server-connection.ts
import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js'
import type { Tool } from '@modelcontextprotocol/sdk/types.js'
export type ServerConfig = {
name: string
command: string
args: string[]
env?: Record<string, string>
}
export class ServerConnection {
private client: Client
private transport: StdioClientTransport
private connected = false
public tools: Tool[] = []
constructor(public readonly config: ServerConfig) {
this.client = new Client(
{ name: 'multi-mcp-client', version: '1.0.0' },
{ capabilities: {} }
)
this.transport = new StdioClientTransport({
command: config.command,
args: config.args,
env: { ...process.env, ...config.env } as Record<string, string>,
})
}
async connect(): Promise<void> {
await this.client.connect(this.transport)
const { tools } = await this.client.listTools()
this.tools = tools
this.connected = true
}
async callTool(name: string, args: Record<string, unknown>) {
if (!this.connected) throw new Error(`${this.config.name} not connected`)
return this.client.callTool({ name, arguments: args })
}
async close(): Promise<void> {
if (this.connected) {
await this.client.close()
this.connected = false
}
}
}
Two things worth calling out. First, listTools() is how capability discovery works — the server tells you what it can do at runtime, so your client doesn't need hardcoded knowledge of each server's API. Second, we store the config's name as a stable identifier we'll use for namespacing later.
The multi-server manager
Now compose many connections into one interface. This is where the real work happens.
// src/mcp/multi-client.ts
import { ServerConnection, ServerConfig } from './server-connection'
import type { Tool } from '@modelcontextprotocol/sdk/types.js'
export type NamespacedTool = Tool & {
server: string
originalName: string
}
export class MultiMCPClient {
private servers = new Map<string, ServerConnection>()
async addServer(config: ServerConfig): Promise<void> {
if (this.servers.has(config.name)) {
throw new Error(`Server "${config.name}" already registered`)
}
const conn = new ServerConnection(config)
try {
await conn.connect()
this.servers.set(config.name, conn)
} catch (err) {
await conn.close().catch(() => {})
throw new Error(
`Failed to connect to ${config.name}: ${(err as Error).message}`
)
}
}
listTools(): NamespacedTool[] {
const all: NamespacedTool[] = []
for (const [serverName, conn] of this.servers) {
for (const tool of conn.tools) {
all.push({
...tool,
name: `${serverName}__${tool.name}`,
server: serverName,
originalName: tool.name,
})
}
}
return all
}
async callTool(namespacedName: string, args: Record<string, unknown>) {
const [serverName, ...rest] = namespacedName.split('__')
const originalName = rest.join('__')
const conn = this.servers.get(serverName)
if (!conn) throw new Error(`No server registered: ${serverName}`)
return conn.callTool(originalName, args)
}
async closeAll(): Promise<void> {
const closings = [...this.servers.values()].map((c) =>
c.close().catch((e) => console.error(`Close failed: ${e.message}`))
)
await Promise.all(closings)
this.servers.clear()
}
}
The key design choice: tool names get a serverName__ prefix. Without this, two servers that both expose a search tool would collide, and the LLM would have no way to pick. With namespacing, db__query and fs__list are unambiguous, and callTool can route by splitting the prefix back off.
Gotcha: the Anthropic API restricts tool names to ^[a-zA-Z0-9_-]{1,64}$. Double underscore works; a single dot like db.query would be rejected downstream when you forward tools to Claude. Pick your separator carefully.
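It's worth validating names at startup rather than discovering a rejection mid-conversation. A small sketch — the regex mirrors the restriction above, and `splitNamespaced` is the inverse of the `${server}__${tool}` scheme used in `MultiMCPClient` (both helpers are illustrative, not part of the SDK):

```typescript
// Sketch: validate namespaced tool names before forwarding them to the
// model, and show that `__` round-trips cleanly while `.` does not.
const TOOL_NAME_RE = /^[a-zA-Z0-9_-]{1,64}$/

export function isValidToolName(name: string): boolean {
  return TOOL_NAME_RE.test(name)
}

// Inverse of the `${server}__${tool}` namespacing in MultiMCPClient.
// Extra `__` sequences stay in the tool half, so `fs__my__tool`
// resolves to server "fs", tool "my__tool".
export function splitNamespaced(name: string): { server: string; tool: string } {
  const [server, ...rest] = name.split('__')
  return { server, tool: rest.join('__') }
}
```

`isValidToolName('db__query')` passes; `isValidToolName('db.query')` fails on the dot, which is exactly the case the gotcha above warns about.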
Wire it to an LLM
The whole point of a multi-server client is giving an LLM access to all those tools at once. Here's the agent loop using Claude.
// src/agent/run.ts
import Anthropic from '@anthropic-ai/sdk'
import { MultiMCPClient } from '../mcp/multi-client'
const anthropic = new Anthropic()
export async function runAgent(mcp: MultiMCPClient, userMessage: string) {
const tools = mcp.listTools().map((t) => ({
name: t.name,
description: t.description ?? '',
input_schema: t.inputSchema as Anthropic.Tool.InputSchema,
}))
const messages: Anthropic.MessageParam[] = [
{ role: 'user', content: userMessage },
]
while (true) {
const response = await anthropic.messages.create({
model: 'claude-sonnet-4-6',
max_tokens: 4096,
tools,
messages,
})
messages.push({ role: 'assistant', content: response.content })
if (response.stop_reason !== 'tool_use') {
const textBlock = response.content.find((b) => b.type === 'text')
return textBlock?.type === 'text' ? textBlock.text : ''
}
const toolResults: Anthropic.ToolResultBlockParam[] = []
for (const block of response.content) {
if (block.type !== 'tool_use') continue
try {
const result = await mcp.callTool(
block.name,
block.input as Record<string, unknown>
)
toolResults.push({
type: 'tool_result',
tool_use_id: block.id,
content: JSON.stringify(result.content),
})
} catch (err) {
toolResults.push({
type: 'tool_result',
tool_use_id: block.id,
content: `Error: ${(err as Error).message}`,
is_error: true,
})
}
}
messages.push({ role: 'user', content: toolResults })
}
}
This is the standard Claude tool-use loop: call the model, if it wants tools, execute them, feed results back, repeat until it produces a final text answer. The is_error: true flag is important — it lets Claude recover from a failed tool call instead of looping forever.
Putting it together
// src/index.ts
import { MultiMCPClient } from './mcp/multi-client'
import { runAgent } from './agent/run'
async function main() {
const mcp = new MultiMCPClient()
await mcp.addServer({
name: 'db',
command: 'node',
args: ['./servers/db-server.js'],
env: { DATABASE_URL: process.env.DATABASE_URL! },
})
await mcp.addServer({
name: 'fs',
command: 'npx',
args: ['-y', '@modelcontextprotocol/server-filesystem', './workspace'],
})
await mcp.addServer({
name: 'search',
command: 'node',
args: ['./servers/search-server.js'],
env: { BRAVE_API_KEY: process.env.BRAVE_API_KEY! },
})
try {
const answer = await runAgent(
mcp,
'Find all customers who signed up last week, save the list to customers.csv, and look up news about our top three accounts.'
)
console.log(answer)
} finally {
await mcp.closeAll()
}
}
main().catch((err) => {
console.error(err)
process.exit(1)
})
Notice the try/finally wrapping closeAll(). This matters more than it looks. MCP connections spawn child processes — if your app crashes or exits without calling close(), you leak processes. On a long-running server, that's a slow OOM in the making.
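try/finally only covers the happy path and thrown errors; signals bypass it. A sketch of signal handling that reuses the same cleanup — `registerShutdown` is a made-up helper, and the injectable `exit` parameter exists only so it can be tested without killing the process:

```typescript
// Sketch: reap child processes on SIGINT/SIGTERM too, not just on
// normal returns. Pass MultiMCPClient's closeAll as `closeAll`.
export function registerShutdown(
  closeAll: () => Promise<void>,
  exit: (code: number) => void = process.exit
): () => Promise<void> {
  let closing = false
  const shutdown = async () => {
    if (closing) return // ignore repeated signals while closing
    closing = true
    try {
      await closeAll()
    } finally {
      exit(0)
    }
  }
  process.on('SIGINT', shutdown)
  process.on('SIGTERM', shutdown)
  return shutdown
}
```

Call it once in main(), right after constructing the client: `registerShutdown(() => mcp.closeAll())`.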
Things that will trip you up
Partial connection failures. If server #2 fails to start, do you fail the whole app or proceed with the working servers? addServer throws, so the current design fails loudly. That's usually what you want in a backend agent — but for a CLI tool, you might prefer degraded mode with a warning.
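If you do want the degraded mode, Promise.allSettled makes it a few lines. This is a sketch, not part of the class above — `connectAll` is a hypothetical helper, and `Registrar` types `addServer` minimally so the example stands alone:

```typescript
// Sketch: connect to every configured server, keep the ones that
// succeed, and report failures instead of aborting the whole app.
type Config = { name: string }
type Registrar = { addServer(config: Config): Promise<void> }

export async function connectAll(
  mcp: Registrar,
  configs: Config[]
): Promise<{ ok: string[]; failed: { name: string; error: string }[] }> {
  const results = await Promise.allSettled(configs.map((c) => mcp.addServer(c)))
  const ok: string[] = []
  const failed: { name: string; error: string }[] = []
  results.forEach((r, i) => {
    if (r.status === 'fulfilled') ok.push(configs[i].name)
    else failed.push({ name: configs[i].name, error: String(r.reason) })
  })
  return { ok, failed }
}
```

In a CLI you'd print a warning for each entry in `failed` and carry on with `ok`; in a backend you'd probably still throw if `failed` is non-empty.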
Tool schema drift. Servers can restart and change their tool list. The current design caches tools at connect time. If you're building a long-running service, you'll want to re-call listTools() periodically or listen for the notifications/tools/list_changed event the SDK exposes.
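The notification route is cleaner but needs SDK handler wiring; polling is a sketchable fallback. Here `Refreshable` stands in for a live connection's listTools(), and `detectDrift`/`refreshEvery` are illustrative names, not SDK APIs:

```typescript
// Sketch: periodically re-discover tools and report when the list changes.
type Refreshable = { listTools(): Promise<{ name: string }[]> }

export async function detectDrift(
  conn: Refreshable,
  previous: string[]
): Promise<{ changed: boolean; tools: string[] }> {
  const tools = (await conn.listTools()).map((t) => t.name).sort()
  const changed =
    tools.length !== previous.length || tools.some((t, i) => t !== previous[i])
  return { changed, tools }
}

// Run detectDrift on an interval; invoke onChange with the fresh list
// so the caller can rebuild the tool array it sends to the model.
export function refreshEvery(
  conn: Refreshable,
  initial: string[],
  onChange: (tools: string[]) => void,
  ms = 30_000
): NodeJS.Timeout {
  let known = [...initial].sort()
  return setInterval(async () => {
    const { changed, tools } = await detectDrift(conn, known)
    if (changed) {
      known = tools
      onChange(tools)
    }
  }, ms)
}
```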
Stdio buffering. The stdio transport is fine for local dev, but huge messages are where it hurts. If a tool returns a massive payload — say fs__read on a 50MB log file — you can hit pipe-buffer backpressure and stall the connection. For production, switch to Streamable HTTP and let the OS handle backpressure.
Environment leakage. We pass ...process.env to child processes. If your server doesn't need your entire parent environment, scope it down. I've seen prod incidents where an MCP server child process had access to every secret the parent had, for no reason.
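Scoping down is a one-line change in the ServerConnection constructor. A minimal sketch of an allowlist helper (`pickEnv` is a name I'm making up; you'll usually still want to include PATH so the child can resolve binaries):

```typescript
// Sketch: pass only the variables a server actually needs, instead of
// spreading all of process.env into the child process.
export function pickEnv(
  source: NodeJS.ProcessEnv,
  keys: string[]
): Record<string, string> {
  const out: Record<string, string> = {}
  for (const key of keys) {
    const value = source[key]
    if (value !== undefined) out[key] = value // skip unset vars
  }
  return out
}
```

Then in the transport config: `env: pickEnv(process.env, ['PATH', 'DATABASE_URL'])` instead of `{ ...process.env, ...config.env }`.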
Race on close. Promise.all on close() is fine, but if any server hangs on shutdown, the whole process hangs. Add a timeout wrapper in production: Promise.race([conn.close(), sleep(5000)]).
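The sleep-race one-liner leaks its timer and loses the close error. A slightly more careful sketch (`closeWithTimeout` is a hypothetical helper, not an SDK API):

```typescript
// Sketch: bound a close() with a timeout so one hung server can't stall
// shutdown. Resolves true on clean close, false if the timeout won.
export async function closeWithTimeout(
  close: () => Promise<void>,
  ms: number
): Promise<boolean> {
  let timer: NodeJS.Timeout | undefined
  const timeout = new Promise<false>((resolve) => {
    timer = setTimeout(() => resolve(false), ms)
  })
  try {
    return await Promise.race([close().then(() => true as const), timeout])
  } finally {
    if (timer) clearTimeout(timer) // don't leak the timer on clean close
  }
}
```

In closeAll, wrap each connection: `closeWithTimeout(() => c.close(), 5000)` and log the servers that return false.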
Why this beats a monolithic client
You could bypass MCP entirely and just write tool functions directly in your agent. For one server, that's honestly fine. Where this architecture earns its keep is when you want to:
- Swap a local filesystem server for a sandboxed S3 one without touching your agent code
- Share the same server across a CLI, a backend, and Claude Desktop
- Let another team ship a new tool server without redeploying your agent
- Test agents against mock MCP servers instead of real databases
The protocol boundary is doing real work. That's what you're paying for with this extra layer.
What's next
Once you've got multi-server orchestration working, the next step is picking an agent framework that wraps all this for you — or building a multi-agent system where different agents use different subsets of your MCP tools. Next up: Google Agent Development Kit for TypeScript: Build a Multi-Agent System from Scratch — how Google's new 2026 ADK handles tool access, agent coordination, and deployment in a code-first way.