How to Build an MCP Server in TypeScript from Scratch
Wednesday 18/03/2026
12 min read

You want to give an LLM access to your data. Maybe it's a database, a file system, or an internal API. The problem is that every AI tool has its own integration format — Claude plugins, OpenAI function calling, custom agents — and you end up writing glue code for each one. Then you change your data source and everything breaks everywhere.
The Model Context Protocol (MCP) solves this by standardizing how LLMs connect to external tools and data. You build one MCP server, and any MCP-compatible client (Claude Desktop, Cursor, Claude Code, custom apps) can use it. Think of it like a USB port for AI — one interface, many devices. Here's how to build an MCP server in TypeScript that connects a PostgreSQL database to any LLM, from scratch.
What MCP actually is (30-second version)
MCP defines three things your server can expose to an LLM:
- Tools — functions the LLM can call (like "run this SQL query" or "insert a record")
- Resources — data the LLM can read (like "here's the database schema" or "here's a config file")
- Prompts — reusable prompt templates the client can offer to users
The server runs as a standalone process. Clients connect to it over stdio (for local tools like Claude Desktop) or Streamable HTTP (for remote/web clients). We'll support both.
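Under the hood, every exchange between client and server is JSON-RPC 2.0. For instance, when the LLM decides to invoke a tool, the client sends a message shaped roughly like this (illustrative only; the id and arguments are made up):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "query",
    "arguments": { "sql": "SELECT * FROM orders LIMIT 10" }
  }
}
```

The server replies with a result containing a content array — the same shape our tool handlers will return later in this post. The SDK handles all of this framing for you; you never construct these messages by hand.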
Setting up the project
mkdir mcp-postgres-server && cd mcp-postgres-server
pnpm init
pnpm add @modelcontextprotocol/sdk pg zod
pnpm add -D typescript @types/node @types/pg tsx
// tsconfig.json
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true
},
"include": ["src/**/*"]
}
Add these scripts to your package.json:
// package.json (scripts only)
{
"scripts": {
"dev": "tsx src/index.ts",
"build": "tsc",
"start": "node dist/index.js"
}
}
Defining the database layer
Before touching MCP, let's write a thin wrapper around pg that we'll expose through our server. This keeps the MCP code clean.
// src/db.ts
import pg from 'pg'
const pool = new pg.Pool({
connectionString: process.env.DATABASE_URL,
})
export interface TableInfo {
tableName: string
columns: { name: string; type: string; nullable: boolean }[]
}
export async function getSchema(): Promise<TableInfo[]> {
const result = await pool.query(`
SELECT table_name, column_name, data_type, is_nullable
FROM information_schema.columns
WHERE table_schema = 'public'
ORDER BY table_name, ordinal_position
`)
const tables = new Map<string, TableInfo>()
for (const row of result.rows) {
if (!tables.has(row.table_name)) {
tables.set(row.table_name, {
tableName: row.table_name,
columns: [],
})
}
tables.get(row.table_name)!.columns.push({
name: row.column_name,
type: row.data_type,
nullable: row.is_nullable === 'YES',
})
}
return Array.from(tables.values())
}
export async function runQuery(sql: string): Promise<Record<string, unknown>[]> {
  // Safety: allow only a single SELECT statement. Rejecting inner semicolons
  // stops batches like "SELECT 1; DROP TABLE users", which pg's simple query
  // protocol would otherwise happily execute. This is a basic guard, not a
  // SQL parser; for defense in depth, connect with a read-only database role.
  const trimmed = sql.trim().replace(/;\s*$/, '')
  if (!/^SELECT\b/i.test(trimmed) || trimmed.includes(';')) {
    throw new Error('Only SELECT queries are allowed. Use the insert/update tools for writes.')
  }
  const result = await pool.query(trimmed)
  return result.rows
}
export async function insertRecord(
table: string,
data: Record<string, unknown>
): Promise<Record<string, unknown>> {
const columns = Object.keys(data)
const values = Object.values(data)
const placeholders = columns.map((_, i) => `$${i + 1}`)
  // pg.escapeIdentifier (available since pg 8.9) safely quotes identifiers
  const result = await pool.query(
    `INSERT INTO ${pg.escapeIdentifier(table)}
       (${columns.map((c) => pg.escapeIdentifier(c)).join(', ')})
     VALUES (${placeholders.join(', ')})
     RETURNING *`,
    values
  )
return result.rows[0]
}
export async function shutdown(): Promise<void> {
await pool.end()
}
Gotcha: The escapeIdentifier call is important. Without it, you're wide open to SQL injection through table and column names. The parameterized $1 placeholders handle value injection, but identifiers need separate escaping.
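To see what identifier escaping actually buys you, here's a simplified stand-in for pg's quoting rule — wrap the identifier in double quotes and double any embedded double quotes. This is shown for illustration only; use pg's built-in escaping in real code:

```typescript
// Simplified illustration of PostgreSQL identifier quoting.
// (Use pg's built-in escapeIdentifier in production.)
function escapeIdentifierDemo(id: string): string {
  return '"' + id.replace(/"/g, '""') + '"'
}

// A plain name just gets quoted:
console.log(escapeIdentifierDemo('users')) // "users"

// A malicious "table name" can no longer terminate the identifier
// and smuggle in a second statement:
console.log(escapeIdentifierDemo('orders"; DROP TABLE users; --'))
// "orders""; DROP TABLE users; --"
```

With the quotes doubled, the whole string is treated as one (strange-looking) identifier rather than executable SQL.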
Building the MCP server
Here's where MCP comes in. The @modelcontextprotocol/sdk package gives you a McpServer class that handles the protocol — you just register your tools and resources.
// src/server.ts
import { McpServer, ResourceTemplate } from '@modelcontextprotocol/sdk/server/mcp.js'
import { z } from 'zod'
import { getSchema, runQuery, insertRecord } from './db.js'
export function createServer(): McpServer {
const server = new McpServer({
name: 'postgres-explorer',
version: '1.0.0',
})
// Resource: expose the database schema so the LLM knows what tables exist
server.resource('db-schema', 'db://schema', async (uri) => {
const schema = await getSchema()
const formatted = schema
.map((table) => {
const cols = table.columns
.map(
(c) =>
` ${c.name} (${c.type}${c.nullable ? ', nullable' : ''})`
)
.join('\n')
return `${table.tableName}:\n${cols}`
})
.join('\n\n')
return {
contents: [
{
uri: uri.href,
mimeType: 'text/plain',
text: formatted,
},
],
}
})
// Resource template: read a specific table's schema
server.resource(
'table-schema',
new ResourceTemplate('db://schema/{tableName}', { list: undefined }),
async (uri, { tableName }) => {
const schema = await getSchema()
const table = schema.find((t) => t.tableName === tableName)
if (!table) {
throw new Error(`Table '${tableName}' not found`)
}
const formatted = table.columns
.map(
(c) =>
`${c.name} (${c.type}${c.nullable ? ', nullable' : ''})`
)
.join('\n')
return {
contents: [
{
uri: uri.href,
mimeType: 'text/plain',
text: `${table.tableName}:\n${formatted}`,
},
],
}
}
)
// Tool: query the database
server.tool(
'query',
'Run a read-only SQL query against the database and return results as JSON',
{ sql: z.string().describe('A SELECT SQL query to execute') },
async ({ sql }) => {
try {
const rows = await runQuery(sql)
return {
content: [
{
type: 'text' as const,
text: JSON.stringify(rows, null, 2),
},
],
}
} catch (error) {
const message =
error instanceof Error ? error.message : 'Unknown error'
return {
content: [{ type: 'text' as const, text: `Error: ${message}` }],
isError: true,
}
}
}
)
// Tool: insert a record
server.tool(
'insert',
'Insert a new record into a database table',
{
table: z.string().describe('The table name to insert into'),
data: z
.record(z.unknown())
.describe('Key-value pairs of column names and values'),
},
async ({ table, data }) => {
try {
const row = await insertRecord(table, data)
return {
content: [
{
type: 'text' as const,
text: `Inserted successfully:\n${JSON.stringify(row, null, 2)}`,
},
],
}
} catch (error) {
const message =
error instanceof Error ? error.message : 'Unknown error'
return {
content: [{ type: 'text' as const, text: `Error: ${message}` }],
isError: true,
}
}
}
)
// Tool: list tables
server.tool(
'list-tables',
'List all tables in the database with their column info',
{},
async () => {
const schema = await getSchema()
return {
content: [
{
type: 'text' as const,
text: JSON.stringify(
schema.map((t) => ({
table: t.tableName,
columns: t.columns.map((c) => c.name),
})),
null,
2
),
},
],
}
}
)
return server
}
A few things to notice:
- Resources vs. tools — Resources are data the client reads to give the LLM context; tools are actions the LLM actively decides to call. The schema resource means the LLM always knows what tables exist, and the query tool lets it explore further.
- Error handling returns errors, it doesn't throw — When a tool fails, you return isError: true with the message. This lets the LLM see the error and decide what to do (retry with a different query, tell the user, etc.). If you throw, the whole request fails.
- Zod schemas — The SDK uses Zod to define tool input schemas. These get converted to JSON Schema and sent to the client, so the LLM knows exactly what parameters to pass.
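For instance, the `{ sql: z.string().describe(...) }` shape we gave the query tool is advertised to clients as JSON Schema roughly like this (approximate output, shown for illustration — the exact serialization depends on the SDK version):

```json
{
  "type": "object",
  "properties": {
    "sql": {
      "type": "string",
      "description": "A SELECT SQL query to execute"
    }
  },
  "required": ["sql"]
}
```

This is why the `.describe()` calls matter: that text is what the LLM actually reads when deciding what to pass.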
Wiring up transports
MCP supports two transports. Stdio is for local tools — the client spawns your server as a subprocess and communicates over stdin/stdout. Streamable HTTP is for remote deployments where clients connect over the network.
// src/index.ts
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js'
import { createServer } from './server.js'
import { shutdown } from './db.js'
import http from 'node:http'
const transportType = process.argv[2] || 'stdio'
async function main(): Promise<void> {
const server = createServer()
if (transportType === 'http') {
await startHttpTransport(server)
} else {
await startStdioTransport(server)
}
}
async function startStdioTransport(server: ReturnType<typeof createServer>): Promise<void> {
const transport = new StdioServerTransport()
await server.connect(transport)
console.error('MCP server running on stdio') // stderr so it doesn't interfere with protocol
process.on('SIGINT', async () => {
await shutdown()
process.exit(0)
})
}
async function startHttpTransport(server: ReturnType<typeof createServer>): Promise<void> {
const PORT = parseInt(process.env.PORT || '3100', 10)
  const transport = new StreamableHTTPServerTransport({
    // Stateful mode: this single transport serves one session. For multiple
    // concurrent clients, create one transport per session, keyed by the
    // Mcp-Session-Id header.
    sessionIdGenerator: () => crypto.randomUUID(),
  })
await server.connect(transport)
const httpServer = http.createServer(async (req, res) => {
// Only handle /mcp endpoint
if (req.url !== '/mcp') {
res.writeHead(404)
res.end('Not found')
return
}
await transport.handleRequest(req, res)
})
httpServer.listen(PORT, () => {
console.error(`MCP server running on http://localhost:${PORT}/mcp`)
})
process.on('SIGINT', async () => {
httpServer.close()
await shutdown()
process.exit(0)
})
}
main().catch((error) => {
console.error('Failed to start server:', error)
process.exit(1)
})
Important: When using stdio transport, all your logs must go to stderr. Anything on stdout is treated as MCP protocol data. This is the #1 debugging headache with MCP — your console.log calls will corrupt the protocol stream. Use console.error for debug output.
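One way to make the mistake impossible is to route all logging through a helper that only knows about stderr. A minimal sketch (the log helper is hypothetical, not part of the SDK):

```typescript
// Minimal stderr-only logger. Writing exclusively to stderr keeps
// stdout clean for MCP protocol frames in stdio mode.
function log(level: 'info' | 'warn' | 'error', message: string): string {
  const line = `[${new Date().toISOString()}] ${level.toUpperCase()} ${message}`
  process.stderr.write(line + '\n')
  return line
}

log('info', 'MCP server starting')
```

Grep your codebase for console.log before shipping; in stdio mode it should appear exactly zero times.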
Testing with Claude Desktop
The fastest way to test your MCP server is with Claude Desktop. Add this to your Claude Desktop config file:
macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json
// claude_desktop_config.json
{
"mcpServers": {
"postgres-explorer": {
"command": "npx",
"args": ["tsx", "/absolute/path/to/mcp-postgres-server/src/index.ts"],
"env": {
"DATABASE_URL": "postgresql://user:password@localhost:5432/mydb"
}
}
}
}
Restart Claude Desktop. You should see a hammer icon in the chat input — that's your tools. Try asking: "What tables are in my database?" or "Show me the last 10 orders with customer names."
Claude will read the schema resource automatically, then call the query tool to run SQL against your database. You'll see each tool call in the UI and can approve or deny them.
Testing with Claude Code
If you use Claude Code (the CLI), add the server to your project's .mcp.json:
// .mcp.json (project root)
{
"mcpServers": {
"postgres-explorer": {
"command": "npx",
"args": ["tsx", "/absolute/path/to/mcp-postgres-server/src/index.ts"],
"env": {
"DATABASE_URL": "postgresql://user:password@localhost:5432/mydb"
}
}
}
}
Now Claude Code can query your database directly while you're working. Ask it to "check what's in the users table" and it'll call your MCP server.
Testing programmatically
Don't just click around — write actual tests. The MCP SDK lets you create an in-memory client-server pair:
// src/server.test.ts
import { Client } from '@modelcontextprotocol/sdk/client/index.js'
import { InMemoryTransport } from '@modelcontextprotocol/sdk/inMemory.js'
import { createServer } from './server.js'
import { describe, it, expect, beforeAll } from 'vitest'
describe('MCP Postgres Server', () => {
let client: Client
beforeAll(async () => {
const server = createServer()
const [clientTransport, serverTransport] =
InMemoryTransport.createLinkedPair()
await server.connect(serverTransport)
client = new Client({ name: 'test-client', version: '1.0.0' })
await client.connect(clientTransport)
})
it('should list available tools', async () => {
const { tools } = await client.listTools()
const toolNames = tools.map((t) => t.name)
expect(toolNames).toContain('query')
expect(toolNames).toContain('insert')
expect(toolNames).toContain('list-tables')
})
it('should expose the schema resource', async () => {
const { resources } = await client.listResources()
expect(resources.some((r) => r.uri === 'db://schema')).toBe(true)
})
it('should reject non-SELECT queries', async () => {
const result = await client.callTool({
name: 'query',
arguments: { sql: 'DROP TABLE users' },
})
expect(result.isError).toBe(true)
const text = (result.content as Array<{ type: string; text: string }>)[0]
.text
expect(text).toContain('Only SELECT queries are allowed')
})
})
The InMemoryTransport.createLinkedPair() approach is great — no network, no subprocess, just direct in-process communication. Your tests run in milliseconds.
Adding authentication for remote deployments
If you're running the HTTP transport on a server, you need auth. Here's a simple Bearer token approach:
// src/auth.ts
import type { IncomingMessage } from 'node:http'

export function validateToken(req: IncomingMessage): boolean {
  const auth = req.headers.authorization
  if (!auth || !auth.startsWith('Bearer ')) {
    return false
  }
  const token = auth.slice(7)
  // Fail closed if no token is configured
  return Boolean(process.env.MCP_AUTH_TOKEN) && token === process.env.MCP_AUTH_TOKEN
}
Then wrap the HTTP handler:
// in src/index.ts, inside startHttpTransport
const httpServer = http.createServer(async (req, res) => {
if (req.url !== '/mcp') {
res.writeHead(404)
res.end('Not found')
return
}
if (!validateToken(req)) {
res.writeHead(401, { 'Content-Type': 'application/json' })
res.end(JSON.stringify({ error: 'Unauthorized' }))
return
}
await transport.handleRequest(req, res)
})
For production, you'd want proper OAuth or API key management. But for internal tools, a shared secret in an environment variable works fine.
Common pitfalls
stdout corruption: Any console.log in stdio mode will break the protocol. Use console.error or a file-based logger. This will cost you an hour of debugging if you forget.
Tool descriptions matter more than you think. The LLM reads them to decide which tool to call and how. "Run SQL" is worse than "Run a read-only SQL query against the database and return results as JSON." Be specific about what the tool does, what it accepts, and what it returns.
Long-running queries: If a query takes 30+ seconds, some clients will time out. Add a statement_timeout to your PostgreSQL connection config:
// src/db.ts
const pool = new pg.Pool({
connectionString: process.env.DATABASE_URL,
statement_timeout: 10000, // 10 seconds
})
Don't expose write tools carelessly. Our insert tool is fine for demo purposes, but in production you might want the LLM to propose changes that a human approves. MCP supports this through the client UI (Claude Desktop shows tool calls before executing), but think about what happens if someone connects a fully autonomous client.
What's next
This server exposes a database, but MCP can wrap anything — REST APIs, file systems, Kubernetes clusters, monitoring dashboards. The pattern is always the same: define tools with Zod schemas, return structured results, let the LLM figure out how to use them.
If you want to take this further, check out our post on building a multi-step AI agent with tool use in TypeScript — the agent patterns there combine naturally with MCP servers. And keep an eye out for our upcoming post on structured output with Zod, where we'll cover how to force reliable JSON from any LLM call — a technique that pairs well with MCP tool responses.