Claude vs OpenAI API: A Practical Comparison for JavaScript Developers

Wednesday, February 11, 2026


You're starting a new project that needs an LLM, and you're staring at two tabs: the Anthropic docs and the OpenAI docs. Both APIs do roughly the same thing, but the code to call them is different enough that you can't just swap one for the other. Every comparison article you find reads like a press release — benchmarks, vibes, and zero code.

This is the comparison I wanted when I was choosing between the Claude API and the OpenAI API for a JavaScript project. Side-by-side TypeScript examples for the things you actually do: chat completions, streaming, tool use, and vision. Plus the pricing math so you can estimate real costs.

Setup and installation

Both SDKs follow the same pattern — install from npm, set an env var, instantiate a client.

# Anthropic
pnpm add @anthropic-ai/sdk

# OpenAI
pnpm add openai

// src/lib/clients.ts
import Anthropic from '@anthropic-ai/sdk'
import OpenAI from 'openai'

// Both auto-read from env vars if you don't pass apiKey
const claude = new Anthropic() // reads ANTHROPIC_API_KEY
const openai = new OpenAI() // reads OPENAI_API_KEY

That's the last time they'll look the same.

Basic chat completion

Here's where the API design philosophy diverges. Claude keeps the system prompt as a separate top-level parameter. OpenAI puts it in the messages array.

Claude:

// src/examples/claude-basic.ts
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic()

const message = await client.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 1024,
    system: 'You are a senior TypeScript developer.',
    messages: [{ role: 'user', content: 'Explain the difference between type and interface.' }],
})

// Response is an array of content blocks
const text = message.content[0].type === 'text' ? message.content[0].text : ''
console.log(text)

OpenAI:

// src/examples/openai-basic.ts
import OpenAI from 'openai'

const client = new OpenAI()

const completion = await client.chat.completions.create({
    model: 'gpt-4.1',
    messages: [
        { role: 'developer', content: 'You are a senior TypeScript developer.' },
        { role: 'user', content: 'Explain the difference between type and interface.' },
    ],
})

// Response is a plain string
const text = completion.choices[0].message.content
console.log(text)

Two things jump out immediately:

  1. Claude requires max_tokens. OpenAI makes it optional. Forget it with Claude and you get an error. This is actually nice — it forces you to think about cost per request instead of accidentally letting a model dump 4000 tokens when you only needed 200.

  2. Claude returns an array of content blocks, not a string. This feels annoying at first, but it makes more sense when you use tool calls — the response can contain both text and tool use blocks in a single message.
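
A safer way to pull text out of that array is to collect every text block rather than trusting content[0]. A minimal sketch:

// Join every text block; skips tool_use and any other block types
const text = message.content.flatMap((block) => (block.type === 'text' ? [block.text] : [])).join('')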

OpenAI recently renamed the system role to developer, starting with its reasoning models. The old system role still works, but developer is the recommended name going forward. Claude just uses a dedicated system parameter and avoids this altogether.

Streaming responses

Both SDKs support async iteration for streaming. The event shapes are different but the pattern is the same.

Claude:

// src/examples/claude-stream.ts
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic()

const stream = client.messages.stream({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Write a haiku about TypeScript.' }],
})

stream.on('text', (text) => {
    process.stdout.write(text)
})

const finalMessage = await stream.finalMessage()
console.log('\n\nTotal tokens:', finalMessage.usage.input_tokens + finalMessage.usage.output_tokens)

OpenAI:

// src/examples/openai-stream.ts
import OpenAI from 'openai'

const client = new OpenAI()

const stream = await client.chat.completions.create({
    model: 'gpt-4.1',
    messages: [{ role: 'user', content: 'Write a haiku about TypeScript.' }],
    stream: true,
})

for await (const chunk of stream) {
    const content = chunk.choices[0]?.delta?.content
    if (content) process.stdout.write(content)
}

Claude's .stream() helper gives you an event emitter with .on('text', ...) plus a .finalMessage() that resolves when the stream is done — handy for grabbing usage stats after streaming. OpenAI gives you a raw async iterator and you pull delta content from each chunk. Both work fine, but Claude's API is a bit more ergonomic here.

If you want the raw event stream from Claude (like OpenAI's approach), you can also use stream: true in messages.create() and iterate with for await:

// src/examples/claude-stream-raw.ts
const rawStream = await client.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 1024,
    messages: [{ role: 'user', content: 'Hello' }],
    stream: true,
})

for await (const event of rawStream) {
    if (event.type === 'content_block_delta' && event.delta.type === 'text_delta') {
        process.stdout.write(event.delta.text)
    }
}

Tool use (function calling)

This is where the two APIs differ the most. Same concept — you define functions the model can call, it returns structured data saying "I want to call this function with these arguments," you execute it and send the result back. But the shapes are annoyingly different.

Claude:

// src/examples/claude-tools.ts
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic()

const response = await client.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 1024,
    tools: [
        {
            name: 'get_stock_price',
            description: 'Get the current stock price for a given ticker symbol',
            input_schema: {
                type: 'object' as const,
                properties: {
                    ticker: { type: 'string', description: 'Stock ticker symbol, e.g. AAPL' },
                },
                required: ['ticker'],
            },
        },
    ],
    messages: [{ role: 'user', content: "What's Apple's stock price?" }],
})

// Find the tool use block in the response
const toolUse = response.content.find((block) => block.type === 'tool_use')
if (toolUse && toolUse.type === 'tool_use') {
    console.log(toolUse.name) // 'get_stock_price'
    console.log(toolUse.input) // { ticker: 'AAPL' } — already parsed!

    // Send the result back
    const result = await client.messages.create({
        model: 'claude-sonnet-4-5-20250929',
        max_tokens: 1024,
        tools: [
            /* same tools array */
        ],
        messages: [
            { role: 'user', content: "What's Apple's stock price?" },
            { role: 'assistant', content: response.content },
            {
                role: 'user',
                content: [
                    {
                        type: 'tool_result',
                        tool_use_id: toolUse.id,
                        content: JSON.stringify({ price: 237.5, currency: 'USD' }),
                    },
                ],
            },
        ],
    })
}

OpenAI:

// src/examples/openai-tools.ts
import OpenAI from 'openai'

const client = new OpenAI()

const completion = await client.chat.completions.create({
    model: 'gpt-4.1',
    tools: [
        {
            type: 'function',
            function: {
                name: 'get_stock_price',
                description: 'Get the current stock price for a given ticker symbol',
                parameters: {
                    type: 'object',
                    properties: {
                        ticker: { type: 'string', description: 'Stock ticker symbol, e.g. AAPL' },
                    },
                    required: ['ticker'],
                },
            },
        },
    ],
    messages: [{ role: 'user', content: "What's Apple's stock price?" }],
})

const toolCall = completion.choices[0].message.tool_calls?.[0]
if (toolCall) {
    console.log(toolCall.function.name) // 'get_stock_price'
    console.log(JSON.parse(toolCall.function.arguments)) // must parse the JSON string!

    // Send the result back
    const result = await client.chat.completions.create({
        model: 'gpt-4.1',
        tools: [
            /* same tools array */
        ],
        messages: [
            { role: 'user', content: "What's Apple's stock price?" },
            completion.choices[0].message,
            {
                role: 'tool',
                tool_call_id: toolCall.id,
                content: JSON.stringify({ price: 237.5, currency: 'USD' }),
            },
        ],
    })
}

Here are the key differences side by side:

| | Claude | OpenAI |
|---|---|---|
| Tool schema key | input_schema | function.parameters |
| Tool args format | Parsed object | JSON string (you parse it) |
| Tool result role | role: 'user' with type: 'tool_result' | role: 'tool' |
| Stop reason | stop_reason: 'tool_use' | finish_reason: 'tool_calls' |

The biggest gotcha: OpenAI returns tool arguments as a JSON string, not a parsed object. You need JSON.parse(toolCall.function.arguments) every time. Claude gives you a parsed object directly. This is a small thing but it's bitten me more than once — especially when streaming tool use, where OpenAI sends the arguments as partial JSON strings that you need to concatenate before parsing.
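
If you're streaming tool calls, here's a minimal sketch of that concatenate-then-parse dance, assuming a single tool call and the same tools array as the example above:

// src/examples/openai-tools-stream.ts
const stream = await client.chat.completions.create({
    model: 'gpt-4.1',
    tools, // the same get_stock_price tools array as above
    messages: [{ role: 'user', content: "What's Apple's stock price?" }],
    stream: true,
})

let args = ''
for await (const chunk of stream) {
    // Each chunk carries a fragment of the arguments JSON string
    for (const call of chunk.choices[0]?.delta?.tool_calls ?? []) {
        args += call.function?.arguments ?? ''
    }
}

console.log(JSON.parse(args)) // only valid JSON once the stream has finished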

Claude also has a built-in toolRunner() helper that automatically executes tools and manages the back-and-forth loop. OpenAI doesn't have an equivalent in the SDK — you'd need a library or build your own loop.
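
Building that loop yourself isn't much code, though. Here's a minimal sketch; runTool is a hypothetical dispatcher you'd point at your real implementations:

// src/examples/openai-tool-loop.ts
import OpenAI from 'openai'

const client = new OpenAI()

// Hypothetical dispatcher: map tool names to real implementations
async function runTool(name: string, args: unknown): Promise<string> {
    if (name === 'get_stock_price') return JSON.stringify({ price: 237.5, currency: 'USD' })
    throw new Error(`Unknown tool: ${name}`)
}

async function chatWithTools(
    messages: OpenAI.Chat.Completions.ChatCompletionMessageParam[],
    tools: OpenAI.Chat.Completions.ChatCompletionTool[],
) {
    while (true) {
        const completion = await client.chat.completions.create({ model: 'gpt-4.1', messages, tools })
        const message = completion.choices[0].message
        messages.push(message)

        // No tool calls means the model answered in plain text
        if (!message.tool_calls?.length) return message.content

        // Execute every requested tool and append the results for the next turn
        for (const call of message.tool_calls) {
            messages.push({
                role: 'tool',
                tool_call_id: call.id,
                content: await runTool(call.function.name, JSON.parse(call.function.arguments)),
            })
        }
    }
}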

Vision (image input)

Both APIs can process images. The content block format is different.

Claude:

// src/examples/claude-vision.ts
import Anthropic from '@anthropic-ai/sdk'
import fs from 'fs'

const client = new Anthropic()

const imageData = fs.readFileSync('screenshot.png').toString('base64')

const message = await client.messages.create({
    model: 'claude-sonnet-4-5-20250929',
    max_tokens: 1024,
    messages: [
        {
            role: 'user',
            content: [
                {
                    type: 'image',
                    source: {
                        type: 'base64',
                        media_type: 'image/png',
                        data: imageData,
                    },
                },
                { type: 'text', text: 'What does this UI look like? Any accessibility issues?' },
            ],
        },
    ],
})

OpenAI:

// src/examples/openai-vision.ts
import OpenAI from 'openai'
import fs from 'fs'

const client = new OpenAI()

const imageData = fs.readFileSync('screenshot.png').toString('base64')

const completion = await client.chat.completions.create({
    model: 'gpt-4.1',
    messages: [
        {
            role: 'user',
            content: [
                {
                    type: 'image_url',
                    image_url: {
                        url: `data:image/png;base64,${imageData}`,
                        detail: 'high',
                    },
                },
                { type: 'text', text: 'What does this UI look like? Any accessibility issues?' },
            ],
        },
    ],
})

Claude uses type: 'image' with a structured source object that has separate media_type and data fields. OpenAI uses type: 'image_url' and shoves the base64 data into a data URL string. OpenAI also gives you a detail parameter (low, high, auto) to control how much the model zooms in — Claude doesn't have this, it auto-optimizes.

Both support URL-based images too. Claude uses source: { type: 'url', url: '...' }, OpenAI uses image_url: { url: '...' }.
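
Side by side, with a placeholder URL:

// URL-based image blocks, side by side
const claudeImage = { type: 'image' as const, source: { type: 'url' as const, url: 'https://example.com/ui.png' } }
const openaiImage = { type: 'image_url' as const, image_url: { url: 'https://example.com/ui.png' } }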

Pricing comparison

Here's where things get practical. Prices as of February 2026 (per million tokens):

| Model | Input ($/M tokens) | Output ($/M tokens) | Best for |
|---|---|---|---|
| Claude Haiku 4.5 | $1.00 | $5.00 | High-volume, simple tasks |
| Claude Sonnet 4.5 | $3.00 | $15.00 | General-purpose, best price/performance |
| Claude Opus 4.6 | $5.00 | $25.00 | Complex reasoning, agentic tasks |
| GPT-4.1 nano | $0.10 | $0.40 | Cheapest option, classification tasks |
| GPT-4.1 mini | $0.40 | $1.60 | Budget general-purpose |
| GPT-4.1 | $2.00 | $8.00 | General-purpose |
| GPT-4o | $2.50 | $10.00 | Multimodal |

Let's do the math on a real scenario. Say you're building a chatbot that handles 10,000 conversations per day, averaging 2,000 input tokens and 500 output tokens per conversation.

Daily token usage: 20M input + 5M output

| Model | Daily cost (input + output) | Monthly cost (30 days) |
|---|---|---|
| Claude Sonnet 4.5 | $60 + $75 = $135 | $4,050 |
| Claude Haiku 4.5 | $20 + $25 = $45 | $1,350 |
| GPT-4.1 | $40 + $40 = $80 | $2,400 |
| GPT-4.1 mini | $8 + $8 = $16 | $480 |

GPT-4.1 mini is hard to beat on pure cost. But the quality gap matters — if your chatbot gives worse answers and users churn, you're not saving money. Test both with your actual prompts before deciding purely on price.
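
The arithmetic is simple enough to script if you want to plug in your own traffic numbers. A quick sketch using the prices from the table above:

// src/examples/cost-estimate.ts
// Prices per million tokens, from the pricing table above
const prices = {
    'claude-sonnet-4.5': { input: 3.0, output: 15.0 },
    'claude-haiku-4.5': { input: 1.0, output: 5.0 },
    'gpt-4.1': { input: 2.0, output: 8.0 },
    'gpt-4.1-mini': { input: 0.4, output: 1.6 },
} as const

function monthlyCost(model: keyof typeof prices, inputPerDay: number, outputPerDay: number, days = 30): number {
    const p = prices[model]
    return ((inputPerDay * p.input + outputPerDay * p.output) / 1_000_000) * days
}

console.log(monthlyCost('gpt-4.1-mini', 20_000_000, 5_000_000)) // 480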

Watch out for reasoning model costs

OpenAI's o-series models (o3, o4-mini) use hidden "reasoning tokens" that count as output tokens but don't appear in the response. A response that looks like 500 tokens might actually consume 2,000+ output tokens. This can blow your budget if you're not tracking it. Claude's extended thinking is more transparent — you explicitly opt into it and can see the thinking output.
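
You can see the gap in the usage object. A quick check, assuming the usage details fields the chat completions API exposes for reasoning models:

// src/examples/reasoning-usage.ts
const completion = await client.chat.completions.create({
    model: 'o4-mini',
    messages: [{ role: 'user', content: 'Plan a zero-downtime database migration.' }],
})

// completion_tokens includes the hidden reasoning; the details field breaks it out
console.log(completion.usage?.completion_tokens)
console.log(completion.usage?.completion_tokens_details?.reasoning_tokens)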

Which one should you pick?

Skip the benchmarks. Here's my opinionated take based on actually building with both:

Pick Claude if:

  • You're building agents with tool use — the parsed arguments and toolRunner() helper save real development time
  • You want a cleaner streaming API with event emitters
  • You prefer the system prompt as a separate parameter (better for prompt management)
  • You need long-context processing (200K standard, 1M in beta)

Pick OpenAI if:

  • Cost is your primary constraint — GPT-4.1 mini and nano are significantly cheaper than anything Anthropic offers
  • You need the broader ecosystem — more tutorials, more integrations, more community packages
  • You're already using OpenAI for embeddings (Anthropic doesn't have an embedding model)
  • You want the new Responses API's built-in conversation state management

Or use both. There's nothing stopping you from using Claude for complex reasoning tasks and GPT-4.1 mini for high-volume simple stuff. The SDKs install side by side. Build a thin wrapper that normalizes the message format and you can swap models per use case.
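
A minimal sketch of that wrapper, assuming single-turn text chat (the module path and model choices are just examples):

// src/lib/llm.ts
import Anthropic from '@anthropic-ai/sdk'
import OpenAI from 'openai'

const claude = new Anthropic()
const openai = new OpenAI()

type ChatInput = { system: string; user: string; maxTokens?: number }

export async function chat(provider: 'claude' | 'openai', { system, user, maxTokens = 1024 }: ChatInput): Promise<string> {
    if (provider === 'claude') {
        const message = await claude.messages.create({
            model: 'claude-sonnet-4-5-20250929',
            max_tokens: maxTokens,
            system,
            messages: [{ role: 'user', content: user }],
        })
        const block = message.content[0]
        return block.type === 'text' ? block.text : ''
    }
    const completion = await openai.chat.completions.create({
        model: 'gpt-4.1-mini',
        max_tokens: maxTokens,
        messages: [
            { role: 'developer', content: system },
            { role: 'user', content: user },
        ],
    })
    return completion.choices[0].message.content ?? ''
}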

What's next

Now that you know how both APIs work, the next step is using them in production without going broke. In an upcoming post, I'll cover how to add AI search to any website with embeddings and Supabase — building a "search your docs" feature with vector embeddings, pgvector, and a React frontend.


Vadim Alakhverdov

Software developer writing about JavaScript, web development, and developer tools.
