Build a Slack Bot That Answers Questions About Your Codebase

Monday 02/03/2026

11 min read

Your team asks the same questions every sprint: "Where does the billing logic live?" "How does the webhook retry system work?" "Who wrote this middleware and why?" These answers exist in the code, but finding them means grepping through hundreds of files or pinging the one senior dev who's been around since day one.

What if you could just ask a Slack bot and get a real answer, grounded in your actual source code? That's what we're building — a Slack bot that indexes your repository, creates embeddings from your codebase, and uses Claude to answer natural language questions about your code. The whole thing runs in about 300 lines of TypeScript.

How it works: the architecture

The bot has three parts:

  1. Indexer — walks your repo, chunks source files, and creates embeddings stored in Supabase pgvector
  2. Query engine — takes a question, finds the most relevant code chunks via similarity search, and sends them to Claude as context
  3. Slack integration — listens for mentions, passes questions to the query engine, and posts answers back
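
To make the hand-off between the three parts concrete, here's the whole flow condensed into a stubbed sketch. Everything in it is illustrative — made-up data, two-dimensional "embeddings", and a template string standing in for Claude — just to show how the pieces connect:

```typescript
// Illustrative end-to-end flow with stand-ins for each part.
type Chunk = { filePath: string; content: string; embedding: number[] }

// Part 1 (indexer): pretend the repo is already chunked and embedded.
const store: Chunk[] = [
  { filePath: 'src/billing/charge.ts', content: 'export function charge() {}', embedding: [1, 0] },
  { filePath: 'src/auth/session.ts', content: 'export function getSession() {}', embedding: [0, 1] },
]

// Part 2 (query engine): embed the question, rank chunks by similarity,
// hand the best matches to the LLM (stubbed as a template string here).
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, v, i) => sum + v * b[i], 0)
}

function answer(questionEmbedding: number[]): string {
  const best = [...store].sort(
    (a, b) => dot(b.embedding, questionEmbedding) - dot(a.embedding, questionEmbedding)
  )[0]
  return `See ${best.filePath}` // the real version sends the chunks to Claude
}

// Part 3 (Slack): a mention handler calls answer() and posts the reply.
answer([0.9, 0.1]) // → 'See src/billing/charge.ts'
```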

Let's build each piece.

Setting up the project

mkdir codebase-bot && cd codebase-bot
pnpm init
pnpm add @anthropic-ai/sdk @slack/bolt @supabase/supabase-js openai glob
pnpm add -D typescript @types/node tsx
npx tsc --init

You'll need these environment variables:

# .env
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...           # for embeddings
SUPABASE_URL=https://xxx.supabase.co
SUPABASE_SERVICE_KEY=eyJ...
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...        # for Socket Mode
SLACK_SIGNING_SECRET=...
REPO_PATH=/path/to/your/repo

Why OpenAI for embeddings and Claude for answering? OpenAI's text-embedding-3-small is cheap ($0.02/1M tokens), and its 1536-dimension vectors map directly onto the vector(1536) column in the schema below. Claude is better at reasoning over code. Use the best tool for each job.
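
If you haven't worked with embeddings before, the "similarity" everything below relies on is cosine similarity between vectors. Here's the math on toy 3-dimensional vectors (real embeddings have 1536 dimensions):

```typescript
// Cosine similarity: 1 means same direction, 0 means unrelated.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

cosineSimilarity([1, 2, 3], [1, 2, 3]) // ≈ 1 (identical)
cosineSimilarity([1, 0, 0], [0, 1, 0]) // → 0 (orthogonal)
```

Note that pgvector's `<=>` operator computes cosine *distance* (1 minus similarity), which is why the SQL in the next step flips it back with `1 - (embedding <=> query_embedding)`.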

Step 1: The database schema

Create this table in your Supabase SQL editor:

-- supabase/migrations/001_code_chunks.sql
create extension if not exists vector;

create table code_chunks (
  id uuid primary key default gen_random_uuid(),
  file_path text not null,
  chunk_index int not null,
  content text not null,
  embedding vector(1536),
  language text,
  repo text not null,
  indexed_at timestamptz default now(),
  unique(repo, file_path, chunk_index)
);

create index on code_chunks
  using ivfflat (embedding vector_cosine_ops)
  with (lists = 100);

create or replace function match_code_chunks(
  query_embedding vector(1536),
  match_threshold float,
  match_count int,
  target_repo text
)
returns table (
  id uuid,
  file_path text,
  content text,
  similarity float
)
language sql stable
as $$
  select
    id,
    file_path,
    content,
    1 - (embedding <=> query_embedding) as similarity
  from code_chunks
  where
    repo = target_repo
    and 1 - (embedding <=> query_embedding) > match_threshold
  order by embedding <=> query_embedding
  limit match_count;
$$;

Step 2: The codebase indexer

This is the part that reads your source files, splits them into chunks, generates embeddings, and stores everything in the database.

// src/indexer.ts
import fs from 'fs'
import path from 'path'
import { glob } from 'glob'
import OpenAI from 'openai'
import { createClient } from '@supabase/supabase-js'

const openai = new OpenAI()
const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_KEY!
)

interface CodeChunk {
    filePath: string
    chunkIndex: number
    content: string
    language: string
}

const LANGUAGE_MAP: Record<string, string> = {
    '.ts': 'typescript',
    '.tsx': 'typescript',
    '.js': 'javascript',
    '.jsx': 'javascript',
    '.py': 'python',
    '.go': 'go',
    '.rs': 'rust',
    '.md': 'markdown',
}

function chunkFile(filePath: string, content: string, maxChunkSize = 1500): CodeChunk[] {
    const ext = path.extname(filePath)
    const language = LANGUAGE_MAP[ext] || 'text'
    const lines = content.split('\n')
    const chunks: CodeChunk[] = []
    let currentChunk = ''
    let chunkIndex = 0

    for (const line of lines) {
        if (currentChunk.length + line.length > maxChunkSize && currentChunk.length > 0) {
            chunks.push({
                filePath,
                chunkIndex,
                content: `// File: ${filePath}\n${currentChunk}`,
                language,
            })
            chunkIndex++
            currentChunk = ''
        }
        currentChunk += line + '\n'
    }

    if (currentChunk.trim()) {
        chunks.push({
            filePath,
            chunkIndex,
            content: `// File: ${filePath}\n${currentChunk}`,
            language,
        })
    }

    return chunks
}

async function getEmbeddings(texts: string[]): Promise<number[][]> {
    const response = await openai.embeddings.create({
        model: 'text-embedding-3-small',
        input: texts,
    })
    return response.data.map((d) => d.embedding)
}

export async function indexRepository(repoPath: string, repoName: string): Promise<number> {
    const files = await glob('**/*.{ts,tsx,js,jsx,py,go,rs,md}', {
        cwd: repoPath,
        ignore: [
            '**/node_modules/**',
            '**/dist/**',
            '**/build/**',
            '**/.next/**',
            '**/coverage/**',
            '**/*.min.*',
            '**/package-lock.json',
            '**/pnpm-lock.yaml',
        ],
    })

    console.log(`Found ${files.length} files to index`)

    // Clear old entries for this repo
    await supabase.from('code_chunks').delete().eq('repo', repoName)

    let totalChunks = 0
    const BATCH_SIZE = 20

    for (let i = 0; i < files.length; i += BATCH_SIZE) {
        const batch = files.slice(i, i + BATCH_SIZE)
        const allChunks: CodeChunk[] = []

        for (const file of batch) {
            const fullPath = path.join(repoPath, file)
            const content = fs.readFileSync(fullPath, 'utf-8')

            // Skip huge files — they're usually generated
            if (content.length > 50_000) {
                console.log(`Skipping ${file} (too large)`)
                continue
            }

            allChunks.push(...chunkFile(file, content))
        }

        if (allChunks.length === 0) continue

        const embeddings = await getEmbeddings(allChunks.map((c) => c.content))

        const rows = allChunks.map((chunk, idx) => ({
            file_path: chunk.filePath,
            chunk_index: chunk.chunkIndex,
            content: chunk.content,
            embedding: embeddings[idx],
            language: chunk.language,
            repo: repoName,
        }))

        const { error } = await supabase.from('code_chunks').upsert(rows, {
            onConflict: 'repo,file_path,chunk_index',
        })

        if (error) {
            console.error(`Failed to insert batch: ${error.message}`)
        }

        totalChunks += allChunks.length
        console.log(`Indexed ${totalChunks} chunks (${i + batch.length}/${files.length} files)`)
    }

    return totalChunks
}

A few things to note here:

  • Chunk size of 1500 characters is a sweet spot. Too small and you lose context. Too large and your similarity search gets noisy because each chunk covers too many topics.
  • The file path is prepended to each chunk (// File: src/auth/middleware.ts). This helps Claude reference specific files in its answer.
  • Batch embedding calls instead of one-at-a-time. OpenAI's embedding API accepts arrays, and batching cuts your indexing time by 10-20x.
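
To make the chunking behavior visible, here's the same chunker restated in miniature, run with a tiny maxChunkSize on a made-up sample so the splits are easy to see:

```typescript
// Minimal restatement of chunkFile, runnable on its own.
interface Chunk { filePath: string; chunkIndex: number; content: string }

function chunkFile(filePath: string, content: string, maxChunkSize = 1500): Chunk[] {
  const chunks: Chunk[] = []
  let current = ''
  let chunkIndex = 0
  for (const line of content.split('\n')) {
    if (current.length + line.length > maxChunkSize && current.length > 0) {
      chunks.push({ filePath, chunkIndex, content: `// File: ${filePath}\n${current}` })
      chunkIndex++
      current = ''
    }
    current += line + '\n'
  }
  if (current.trim()) {
    chunks.push({ filePath, chunkIndex, content: `// File: ${filePath}\n${current}` })
  }
  return chunks
}

// Ten 15-character lines with a 60-character budget → 4 chunks,
// each carrying the file path so Claude can cite it.
const sample = Array.from({ length: 10 }, (_, i) => `const line${i} = ${i}`).join('\n')
const chunks = chunkFile('src/example.ts', sample, 60)
```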

Running the indexer

// src/run-indexer.ts
import 'dotenv/config'
import { indexRepository } from './indexer'

const repoPath = process.env.REPO_PATH!
const repoName = repoPath.split('/').pop()!

console.log(`Indexing ${repoName} from ${repoPath}...`)

indexRepository(repoPath, repoName)
    .then((count) => {
        console.log(`Done! Indexed ${count} chunks.`)
        process.exit(0)
    })
    .catch((err) => {
        console.error('Indexing failed:', err)
        process.exit(1)
    })

npx tsx src/run-indexer.ts

For a medium-sized repo (500 files), this takes about 2-3 minutes and costs roughly $0.05 in embedding API calls. Re-index weekly via a cron job or CI trigger.
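
If you'd rather schedule it on a server than in CI, a crontab entry does the job. A sketch, assuming the project lives at /path/to/codebase-bot with its .env alongside (both paths are placeholders):

```shell
# Hypothetical crontab entry: re-index every Monday at 03:00.
# run-indexer loads .env from the working directory, hence the cd.
0 3 * * 1 cd /path/to/codebase-bot && npx tsx src/run-indexer.ts >> /tmp/codebase-index.log 2>&1
```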

Step 3: The query engine

This is the brain — it takes a natural language question, finds relevant code via embeddings, and asks Claude to synthesize an answer.

// src/query-engine.ts
import Anthropic from '@anthropic-ai/sdk'
import OpenAI from 'openai'
import { createClient } from '@supabase/supabase-js'

const anthropic = new Anthropic()
const openai = new OpenAI()
const supabase = createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_KEY!
)

interface QueryResult {
    answer: string
    sources: string[]
}

async function findRelevantCode(
    question: string,
    repoName: string,
    maxChunks = 8
): Promise<{ content: string; filePath: string; similarity: number }[]> {
    const embeddingResponse = await openai.embeddings.create({
        model: 'text-embedding-3-small',
        input: question,
    })

    const queryEmbedding = embeddingResponse.data[0].embedding

    const { data, error } = await supabase.rpc('match_code_chunks', {
        query_embedding: queryEmbedding,
        match_threshold: 0.3,
        match_count: maxChunks,
        target_repo: repoName,
    })

    if (error) {
        throw new Error(`Similarity search failed: ${error.message}`)
    }

    return (data || []).map((row: { content: string; file_path: string; similarity: number }) => ({
        content: row.content,
        filePath: row.file_path,
        similarity: row.similarity,
    }))
}

export async function queryCodebase(
    question: string,
    repoName: string
): Promise<QueryResult> {
    const relevantChunks = await findRelevantCode(question, repoName)

    if (relevantChunks.length === 0) {
        return {
            answer: "I couldn't find any relevant code for that question. Try rephrasing, or the code might be in a file type I haven't indexed.",
            sources: [],
        }
    }

    const codeContext = relevantChunks
        .map((chunk, i) => `--- Chunk ${i + 1} (${chunk.filePath}, similarity: ${chunk.similarity.toFixed(3)}) ---\n${chunk.content}`)
        .join('\n\n')

    const response = await anthropic.messages.create({
        model: 'claude-sonnet-4-20250514',
        max_tokens: 1500,
        system: `You are a codebase assistant. You answer questions about a codebase using the provided code snippets as context.

Rules:
- Reference specific file paths when explaining where code lives
- If you're not sure about something, say so — don't make up code that isn't in the context
- Keep answers concise but complete
- Use code blocks with the correct language for any code you reference
- If the question can't be answered from the provided context, say so`,
        messages: [
            {
                role: 'user',
                content: `Question: ${question}\n\nRelevant code from the repository:\n\n${codeContext}`,
            },
        ],
    })

    const answer = response.content[0].type === 'text' ? response.content[0].text : ''
    const sources = [...new Set(relevantChunks.map((c) => c.filePath))]

    return { answer, sources }
}

Gotcha: the similarity threshold. I'm using 0.3 here, which is much lower than you'd use for caching (where I recommended 0.92 in my post on caching AI responses). For code search, you want to cast a wide net and let Claude figure out what's relevant. A threshold of 0.3 means "loosely related" — which is fine because we're sending 8 chunks and Claude can ignore the irrelevant ones.
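
To see what the threshold changes in practice, here's a toy comparison with made-up similarity scores:

```typescript
// Hypothetical search results for "where is the auth logic?"
const hits = [
  { file: 'src/auth/middleware.ts', similarity: 0.71 },
  { file: 'src/auth/session.ts', similarity: 0.48 },
  { file: 'src/billing/invoice.ts', similarity: 0.34 },
  { file: 'README.md', similarity: 0.21 },
]

// Retrieval threshold (0.3): wide net, Claude discards the noise.
const forRetrieval = hits.filter((h) => h.similarity > 0.3) // 3 chunks

// Cache-style threshold (0.92): only near-duplicates survive.
const forCacheHit = hits.filter((h) => h.similarity > 0.92) // 0 chunks
```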

Step 4: The Slack bot

Now let's wire it up to Slack. We're using Bolt (Slack's official framework) in Socket Mode — no need to expose a public URL.

First, create a Slack app at api.slack.com/apps:

  1. Enable Socket Mode (you'll get the SLACK_APP_TOKEN)
  2. Add the app_mentions:read and chat:write bot scopes under OAuth & Permissions
  3. Subscribe to the app_mention event under Event Subscriptions
  4. Install the app to your workspace

// src/bot.ts
import 'dotenv/config'
import { App } from '@slack/bolt'
import { queryCodebase } from './query-engine'

const app = new App({
    token: process.env.SLACK_BOT_TOKEN!,
    appToken: process.env.SLACK_APP_TOKEN!,
    signingSecret: process.env.SLACK_SIGNING_SECRET!,
    socketMode: true,
})

const repoName = process.env.REPO_PATH!.split('/').pop()!

app.event('app_mention', async ({ event, say }) => {
    // Strip the bot mention from the message to get the question
    const question = event.text.replace(/<@[A-Z0-9]+>/g, '').trim()

    if (!question) {
        await say({
            text: 'Ask me anything about the codebase! For example: "Where is the authentication middleware?"',
            thread_ts: event.ts,
        })
        return
    }

    // Post a "thinking" message
    const thinkingMsg = await say({
        text: ':mag: Looking through the codebase...',
        thread_ts: event.ts,
    })

    try {
        const result = await queryCodebase(question, repoName)

        const sourcesText =
            result.sources.length > 0
                ? `\n\n:file_folder: *Sources:* ${result.sources.map((s) => `\`${s}\``).join(', ')}`
                : ''

        await app.client.chat.update({
            channel: event.channel,
            ts: thinkingMsg.ts!,
            text: `${result.answer}${sourcesText}`,
        })
    } catch (error) {
        const errMessage = error instanceof Error ? error.message : 'Unknown error'
        console.error('Query failed:', errMessage)

        await app.client.chat.update({
            channel: event.channel,
            ts: thinkingMsg.ts!,
            text: `:warning: Something went wrong: ${errMessage}`,
        })
    }
})

async function start(): Promise<void> {
    await app.start()
    console.log(`Codebase bot is running! Repo: ${repoName}`)
}

start().catch((err) => {
    console.error('Failed to start bot:', err)
    process.exit(1)
})

npx tsx src/bot.ts

Now mention your bot in any Slack channel: @CodebaseBot where is the auth logic? — and it'll search through your indexed repository and answer with references to specific files.

Improving answer quality

The basic version works, but here are three tweaks that made a big difference in my team's setup:

1. Add file-level summaries

When indexing, generate a one-line summary of each file using a fast model. Store it alongside the chunks. This helps when someone asks a broad question like "how does billing work?" — the file summaries match better than individual code chunks.

// src/summarize-file.ts
import Anthropic from '@anthropic-ai/sdk'

const anthropic = new Anthropic()

export async function summarizeFile(
    filePath: string,
    content: string
): Promise<string> {
    const response = await anthropic.messages.create({
        model: 'claude-haiku-4-5-20251001',
        max_tokens: 100,
        messages: [
            {
                role: 'user',
                content: `Summarize what this file does in one sentence. File: ${filePath}\n\n${content.slice(0, 3000)}`,
            },
        ],
    })

    return response.content[0].type === 'text' ? response.content[0].text : ''
}

2. Smarter chunking at function boundaries

Instead of splitting files at arbitrary character limits, split at function or class boundaries. This keeps logical units together:

// src/smart-chunker.ts
function chunkByFunctions(content: string, filePath: string): string[] {
    // Match function/class/method declarations
    const boundaries =
        /^(?:export\s+)?(?:async\s+)?(?:function|class|const\s+\w+\s*=\s*(?:async\s*)?\(|interface|type\s+\w+\s*=)/gm
    const chunks: string[] = []
    let lastIndex = 0
    let match: RegExpExecArray | null

    while ((match = boundaries.exec(content)) !== null) {
        if (match.index > lastIndex && match.index - lastIndex > 100) {
            chunks.push(content.slice(lastIndex, match.index))
            lastIndex = match.index
        }
    }

    if (lastIndex < content.length) {
        chunks.push(content.slice(lastIndex))
    }

    return chunks.map((c) => `// File: ${filePath}\n${c}`)
}
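
A quick sanity check that functions actually stay intact. The chunker is copied here so the snippet runs standalone, fed a made-up two-function file (the first gap has to exceed the 100-character minimum before a split happens):

```typescript
// Same chunkByFunctions as above, restated so this snippet is self-contained.
function chunkByFunctions(content: string, filePath: string): string[] {
  const boundaries =
    /^(?:export\s+)?(?:async\s+)?(?:function|class|const\s+\w+\s*=\s*(?:async\s*)?\(|interface|type\s+\w+\s*=)/gm
  const chunks: string[] = []
  let lastIndex = 0
  let match: RegExpExecArray | null

  while ((match = boundaries.exec(content)) !== null) {
    if (match.index > lastIndex && match.index - lastIndex > 100) {
      chunks.push(content.slice(lastIndex, match.index))
      lastIndex = match.index
    }
  }

  if (lastIndex < content.length) {
    chunks.push(content.slice(lastIndex))
  }

  return chunks.map((c) => `// File: ${filePath}\n${c}`)
}

const sample = [
  'export function add(a: number, b: number): number {',
  '  // long enough that the 100-character minimum chunk size is exceeded',
  '  return a + b',
  '}',
  '',
  'export function sub(a: number, b: number): number {',
  '  return a - b',
  '}',
].join('\n')

// Two chunks: add() and sub() each arrive whole, never split mid-function.
const chunks = chunkByFunctions(sample, 'src/math.ts')
```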

3. Re-index on push

Add a GitHub Action that re-indexes whenever code is pushed to main:

# .github/workflows/index-codebase.yml
name: Index Codebase
on:
  push:
    branches: [main]

jobs:
  index:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: pnpm/action-setup@v2
        with:
          version: 9
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: pnpm install
      - run: npx tsx src/run-indexer.ts
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          SUPABASE_URL: ${{ secrets.SUPABASE_URL }}
          SUPABASE_SERVICE_KEY: ${{ secrets.SUPABASE_SERVICE_KEY }}
          REPO_PATH: ${{ github.workspace }}

What it costs

For a 500-file TypeScript repo:

  • Indexing: ~2,000 chunks × ~200 tokens each = 400K tokens → $0.008 per full re-index
  • Per question: 1 embedding lookup ($0.000001) + Claude Sonnet with ~3K context tokens ($0.009 input + ~$0.02 output) → ~$0.03 per question
  • Monthly estimate: 50 questions/day × 22 workdays = ~$33/month

That's less than half a developer-hour of salary. And you're not interrupting anyone's flow with "hey, where's the..." messages anymore.
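
If your usage looks different, the estimate is trivial to recompute. Same numbers as the bullets above:

```typescript
// Monthly cost estimate; swap in your own usage numbers.
const costPerQuestion = 0.03 // embedding lookup + Sonnet input + output, rounded
const questionsPerDay = 50
const workdaysPerMonth = 22

const monthlyCost = costPerQuestion * questionsPerDay * workdaysPerMonth
console.log(`~$${monthlyCost}/month`) // ~$33/month
```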

What's next

This bot works great for answering questions, but how do you test that the answers are actually correct? Non-deterministic AI outputs make traditional unit tests useless. Next up: How to Test AI Features: Unit Testing LLM-Powered Code — where I'll show you patterns for mocking, snapshot testing, and eval frameworks that actually work with fuzzy AI outputs.


Vadim Alakhverdov

Software developer writing about JavaScript, web development, and developer tools.
