Vercel AI SDK Integration

Overview

The Vercel AI SDK works with Captain by passing context through a custom HTTP header.

Important: with the Vercel AI SDK, context is sent in the custom X-Captain-Context header and must be base64-encoded first (HTTP headers cannot contain newlines).

⚠️ Context Size Limitation: HTTP headers have size limits (typically 4-8KB). For larger contexts, see Large Contexts below.

Installation

npm install @ai-sdk/openai ai

Basic Example

import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

const context = `
Company Policies:
- Vacation: 20 days per year
- Remote work: 3 days per week
`;

// Base64 encode the context for header transmission (headers can't contain newlines)
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,  // Base64 encoded context
  },
});

const { textStream } = await streamText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: 'What is the vacation policy?' }
  ],
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}

Why base64 encoding? HTTP headers cannot contain newlines or special characters, so context must be base64-encoded before being sent in the X-Captain-Context header.
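
Because headers cap out at a few kilobytes, it can help to wrap the encoding in a small guard. A minimal sketch, assuming a conservative 4KB budget (the helper name and threshold are ours; actual limits vary by server):

const HEADER_LIMIT_BYTES = 4 * 1024; // assumed conservative header budget

function encodeContextForHeader(context: string): string {
  // Base64 encode so the header contains no newlines or special characters
  const encoded = Buffer.from(context, 'utf-8').toString('base64');
  if (encoded.length > HEADER_LIMIT_BYTES) {
    console.warn(
      `Encoded context is ${encoded.length} bytes; headers over ~4KB may be rejected. ` +
        'See Large Contexts below for the request-body approach.'
    );
  }
  return encoded;
}

Use it in place of the inline Buffer call above: 'X-Captain-Context': encodeContextForHeader(context).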

Non-Streaming

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const context = `Your policy text here`;
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: "What's the policy?" }
  ],
});

console.log(text);

File-Based Context

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { readFileSync } from 'fs';

// Load a document from disk (after base64 encoding it must still fit
// within HTTP header size limits; see Large Contexts below)
const largeDocument = readFileSync('large-file.txt', 'utf-8');

// Base64 encode for header
const contextBase64 = Buffer.from(largeDocument).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: 'Summarize the key findings' }
  ],
});

console.log(text);

Tool Calling

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

const tools = {
  get_inventory: {
    description: 'Get current inventory levels',
    parameters: z.object({
      product_id: z.string().describe('Product ID'),
    }),
    // Stub implementation; replace with a real inventory lookup
    execute: async ({ product_id }) => ({
      stock: 45,
      location: 'Warehouse A'
    }),
  },
};

const context = `Product Catalog: SKU-001, SKU-002, SKU-003`;
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: "What's the inventory for SKU-001?" }
  ],
  tools,
  maxSteps: 5,
});

console.log(text);

Telemetry (Experimental)

Captain supports Vercel AI SDK's experimental_telemetry for tracking token usage, performance metrics, and request traces.

import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
  },
});

const result = await streamText({
  model: captain.chat('captain-voyager-latest'),
  prompt: 'Analyze these server logs',
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: true,
    recordOutputs: true,
    functionId: 'log-analyzer',
    metadata: {
      userId: 'user-123',
      sessionId: 'session-abc'
    }
  },
  experimental_providerOptions: {
    openai: {
      extra_body: {
        captain: {
          context: '... your large context ...'
        }
      }
    }
  }
});

What's Tracked:

- Input/output token counts
- Time to first chunk (streaming)
- Total generation time
- Throughput (tokens/second)
- Custom function ID and metadata
- OpenTelemetry span export

See Telemetry for full documentation.
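
Note that experimental_telemetry emits OpenTelemetry spans, so a tracer must be registered in your application. A minimal sketch for a Next.js project using @vercel/otel (the service name is a placeholder):

// instrumentation.ts (project root) — registers a tracer so telemetry spans are exported
import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({ serviceName: 'captain-app' });
}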


Error Handling

try {
  const { text } = await generateText({
    model: captain.chat('captain-voyager-latest'),
    messages: [{ role: 'user', content: 'What is the vacation policy?' }],
  });
} catch (error) {
  console.error('API error:', error);
}
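
For finer-grained handling, the AI SDK exports typed errors such as APICallError. A sketch (the logging choices are ours; captain is the provider created above):

import { generateText, APICallError } from 'ai';

try {
  const { text } = await generateText({
    model: captain.chat('captain-voyager-latest'),
    messages: [{ role: 'user', content: 'What is the vacation policy?' }],
  });
  console.log(text);
} catch (error) {
  if (APICallError.isInstance(error)) {
    // Failed HTTP call: inspect status code and response body
    console.error('Captain API error:', error.statusCode, error.responseBody);
  } else {
    throw error;
  }
}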

Large Contexts

For contexts larger than ~4KB after base64 encoding, the header approach may fail due to HTTP header size limits.

Options for large contexts:

- Use the OpenAI SDK with extra_body (recommended; see the sketch below)
- Use the file upload endpoint (see API Reference)
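
If you are not using the OpenAI SDK, the same payload can be sent directly over HTTP. A minimal sketch with fetch; the captain.context body field mirrors the extra_body shape shown in the Telemetry example above, and the exact body semantics are an assumption — check the API Reference:

import { readFileSync } from 'fs';

// Load a large document and send it in the request body, not a header
const largeDocument = readFileSync('large-file.txt', 'utf-8');

const response = await fetch('https://api.runcaptain.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.CAPTAIN_API_KEY}`,
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID ?? '',
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    model: 'captain-voyager-latest',
    messages: [{ role: 'user', content: 'Summarize the key findings' }],
    // Same shape the OpenAI SDK would send via extra_body
    captain: { context: largeDocument },
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content);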

Framework Compatibility

Works with Next.js, React, SvelteKit, Vue, Node.js, Deno, and Bun.
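
As an illustration, in a Next.js App Router project the provider can live in a route handler. A sketch (the route path and the request shape are assumptions):

// app/api/chat/route.ts (Next.js App Router)
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  // Assumed request shape: { messages, context }
  const { messages, context } = await req.json();

  const captain = createOpenAI({
    apiKey: process.env.CAPTAIN_API_KEY,
    baseURL: 'https://api.runcaptain.com/v1',
    headers: {
      'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
      'X-Captain-Context': Buffer.from(context).toString('base64'),
    },
  });

  const result = await streamText({
    model: captain.chat('captain-voyager-latest'),
    messages,
  });

  // Stream plain text back to the client
  return result.toTextStreamResponse();
}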

