JavaScript/TypeScript SDK Integration

Three Ways to Use Captain with JavaScript/TypeScript

Captain works with all major JavaScript SDKs! Choose the approach that fits your needs:

  1. OpenAI SDK (⭐ Recommended): Standard extra_body parameter - most reliable
  2. Vercel AI SDK: Custom header with base64-encoded context
  3. Direct Fetch: Maximum control with direct HTTP requests

Method 1: OpenAI SDK with extra_body Parameter (Recommended)

The official OpenAI JavaScript SDK provides the most reliable Captain integration, using the standard extra_body parameter.

Installation

$ npm install openai

Basic Example

import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  defaultHeaders: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
  },
});

const context = `
Company Policies:
- Remote work: Allowed 3 days/week
- Vacation: 20 days per year
- Health insurance: Provided
`;

const response = await client.chat.completions.create({
  model: 'captain-voyager-latest',
  messages: [
    { role: 'user', content: "What's the remote work policy?" }
  ],
  extra_body: {
    captain: {
      context: context
    }
  },
});

console.log(response.choices[0].message.content);

Why recommended: Standard OpenAI SDK approach with reliable extra_body parameter support.

Streaming

const stream = await client.chat.completions.create({
  model: 'captain-voyager-latest',
  messages: [
    { role: 'user', content: "What's the policy?" }
  ],
  stream: true,
  extra_body: {
    captain: {
      context: "Policy text here"
    }
  },
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Large Context Processing

import { readFileSync } from 'fs';

// Load any size document - Captain automatically handles large contexts
const largeDocument = readFileSync('large-file.txt', 'utf-8');

const response = await client.chat.completions.create({
  model: 'captain-voyager-latest',
  messages: [
    { role: 'system', content: 'You are a research analysis assistant.' },
    { role: 'user', content: 'Summarize the key findings' }
  ],
  extra_body: {
    captain: {
      context: largeDocument
    }
  }
});

console.log(response.choices[0].message.content);

Tool Calling

const response = await client.chat.completions.create({
  model: 'captain-voyager-latest',
  messages: [
    { role: 'user', content: "What's the inventory for SKU-001?" }
  ],
  tools: [
    {
      type: 'function',
      function: {
        name: 'get_inventory',
        description: 'Get current inventory levels',
        parameters: {
          type: 'object',
          properties: {
            product_id: {
              type: 'string',
              description: 'Product ID'
            }
          },
          required: ['product_id']
        },
        strict: true
      }
    }
  ],
  extra_body: {
    captain: {
      context: 'Product Catalog: SKU-001, SKU-002, SKU-003'
    }
  },
});

console.log(response.choices[0].message.content);
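
When the model decides to use the tool, the returned message carries tool_calls and its content is typically empty, so the console.log above prints nothing until you complete the round trip. Below is a minimal sketch of that second step, following the standard OpenAI tool-calling flow (the hard-coded inventory result stands in for your real lookup):

const message = response.choices[0].message;

if (message.tool_calls?.length) {
  const toolCall = message.tool_calls[0];
  const args = JSON.parse(toolCall.function.arguments);

  // Illustrative stand-in for your real inventory lookup
  const result = { product_id: args.product_id, stock: 45 };

  const followUp = await client.chat.completions.create({
    model: 'captain-voyager-latest',
    messages: [
      { role: 'user', content: "What's the inventory for SKU-001?" },
      message, // the assistant turn containing the tool call
      {
        role: 'tool',
        tool_call_id: toolCall.id,
        content: JSON.stringify(result),
      },
    ],
    extra_body: {
      captain: {
        context: 'Product Catalog: SKU-001, SKU-002, SKU-003'
      }
    },
  });

  console.log(followUp.choices[0].message.content);
}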

Method 2: Vercel AI SDK with Custom Header

The Vercel AI SDK works with Captain using a custom header approach for context.

Important: the Vercel AI SDK requires context to be passed via the custom X-Captain-Context header, and the header value must be base64-encoded (HTTP headers cannot contain newlines).

⚠️ Context Size Limitation: HTTP headers have size limits (typically 4-8KB). For contexts larger than ~4KB after base64 encoding, use the OpenAI SDK (Method 1) or the upload endpoint described below.

Installation

$ npm install @ai-sdk/openai ai

Basic Example

import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

const context = `
Company Policies:
- Vacation: 20 days per year
- Remote work: 3 days per week
`;

// Base64 encode the context for header transmission (headers can't contain newlines)
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64, // Base64 encoded context
  },
});

const { textStream } = await streamText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: 'What is the vacation policy?' }
  ],
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}

Why base64 encoding? HTTP headers cannot contain newlines or special characters, so context must be base64-encoded before being sent in the X-Captain-Context header.
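
A small helper can centralize the encoding and guard against oversized headers. This is a sketch; the 4096-byte threshold is an assumption based on the ~4KB guidance above, not an exact server limit:

// Encode context for the X-Captain-Context header, with a size guard
function encodeContextHeader(context, maxEncodedBytes = 4096) {
  const encoded = Buffer.from(context, 'utf-8').toString('base64');
  if (encoded.length > maxEncodedBytes) {
    throw new Error(
      `Encoded context is ${encoded.length} bytes; use the upload endpoint instead`
    );
  }
  return encoded;
}

// Usage: headers: { 'X-Captain-Context': encodeContextHeader(context) }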

Large Context with Upload Endpoint

For contexts larger than ~4KB, use the /v1/chat/completions/upload endpoint with FormData:

const largeContext = `...your large document...`;

// Prepare FormData
const formData = new FormData();
const blob = new Blob([largeContext], { type: 'text/plain' });
formData.append('file', blob, 'context.txt');
formData.append('messages', JSON.stringify([
  { role: 'user', content: 'What are the main themes?' }
]));
formData.append('model', 'captain-voyager-latest');
formData.append('stream', 'true');

// Upload large context
const response = await fetch('https://api.runcaptain.com/v1/chat/completions/upload', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.CAPTAIN_API_KEY}`,
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
  },
  body: formData
});

// Parse the SSE stream (simplified: assumes each chunk contains whole lines)
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const chunk = decoder.decode(value);
  const lines = chunk.split('\n').filter(line => line.trim() !== '');

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      if (data === '[DONE]') break;

      try {
        const parsed = JSON.parse(data);
        const content = parsed.choices[0]?.delta?.content;
        if (content) process.stdout.write(content);
      } catch (e) {
        // Skip partial or non-JSON lines
      }
    }
  }
}

This approach bypasses HTTP header size limits and works with any size context.
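
If you want a single entry point that works for any size, a sketch like the following could choose the transport by encoded size (the askCaptain name and 4096-byte cutoff are illustrative; the endpoints and headers match the examples above):

async function askCaptain(context, messages) {
  const encoded = Buffer.from(context).toString('base64');
  const headers = {
    'Authorization': `Bearer ${process.env.CAPTAIN_API_KEY}`,
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
  };

  if (encoded.length <= 4096) {
    // Small context: send it in the header
    return fetch('https://api.runcaptain.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        ...headers,
        'Content-Type': 'application/json',
        'X-Captain-Context': encoded,
      },
      body: JSON.stringify({ model: 'captain-voyager-latest', messages }),
    });
  }

  // Large context: upload it as a file instead
  const formData = new FormData();
  formData.append('file', new Blob([context], { type: 'text/plain' }), 'context.txt');
  formData.append('messages', JSON.stringify(messages));
  formData.append('model', 'captain-voyager-latest');
  return fetch('https://api.runcaptain.com/v1/chat/completions/upload', {
    method: 'POST',
    headers,
    body: formData,
  });
}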

Non-Streaming

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const context = `Your policy text here`;
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: "What's the policy?" }
  ],
});

console.log(text);

Large Context Processing

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { readFileSync } from 'fs';

// Load the document (keep it under ~4KB after base64 encoding;
// for larger files, use the upload endpoint shown above)
const largeDocument = readFileSync('large-file.txt', 'utf-8');

// Base64 encode for header
const contextBase64 = Buffer.from(largeDocument).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: 'Summarize the key findings' }
  ],
});

console.log(text);

Tool Calling

import { generateText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
import { z } from 'zod';

const tools = {
  get_inventory: {
    description: 'Get current inventory levels',
    parameters: z.object({
      product_id: z.string().describe('Product ID'),
    }),
    execute: async ({ product_id }) => ({
      stock: 45,
      location: 'Warehouse A'
    }),
  },
};

const context = `Product Catalog: SKU-001, SKU-002, SKU-003`;
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: "What's the inventory for SKU-001?" }
  ],
  tools,
  maxSteps: 5,
});

console.log(text);

Method 3: Direct Fetch with Captain Parameter

For maximum control, use direct HTTP requests with the captain parameter in the request body.

Basic Example

const API_KEY = 'cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';
const ORG_ID = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';

const context = `
Company Policies:
- Vacation: 20 days per year
- Remote work: 3 days per week
`;

const response = await fetch('https://api.runcaptain.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'X-Organization-ID': ORG_ID,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'captain-voyager-latest',
    messages: [
      { role: 'user', content: 'What is the vacation policy?' }
    ],
    captain: {
      context: context
    }
  })
});

const result = await response.json();
console.log(result.choices[0].message.content);

Streaming with Fetch

// largeDocument: document text loaded earlier (e.g. with readFileSync)
const response = await fetch('https://api.runcaptain.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'X-Organization-ID': ORG_ID,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'captain-voyager-latest',
    messages: [
      { role: 'user', content: 'Summarize this document' }
    ],
    stream: true,
    captain: {
      context: largeDocument
    }
  })
});

// Parse the SSE stream (simplified: assumes each chunk contains whole lines)
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const chunk = decoder.decode(value);
  const lines = chunk.split('\n').filter(line => line.trim() !== '');

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      if (data === '[DONE]') break;

      try {
        const parsed = JSON.parse(data);
        const content = parsed.choices[0]?.delta?.content;
        if (content) process.stdout.write(content);
      } catch (e) {
        // Skip partial or non-JSON lines
      }
    }
  }
}

Alternative: Using extra_body

You can also pass context via extra_body with direct fetch:

const response = await fetch('https://api.runcaptain.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'X-Organization-ID': ORG_ID,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'captain-voyager-latest',
    messages: [
      { role: 'user', content: 'What is the vacation policy?' }
    ],
    extra_body: {
      captain: {
        context: context
      }
    }
  })
});

Comparison

| Feature | OpenAI SDK ⭐ | Vercel AI SDK | Direct Fetch |
|---|---|---|---|
| Context Method | extra_body parameter | Base64 header OR upload endpoint | captain or extra_body in body |
| Setup Complexity | Standard OpenAI format | Base64 encoding OR FormData | Manual HTTP requests |
| Production Ready | ✅ Yes (Recommended) | ✅ Yes | ✅ Yes |
| Tool Calling | ✅ JSON schemas | ✅ Zod schemas | ✅ JSON schemas |
| Streaming | ✅ stream: true | ✅ streamText() OR manual SSE | ✅ Manual SSE parsing |
| Framework Integration | ✅ Any Node.js app | ✅ Next.js, React, etc. | ✅ Any HTTP client |
| Small Context (<4KB) | ✅ extra_body | ✅ Base64 header | ✅ In request body |
| Large Context (>4KB) | ✅ extra_body | /upload endpoint | /upload endpoint |
| Best For | Most projects | Vercel/Next.js apps | Custom implementations |

Best Practices

1. Choose the Right Method

  • Use OpenAI SDK (⭐ Recommended) for most projects - standard API, reliable extra_body support
  • Use Vercel AI SDK if you’re building Next.js/Vercel apps and want framework integration
  • Use Direct Fetch for custom implementations or when you need maximum control

2. Handle Context Properly

OpenAI SDK (Recommended):

// Standard extra_body parameter
extra_body: {
  captain: {
    context: yourContext
  }
}

Vercel AI SDK:

// Base64 encode for custom header
const contextBase64 = Buffer.from(yourContext).toString('base64');
headers: {
  'X-Captain-Context': contextBase64
}

Direct Fetch:

// Can use either the captain parameter or extra_body
body: JSON.stringify({
  messages: [...],
  captain: { context: yourContext }
  // OR
  // extra_body: { captain: { context: yourContext } }
})

3. Error Handling

try {
  const { text } = await generateText({
    model: captain.chat('captain-voyager-latest'),
    messages: [...],
  });
} catch (error) {
  console.error('API error:', error);
}
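
Note that fetch only rejects on network failures, not on HTTP error statuses, so direct-fetch callers should also check response.ok. A minimal sketch (the error body format is an assumption, so it is read as raw text):

const response = await fetch('https://api.runcaptain.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'X-Organization-ID': ORG_ID,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    model: 'captain-voyager-latest',
    messages: [{ role: 'user', content: "What's the policy?" }],
    captain: { context }
  })
});

if (!response.ok) {
  // Error body format is an assumption; log the raw text to be safe
  const errorText = await response.text();
  throw new Error(`Captain API error ${response.status}: ${errorText}`);
}

const result = await response.json();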

Framework Compatibility

Captain works with any framework (a minimal Next.js route handler sketch follows this list):

  • Next.js - Server-side and client-side rendering
  • React - UI integration with streaming
  • SvelteKit - Full-stack applications
  • Vue - Progressive web apps
  • Node.js - Backend services
  • Deno - Modern JavaScript runtime
  • Bun - Fast all-in-one toolkit
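
As one concrete example, a minimal Next.js App Router route handler could proxy questions to Captain and stream the answer to the browser. This is a sketch under the same endpoint and header assumptions as the examples above; the route path and request shape are illustrative:

// app/api/ask/route.ts
export async function POST(req: Request) {
  const { question, context } = await req.json();

  const upstream = await fetch('https://api.runcaptain.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.CAPTAIN_API_KEY}`,
      'X-Organization-ID': process.env.CAPTAIN_ORG_ID ?? '',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'captain-voyager-latest',
      messages: [{ role: 'user', content: question }],
      stream: true,
      captain: { context },
    }),
  });

  // Pass the upstream SSE stream straight through to the client
  return new Response(upstream.body, {
    headers: { 'Content-Type': 'text/event-stream' },
  });
}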
