Getting Started with Captain

Welcome to Captain! Choose the path that matches your use case:


Choose Your Starting Point

Option 1: SDK Integration

For developers using OpenAI SDKs or Vercel AI SDK

Captain is a drop-in replacement for OpenAI - just change the base URL and start using unlimited context:

  • OpenAI SDK compatible - Use existing OpenAI code
  • Multiple languages - Python, JavaScript, TypeScript support
  • Unlimited context - Process millions of tokens in a single request
  • Minimal code changes - Drop-in replacement; just swap the base URL
  • Real-time streaming - Familiar streaming interface

Start here if you:

  • Currently use the OpenAI SDK (Python, JavaScript/TypeScript) or Vercel AI SDK
  • Want the easiest migration path
  • Prefer the familiar SDK interface
  • Need unlimited context with minimal code changes

Get Started with SDK →


Option 2: HTTP API Integration

For developers making direct HTTP requests

Use Captain’s REST API directly with any HTTP client (requests, fetch, curl, etc.):

  • Simple HTTP API - Standard POST requests
  • Unlimited context - Process any amount of text
  • No database required - Instant processing without setup
  • Language agnostic - Use any programming language

Start here if you:

  • Prefer direct HTTP API calls over SDKs
  • Use languages without official SDK support
  • Want full control over requests
  • Don’t use the OpenAI SDK

Get Started with HTTP API →


Option 3: Data Lake Integration

For developers with AWS S3 or Google Cloud Storage

Index entire cloud storage buckets and query across thousands of files:

  • Connect AWS S3 or GCS - Index entire buckets automatically
  • Persistent databases - Query across thousands of files
  • File tracking - Know which files contain what information
  • Automatic updates - Re-index buckets as files change

Start here if you:

  • Have documents in AWS S3 or Google Cloud Storage
  • Need to query across multiple files
  • Want a searchable knowledge base
  • Require persistent indexed data

Get Started with Data Lake Integration →


Prerequisites

Get Your API Credentials

You’ll need:

  • API Key from Captain API Studio (format: cap_dev_..., cap_stage_..., cap_prod_...)
  • Organization ID (UUID format, also available in the Studio)

Store your API key securely, such as in an environment variable:

macOS / Linux

$export CAPTAIN_API_KEY="cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
$export CAPTAIN_ORG_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

Windows

$set CAPTAIN_API_KEY=cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
$set CAPTAIN_ORG_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
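In application code, read these variables instead of hardcoding credentials - for example, in Python:

import os

API_KEY = os.environ["CAPTAIN_API_KEY"]  # e.g. cap_prod_...
ORG_ID = os.environ["CAPTAIN_ORG_ID"]    # UUID from Captain API Studio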

Getting Started with SDK

Captain provides OpenAI SDK compatibility. Choose your integration:

Python SDK

Perfect for developers already using the OpenAI Python SDK - Captain is a drop-in replacement.

Installation

$pip install openai

Quick Start: Your First Request

Important: Provide context via extra_body and use system messages for instructions:

from openai import OpenAI

client = OpenAI(
    base_url="https://api.runcaptain.com/v1",
    api_key="cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    default_headers={
        "X-Organization-ID": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    }
)

# Your context can be any size - no token limits!
context = """
Product Catalog:
- Widget A: $10, In stock: 50
- Widget B: $15, In stock: 30
- Widget C: $20, Out of stock
"""

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "system", "content": "You are a helpful product assistant."},
        {"role": "user", "content": "Which widgets are in stock and under $20?"}
    ],
    extra_body={
        "captain": {
            "context": context
        }
    }
)

print(response.choices[0].message.content)

System Prompts: Custom Roles or Captain’s Default

Captain gives you full control over the AI’s persona and behavior through system prompts:

Option 1: Define Your Own Role - Use system messages to make the AI assume specific roles or behaviors:

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "system", "content": "You are Luigi, a helpful assistant specialized in telling me the facts"},
        {"role": "user", "content": "Who invented the light bulb?"}
    ],
    extra_body={
        "captain": {
            "context": "Thomas Edison patented the light bulb in 1879..."
        }
    }
)
# AI responds as Luigi with your custom instructions

Option 2: Use Captain’s Default - Omit the system message to use Captain’s built-in persona:

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "user", "content": "Who invented the light bulb?"}
    ],
    extra_body={
        "captain": {
            "context": "Thomas Edison patented the light bulb in 1879..."
        }
    }
)
# AI responds with Captain's default helpful, informative persona

Key Points:

  • System messages = AI instructions (define role, tone, behavior)
  • User messages = Your actual questions or requests
  • extra_body.captain.context = Large documents/data to analyze
  • System prompts are completely optional - Captain has intelligent defaults

Streaming Responses

Get responses in real-time as they’re generated:

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a short poem about coding"}
    ],
    stream=True
)

for chunk in response:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Processing Large Text Documents

Captain handles unlimited context automatically - no size limits:

# Load any size document - Captain automatically handles large contexts
with open('large_document.txt', 'r') as f:
    document_text = f.read()

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "system", "content": "You are a research analysis assistant."},
        {"role": "user", "content": "Summarize the key findings"}
    ],
    extra_body={
        "captain": {
            "context": document_text
        }
    }
)

print(response.choices[0].message.content)

Note: For processing PDFs, images, or other file formats, use Data Lake Integration which supports 30+ file types including PDF, DOCX, images, and more.


JavaScript/TypeScript SDK

Perfect for developers using Node.js, Deno, or Bun - Captain is a drop-in replacement for OpenAI.

Installation

Install the OpenAI SDK using npm or your preferred package manager:

$npm install openai

Quick Start: Your First Request

Important: Provide context via extra_body.captain.context, just as in the Python example. Create a file called example.mjs with the following code:

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.runcaptain.com/v1",
  apiKey: "cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  defaultHeaders: {
    "X-Organization-ID": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
  }
});

// Your context can be any size - no token limits!
const context = `
Product Catalog:
- Widget A: $10, In stock: 50
- Widget B: $15, In stock: 30
- Widget C: $20, Out of stock
`;

const response = await client.chat.completions.create({
  model: "captain-voyager-latest",
  messages: [
    { role: "system", content: "You are a helpful product assistant." },
    { role: "user", content: "Which widgets are in stock and under $20?" }
  ],
  extra_body: {
    captain: {
      context: context
    }
  }
});

console.log(response.choices[0].message.content);

Execute the code with node example.mjs (or the equivalent command for Deno or Bun).

Streaming Responses

Get responses in real-time as they’re generated:

const response = await client.chat.completions.create({
  model: "captain-voyager-latest",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Write a short poem about coding" }
  ],
  stream: true
});

for await (const chunk of response) {
  if (chunk.choices[0]?.delta?.content) {
    process.stdout.write(chunk.choices[0].delta.content);
  }
}

Processing Large Text Documents

Captain handles unlimited context automatically - no size limits:

import { readFileSync } from 'fs';

// Load any size document - Captain automatically handles large contexts
const documentText = readFileSync('large_document.txt', 'utf-8');

const response = await client.chat.completions.create({
  model: "captain-voyager-latest",
  messages: [
    { role: "system", content: "You are a research analysis assistant." },
    { role: "user", content: "Summarize the key findings" }
  ],
  extra_body: {
    captain: {
      context: documentText
    }
  }
});

console.log(response.choices[0].message.content);

Note: For processing PDFs, images, or other file formats, use Data Lake Integration which supports 30+ file types including PDF, DOCX, images, and more.


Vercel AI SDK

Perfect for developers using Vercel’s AI SDK - Captain works seamlessly with the OpenAI provider.

Installation

$npm install @ai-sdk/openai ai

For tool calling, also install zod:

$npm install zod

Quick Start: Your First Request

Important: Vercel AI SDK requires context to be passed via a custom header X-Captain-Context that must be base64-encoded (HTTP headers cannot contain newlines).

import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';

const context = `
Product Catalog:
- Widget A: $10, In stock: 50
- Widget B: $15, In stock: 30
- Widget C: $20, Out of stock
`;

// Base64 encode the context for header transmission (headers can't contain newlines)
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64, // Base64 encoded context
  },
});

const { textStream } = await streamText({
  model: captain('captain-voyager-latest'),
  messages: [
    { role: 'user', content: 'Which widgets are in stock and under $20?' }
  ],
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}

Why base64 encoding? HTTP headers cannot contain newlines or special characters, so context must be base64-encoded before being sent in the X-Captain-Context header.

Alternative: For production use, we recommend the OpenAI SDK with extra_body parameter - it’s more reliable and doesn’t require base64 encoding.

Non-Streaming Responses

For non-streaming responses, use generateText():

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';

const context = `Product Catalog...`;
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain('captain-voyager-latest'),
  messages: [
    { role: 'user', content: 'Which widgets are in stock?' }
  ],
});

console.log(text);

Tool Calling

Define tools with Vercel AI SDK’s zod schema format:

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';

const tools = {
  get_inventory: {
    description: 'Get current inventory levels',
    parameters: z.object({
      product_id: z.string().describe('Product identifier'),
    }),
    execute: async ({ product_id }) => {
      // Your API call here
      return { product_id, stock: 45 };
    },
  },
};

const context = `Product Catalog: SKU-001, SKU-002`;
const contextBase64 = Buffer.from(context).toString('base64');

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
    'X-Captain-Context': contextBase64,
  },
});

const { text } = await generateText({
  model: captain('captain-voyager-latest'),
  messages: [
    { role: 'user', content: 'What is inventory for SKU-001?' }
  ],
  tools,
  maxSteps: 5,
});

console.log(text);

Processing Large Contexts

⚠️ Important: HTTP headers have size limits (~4-8KB). For contexts larger than ~4KB after base64 encoding:

Option 1: Use the OpenAI JavaScript SDK with extra_body (recommended)

Option 2: Use the /v1/chat/completions/upload endpoint with FormData:

import { readFileSync } from 'fs';

const largeDocument = readFileSync('large-file.txt', 'utf-8');

// Prepare FormData
const formData = new FormData();
const blob = new Blob([largeDocument], { type: 'text/plain' });
formData.append('file', blob, 'context.txt');
formData.append('messages', JSON.stringify([
  { role: 'user', content: 'Summarize the key findings' }
]));
formData.append('model', 'captain-voyager-latest');
formData.append('stream', 'true');

// Upload large context
const response = await fetch('https://api.runcaptain.com/v1/chat/completions/upload', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${process.env.CAPTAIN_API_KEY}`,
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
  },
  body: formData
});

// Parse SSE stream
const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const chunk = decoder.decode(value);
  const lines = chunk.split('\n').filter(line => line.trim() !== '');

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      if (data === '[DONE]') break;
      try {
        const parsed = JSON.parse(data);
        const content = parsed.choices[0]?.delta?.content;
        if (content) process.stdout.write(content);
      } catch (e) {}
    }
  }
}

For complete documentation, see Vercel AI SDK Guide.


Next Steps: SDK

  • Full SDK Documentation - Complete reference for Python, JavaScript, and Vercel AI SDK
  • Learn about all supported parameters
  • Explore advanced streaming options
  • Understand unlimited context processing

Getting Started with HTTP API

Perfect for developers making HTTP requests with any language or framework. The HTTP API provides direct access to Captain’s unlimited context processing without requiring SDKs.

Authentication

All HTTP API requests require authentication via headers:

Authorization: Bearer YOUR_API_KEY
X-Organization-ID: YOUR_ORG_UUID

Quick Start: Your First Request

Use the /v1/responses endpoint to process text and ask questions:

import requests

# Setup credentials
API_KEY = "cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
ORG_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
BASE_URL = "https://api.runcaptain.com"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Organization-ID": ORG_ID
}

# Make a request
response = requests.post(
    f"{BASE_URL}/v1/responses",
    headers=headers,
    data={
        'input': 'The capital of France is Paris. It is known for the Eiffel Tower.',
        'query': 'What is the capital of France?'
    }
)

result = response.json()
print(result['response'])

Key Parameters:

  • input: Your context/document text (required)
  • query: The question to ask about the context (required)
  • stream: Set to 'true' for real-time streaming (optional)

HTTP API in Different Languages

Python (requests):

import requests

context = """
Sales Data Q1 2024:
- January: $50,000
- February: $65,000
- March: $72,000
"""

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Organization-ID": ORG_ID
}

response = requests.post(
    f"{BASE_URL}/v1/responses",
    headers=headers,
    data={
        'input': context,
        'query': 'What was the total revenue for Q1?'
    }
)

result = response.json()
print(result['response'])

JavaScript (fetch):

const API_KEY = 'cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';
const ORG_ID = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';
const BASE_URL = 'https://api.runcaptain.com';

const context = `
Sales Data Q1 2024:
- January: $50,000
- February: $65,000
- March: $72,000
`;

const response = await fetch(`${BASE_URL}/v1/responses`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'X-Organization-ID': ORG_ID,
    'Content-Type': 'application/x-www-form-urlencoded'
  },
  body: new URLSearchParams({
    'input': context,
    'query': 'What was the total revenue for Q1?'
  })
});

const result = await response.json();
console.log(result.response);

cURL:

$curl -X POST https://api.runcaptain.com/v1/responses \
> -H "Authorization: Bearer cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
> -H "X-Organization-ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
> -d "input=Sales Data Q1 2024: January: \$50,000, February: \$65,000, March: \$72,000" \
> -d "query=What was the total revenue for Q1?"

Streaming Responses

Get responses in real-time as they’re generated using Server-Sent Events (SSE):

Python:

response = requests.post(
    f"{BASE_URL}/v1/responses",
    headers=headers,
    data={
        'input': 'You are a helpful assistant.',
        'query': 'Write a short poem about coding',
        'stream': 'true'
    },
    stream=True  # Important: Enable streaming in requests
)

for line in response.iter_lines():
    if line:
        line_text = line.decode('utf-8')
        if line_text.startswith('data: '):
            data = line_text[6:]  # Remove 'data: ' prefix
            print(data, end='', flush=True)

JavaScript:

const response = await fetch(`${BASE_URL}/v1/responses`, {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'X-Organization-ID': ORG_ID,
    'Content-Type': 'application/x-www-form-urlencoded'
  },
  body: new URLSearchParams({
    'input': 'You are a helpful assistant.',
    'query': 'Write a short poem about coding',
    'stream': 'true'
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();

while (true) {
  const { done, value } = await reader.read();
  if (done) break;

  const text = decoder.decode(value);
  const lines = text.split('\n');

  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      process.stdout.write(data);
    }
  }
}

cURL:

$curl -N -X POST https://api.runcaptain.com/v1/responses \
> -H "Authorization: Bearer cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
> -H "X-Organization-ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
> -d "input=You are a helpful assistant." \
> -d "query=Write a short poem about coding" \
> -d "stream=true"

Processing Large Text Documents

Captain handles unlimited context - send text files of any size:

# Read any size text document
with open('large_report.txt', 'r') as f:
    document_text = f.read()

response = requests.post(
    f"{BASE_URL}/v1/responses",
    headers=headers,
    data={
        'input': document_text,
        'query': 'Summarize the key findings'
    }
)

result = response.json()
print(result['response'])

Note: For processing PDFs, images, or other file formats, use Data Lake Integration which supports 30+ file types.

HTTP Response Formats

Non-Streaming Response:

{
  "status": "success",
  "response": "The total revenue for Q1 2024 was $187,000.",
  "request_id": "resp_1729876543_a1b2c3d4"
}

Streaming Response (SSE):

data: {"type": "chunk", "data": "The total"}
data: {"type": "chunk", "data": " revenue for"}
data: {"type": "chunk", "data": " Q1 2024 was $187,000."}
event: complete
data: {"status": "success", "request_id": "resp_1729876543_a1b2c3d4"}

Error Response:

{
  "status": "error",
  "message": "Input text is required",
  "error_code": "MISSING_INPUT"
}
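When consuming these responses in code, branch on the status field; for streams, note that each data: payload is itself a JSON event per the format above. A minimal Python sketch, assuming the shapes shown here and the credentials from the quick start (the handle_sse_line helper is illustrative, not part of the API):

import json

result = response.json()
if result.get('status') == 'success':
    print(result['response'])
else:
    # Error responses carry a human-readable message and an error_code
    print(f"Error {result.get('error_code')}: {result.get('message')}")

# For streaming, parse each 'data:' line as a JSON event
def handle_sse_line(line_text):
    if not line_text.startswith('data: '):
        return  # e.g. 'event: complete' lines
    try:
        event = json.loads(line_text[6:])
    except json.JSONDecodeError:
        return
    if event.get('type') == 'chunk':
        print(event['data'], end='', flush=True)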

Next Steps: HTTP API

  • Full HTTP API Documentation - Complete reference including /v1/responses endpoint
  • Learn about all available parameters
  • Explore error handling
  • Understand rate limits

Getting Started with Data Lake Integration

Perfect for indexing cloud storage buckets and querying across multiple files.

Step 1: Create a Database

Databases are containers for your indexed files. Each database is scoped to your organization and environment.

import requests

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Organization-ID": ORG_ID
}

response = requests.post(
    f"{BASE_URL}/v1/create-database",
    headers=headers,
    data={
        'database_name': 'my_documents'
    }
)

print(response.json())
# {"status": "success", "database_name": "my_documents", "database_id": "db_..."}

Step 2: Index Your Cloud Storage

Choose your cloud storage provider:

Option A: Index AWS S3 Bucket

from urllib.parse import quote

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Organization-ID": ORG_ID
}

response = requests.post(
    f"{BASE_URL}/v1/index-s3",
    headers=headers,
    data={
        'database_name': 'my_documents',
        'bucket_name': 'my-s3-bucket',
        'aws_access_key_id': 'AKIAIOSFODNN7EXAMPLE',
        # URL-encode the secret key, which may contain '/' characters
        'aws_secret_access_key': quote('wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY', safe=''),
        'bucket_region': 'us-east-1'
    }
)

job_id = response.json()['job_id']
print(f"Indexing started! Job ID: {job_id}")

Need AWS credentials? See the Cloud Credentials Guide for step-by-step instructions.

Option B: Index Google Cloud Storage Bucket

import requests

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "X-Organization-ID": ORG_ID
}

# Load your service account JSON
with open('service-account-key.json', 'r') as f:
    service_account_json = f.read()

response = requests.post(
    f"{BASE_URL}/v1/index-gcs",
    headers=headers,
    data={
        'database_name': 'my_documents',
        'bucket_name': 'my-gcs-bucket',
        'service_account_json': service_account_json
    }
)

job_id = response.json()['job_id']
print(f"Indexing started! Job ID: {job_id}")

Need GCS credentials? See the Cloud Credentials Guide for step-by-step instructions.

Step 3: Monitor Indexing Progress

import time

while True:
    response = requests.get(
        f"{BASE_URL}/v1/indexing-status/{job_id}",
        headers={"Authorization": f"Bearer {API_KEY}"}
    )

    result = response.json()
    if result.get('completed'):
        print("✓ Indexing complete!")
        break

    print(f"Status: {result.get('status')} - {result.get('active_file_processing_workers')} workers active")
    time.sleep(5)

Step 4: Query Your Indexed Data

Query your database with AI-generated answers:

import uuid

response = requests.post(
    f"{BASE_URL}/v1/query",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-Organization-ID": ORG_ID,
        "Idempotency-Key": str(uuid.uuid4())
    },
    data={
        'query': 'What are the revenue projections for Q4?',
        'database_name': 'my_documents',
        'include_files': 'true',  # Returns which files were used
        'inference': 'true'  # Get AI-generated answer (default)
    }
)

result = response.json()
print("Answer:", result['response'])
print("\nRelevant Files:")
for file in result.get('relevant_files', []):
    print(f" - {file['file_name']} (relevancy: {file['relevancy_score']})")

Or get raw search results (for custom RAG pipelines):

response = requests.post(
    f"{BASE_URL}/v1/query",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-Organization-ID": ORG_ID
    },
    data={
        'query': 'What are the revenue projections for Q4?',
        'database_name': 'my_documents',
        'inference': 'false',  # Return raw chunks instead of AI answer
        'topK': '20'  # Get top 20 results (default: 80)
    }
)

result = response.json()
print(f"Found {result['total_results']} relevant chunks:")
for chunk in result['search_results']:
    print(f"\n{chunk['fileName']}:")
    print(chunk['content'][:200])  # First 200 chars
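With inference disabled, the chunks can feed a custom RAG pipeline - for example, assembling them into a context string and sending it to /v1/responses. A minimal sketch reusing the result from above:

# Join the top chunks into a single context string
top_chunks = result['search_results'][:5]
context = "\n\n".join(f"[{c['fileName']}]\n{c['content']}" for c in top_chunks)

# Ask Captain to answer using only the retrieved chunks
answer = requests.post(
    f"{BASE_URL}/v1/responses",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-Organization-ID": ORG_ID
    },
    data={
        'input': context,
        'query': 'What are the revenue projections for Q4?'
    }
)
print(answer.json()['response'])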

Step 5: Query with Streaming (Optional)

Get real-time responses as they’re generated:

response = requests.post(
    f"{BASE_URL}/v1/query",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "X-Organization-ID": ORG_ID
    },
    data={
        'query': 'Summarize all security incidents mentioned',
        'database_name': 'my_documents',
        'stream': 'true'
    },
    stream=True  # Important: enable streaming
)

# Process streamed response
for line in response.iter_lines():
    if line:
        line_text = line.decode('utf-8')
        if line_text.startswith('data: '):
            print(line_text[6:], end='', flush=True)

Next Steps: Data Lake Integration

  • Read the Data Lake Integration Documentation for the complete indexing and querying reference
  • Get your cloud storage credentials with the Cloud Credentials Guide
  • Review the supported file types under Important Concepts below


Important Concepts

Environment Scoping

API keys are scoped to environments:

  • Development (cap_dev_*) - For testing and development
  • Staging (cap_stage_*) - For pre-production testing
  • Production (cap_prod_*) - For production use

Databases created with a development key can only be accessed with development keys from the same organization.
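
Because the environment is encoded in the key prefix, a quick sanity check at startup can catch cross-environment mistakes early. An illustrative Python sketch (the helper below is not part of the API):

import os

API_KEY = os.environ["CAPTAIN_API_KEY"]

# Map the documented key prefixes to environment names
PREFIXES = {"cap_dev_": "development", "cap_stage_": "staging", "cap_prod_": "production"}
environment = next(
    (name for prefix, name in PREFIXES.items() if API_KEY.startswith(prefix)),
    None,
)
if environment is None:
    raise ValueError("Unrecognized Captain API key prefix")
print(f"Using a {environment} key")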

Supported File Types

Captain supports 30+ file types including:

Documents: PDF, DOCX, TXT, MD, RTF, ODT
Spreadsheets: XLSX, XLS, CSV
Presentations: PPTX, PPT
Images: JPG, PNG (with OCR)
Code: PY, JS, TS, HTML, CSS, PHP, JAVA
Data: JSON, XML

See the complete file type list in the Data Lake Integration docs.

Rate Limits

Tier     | Requests/Min (Captain API) | Requests/Min (Query) | Indexing Jobs/Hour
Standard | 10                         | 10                   | 10
Premium  | 60                         | 60                   | Unlimited

Contact support@runcaptain.com to upgrade.
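
If you exceed your tier's limit, a simple retry with backoff keeps clients well behaved. A minimal Python sketch, assuming throttled requests come back with HTTP 429 (the standard throttling status; check the HTTP API docs for Captain's exact behavior):

import time
import requests

def post_with_retry(url, headers, data, max_retries=5):
    """POST with exponential backoff on assumed 429 throttling responses."""
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, data=data)
        if response.status_code != 429:  # Assumption: 429 signals rate limiting
            return response
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    return response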


Comparison: SDK vs HTTP API vs Data Lake

Feature              | SDK (Python/JS)               | HTTP API              | Data Lake Integration
Setup Required       | None                          | None                  | Create database + index files
Interface            | OpenAI SDK                    | HTTP API              | HTTP API
Languages            | Python, JavaScript/TypeScript | Any language          | Any language
Input Method         | Messages array                | Query + input params  | Index cloud storage
Persistence          | No                            | No                    | Yes (persistent database)
Query Across Files   | Single request                | Single request        | Thousands of files
Use Case             | Drop-in OpenAI replacement    | Custom integrations   | Knowledge base
OpenAI Compatible    | ✓ Compatible                  | ✗ Different interface | ✗ Different interface
Streaming            | ✓ Real-time                   | ✓ Real-time           | ✓ Real-time
Max Input Size       | Unlimited                     | Unlimited             | Unlimited (per file)
File Tracking        | No                            | No                    | Yes (which files contain what)
Re-query Same Data   | Re-send required              | Re-send required      | Instant (already indexed)

Using the Demo Client

We provide a comprehensive demo client that showcases all Captain features:

# Download the demo client
$wget https://raw.githubusercontent.com/runcaptain/demo/main/captain_demo.py

# Run the interactive demo
$python captain_demo.py

The demo client includes examples for:

  • Creating databases
  • Indexing S3 and GCS buckets
  • Querying indexed data
  • Processing large context with Captain API
  • Streaming responses

Next Steps

For SDK Users:

  1. Read the Full SDK Documentation
  2. Explore streaming and advanced features
  3. Learn about context handling options
  4. Migrate your existing OpenAI code (Python or JavaScript)

For HTTP API Users:

  1. Read the Full HTTP API Documentation
  2. Explore all available endpoints
  3. Learn about error handling and rate limits
  4. Implement in your preferred language

For Data Lake Users:

  1. Read the Data Lake Integration Documentation
  2. Get your Cloud Storage Credentials
  3. Index your first bucket
  4. Start querying your data


Getting Help

Need assistance? Contact support@runcaptain.com - we’re here to help!