Getting Started with Captain
Welcome to Captain! Choose the path that matches your use case:
Choose Your Starting Point
Option 1: SDK Integration
For developers using OpenAI SDKs or Vercel AI SDK
Captain is a drop-in replacement for OpenAI - just change the base URL and start using unlimited context:
- OpenAI SDK compatible - Use existing OpenAI code
- Multiple languages - Python, JavaScript, TypeScript support
- Unlimited context - Process millions of tokens in a single request
- No code changes - Drop-in replacement
- Real-time streaming - Familiar streaming interface
Start here if you:
- Currently use the OpenAI SDK (Python, JavaScript/TypeScript) or Vercel AI SDK
- Want the easiest migration path
- Prefer the familiar SDK interface
- Need unlimited context with minimal code changes
Option 2: HTTP API Integration
For developers making direct HTTP requests
Use Captain's REST API directly with any HTTP client (requests, fetch, curl, etc.):
- Simple HTTP API - Standard POST requests
- Unlimited context - Process any amount of text
- No database required - Instant processing without setup
- Language agnostic - Use any programming language
Start here if you:
- Prefer direct HTTP API calls over SDKs
- Use languages without official SDK support
- Want full control over requests
- Don't use the OpenAI SDK
Option 3: Data Lake Integration
For developers with AWS S3 or Google Cloud Storage
Index entire cloud storage buckets and query across thousands of files:
- Connect AWS S3 or GCS - Index entire buckets automatically
- Persistent databases - Query across thousands of files
- File tracking - Know which files contain what information
- Automatic updates - Re-index buckets as files change
Start here if you:
- Have documents in AWS S3 or Google Cloud Storage
- Need to query across multiple files
- Want a searchable knowledge base
- Require persistent indexed data
Get Started with Data Lake Integration →
Prerequisites
Get Your API Credentials
You'll need:
- API Key from Captain API Studio (format: cap_dev_..., cap_prod_...)
- Organization ID (UUID format, also available in the Studio)
Store your API key securely, such as in an environment variable:
macOS / Linux
export CAPTAIN_API_KEY="cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export CAPTAIN_ORG_ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
Windows (Command Prompt)
set CAPTAIN_API_KEY=cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
set CAPTAIN_ORG_ID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
In PowerShell, use $env:CAPTAIN_API_KEY = "cap_prod_..." instead.
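Once exported, application code can read the credentials from the environment instead of hard-coding them. A minimal Python sketch (the variable names match the exports above; the helper name is illustrative, not part of any SDK):

```python
import os

def load_captain_credentials():
    """Read Captain credentials from the environment, failing fast if unset."""
    api_key = os.environ.get("CAPTAIN_API_KEY")
    org_id = os.environ.get("CAPTAIN_ORG_ID")
    missing = [name for name, value in
               [("CAPTAIN_API_KEY", api_key), ("CAPTAIN_ORG_ID", org_id)]
               if not value]
    if missing:
        raise RuntimeError("Missing environment variables: " + ", ".join(missing))
    return api_key, org_id
```

Failing fast here surfaces misconfiguration at startup rather than as a 401 later.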
Getting Started with SDK
Captain provides OpenAI SDK compatibility. Choose your integration:
Python SDK
Perfect for developers already using the OpenAI Python SDK - Captain is a drop-in replacement.
Installation
Install the official OpenAI Python SDK:
pip install openai
Quick Start: Your First Request
Important: Provide context via extra_body and use system messages for instructions:
from openai import OpenAI
client = OpenAI(
base_url="https://api.runcaptain.com/v1",
api_key="cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
default_headers={
"X-Organization-ID": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
)
# Your context can be any size - no token limits!
context = """
Product Catalog:
- Widget A: $10, In stock: 50
- Widget B: $15, In stock: 30
- Widget C: $20, Out of stock
"""
response = client.chat.completions.create(
model="captain-voyager-latest",
messages=[
{"role": "system", "content": "You are a helpful product assistant."},
{"role": "user", "content": "Which widgets are in stock and under $20?"}
],
extra_body={
"captain": {
"context": context
}
}
)
print(response.choices[0].message.content)
System Prompts: Custom Roles or Captain's Default
Captain gives you full control over the AI's persona and behavior through system prompts:
Option 1: Define Your Own Role - Use system messages to make the AI assume specific roles or behaviors:
response = client.chat.completions.create(
model="captain-voyager-latest",
messages=[
{"role": "system", "content": "You are Luigi, a helpful assistant specialized in telling me the facts"},
{"role": "user", "content": "Who invented the light bulb?"}
],
extra_body={
"captain": {
"context": "Thomas Edison patented the light bulb in 1879..."
}
}
)
# AI responds as Luigi with your custom instructions
Option 2: Use Captain's Default - Omit the system message to use Captain's built-in persona:
response = client.chat.completions.create(
model="captain-voyager-latest",
messages=[
{"role": "user", "content": "Who invented the light bulb?"}
],
extra_body={
"captain": {
"context": "Thomas Edison patented the light bulb in 1879..."
}
}
)
# AI responds with Captain's default helpful, informative persona
Key Points:
- System messages = AI instructions (define role, tone, behavior)
- User messages = Your actual questions or requests
- extra_body.captain.context = Large documents/data to analyze
- System prompts are completely optional - Captain has intelligent defaults
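Putting these points together, a small helper can assemble the request arguments used throughout this section. This is a sketch mirroring the calls above (the helper name is ours, not part of the SDK):

```python
def build_chat_request(question: str, context: str, system_prompt: str = None,
                       model: str = "captain-voyager-latest") -> dict:
    """Assemble kwargs for client.chat.completions.create as shown above."""
    messages = []
    if system_prompt:  # optional: omit to use Captain's default persona
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": question})
    return {
        "model": model,
        "messages": messages,
        "extra_body": {"captain": {"context": context}},
    }
```

Usage: `client.chat.completions.create(**build_chat_request("Who invented the light bulb?", context))`.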
Streaming Responses
Get responses in real-time as they're generated:
response = client.chat.completions.create(
model="captain-voyager-latest",
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Write a short poem about coding"}
],
stream=True
)
for chunk in response:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="", flush=True)
Processing Large Text Documents
Captain handles unlimited context automatically - no size limits:
# Load any size document - Captain automatically handles large contexts
with open('large_document.txt', 'r') as f:
document_text = f.read()
response = client.chat.completions.create(
model="captain-voyager-latest",
messages=[
{"role": "system", "content": "You are a research analysis assistant."},
{"role": "user", "content": "Summarize the key findings"}
],
extra_body={
"captain": {
"context": document_text
}
}
)
print(response.choices[0].message.content)
Note: For processing PDFs, images, or other file formats, use Data Lake Integration which supports 30+ file types including PDF, DOCX, images, and more.
JavaScript/TypeScript SDK
Perfect for developers using Node.js, Deno, or Bun - Captain is a drop-in replacement for OpenAI.
Installation
Install the OpenAI SDK using npm or your preferred package manager:
npm install openai
Quick Start: Your First Request
Important: Provide context via extra_body.captain.context, just as in Python. Create a file called example.mjs with the following code:
import OpenAI from "openai";
const client = new OpenAI({
baseURL: "https://api.runcaptain.com/v1",
apiKey: "cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
defaultHeaders: {
"X-Organization-ID": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}
});
// Your context can be any size - no token limits!
const context = `
Product Catalog:
- Widget A: $10, In stock: 50
- Widget B: $15, In stock: 30
- Widget C: $20, Out of stock
`;
const response = await client.chat.completions.create({
model: "captain-voyager-latest",
messages: [
{ role: "system", content: "You are a helpful product assistant." },
{ role: "user", content: "Which widgets are in stock and under $20?" }
],
extra_body: {
captain: {
context: context
}
}
});
console.log(response.choices[0].message.content);
Execute the code with node example.mjs (or the equivalent command for Deno or Bun).
Streaming Responses
Get responses in real-time as they're generated:
const response = await client.chat.completions.create({
model: "captain-voyager-latest",
messages: [
{ role: "system", content: "You are a helpful assistant." },
{ role: "user", content: "Write a short poem about coding" }
],
stream: true
});
for await (const chunk of response) {
if (chunk.choices[0]?.delta?.content) {
process.stdout.write(chunk.choices[0].delta.content);
}
}
Processing Large Text Documents
Captain handles unlimited context automatically - no size limits:
import { readFileSync } from 'fs';
// Load any size document - Captain automatically handles large contexts
const documentText = readFileSync('large_document.txt', 'utf-8');
const response = await client.chat.completions.create({
model: "captain-voyager-latest",
messages: [
{ role: "system", content: "You are a research analysis assistant." },
{ role: "user", content: "Summarize the key findings" }
],
extra_body: {
captain: {
context: documentText
}
}
});
console.log(response.choices[0].message.content);
Note: For processing PDFs, images, or other file formats, use Data Lake Integration which supports 30+ file types including PDF, DOCX, images, and more.
Vercel AI SDK
Perfect for developers using Vercel's AI SDK - Captain works seamlessly with the OpenAI provider.
Installation
Install the Vercel AI SDK along with the OpenAI provider:
npm install ai @ai-sdk/openai
For tool calling, also install zod:
npm install zod
Quick Start: Your First Request
Important: Vercel AI SDK requires context to be passed via a custom header X-Captain-Context that must be base64-encoded (HTTP headers cannot contain newlines).
import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';
const context = `
Product Catalog:
- Widget A: $10, In stock: 50
- Widget B: $15, In stock: 30
- Widget C: $20, Out of stock
`;
// Base64 encode the context for header transmission (headers can't contain newlines)
const contextBase64 = Buffer.from(context).toString('base64');
const captain = createOpenAI({
apiKey: process.env.CAPTAIN_API_KEY,
baseURL: 'https://api.runcaptain.com/v1',
headers: {
'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
'X-Captain-Context': contextBase64, // Base64 encoded context
},
});
const { textStream } = await streamText({
model: captain('captain-voyager-latest'),
messages: [
{ role: 'user', content: 'Which widgets are in stock and under $20?' }
],
});
for await (const chunk of textStream) {
process.stdout.write(chunk);
}
Why base64 encoding? HTTP headers cannot contain newlines or special characters, so context must be base64-encoded before being sent in the X-Captain-Context header.
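For reference, the same encoding step in Python (equivalent to the Buffer.from(context).toString('base64') call in the examples above; the helper name is ours):

```python
import base64

def encode_context_header(context: str) -> str:
    """Base64-encode context for the X-Captain-Context header.

    The result contains no newlines or control characters, so it is
    safe to send as an HTTP header value.
    """
    return base64.b64encode(context.encode("utf-8")).decode("ascii")
```

The server side would reverse this with a base64 decode before using the context.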
Alternative: For production use, we recommend the OpenAI SDK with extra_body parameter - it's more reliable and doesn't require base64 encoding.
Non-Streaming Responses
For non-streaming responses, use generateText():
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
const context = `Product Catalog...`;
const contextBase64 = Buffer.from(context).toString('base64');
const captain = createOpenAI({
apiKey: process.env.CAPTAIN_API_KEY,
baseURL: 'https://api.runcaptain.com/v1',
headers: {
'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
'X-Captain-Context': contextBase64,
},
});
const { text } = await generateText({
model: captain('captain-voyager-latest'),
messages: [
{ role: 'user', content: 'Which widgets are in stock?' }
],
});
console.log(text);
Tool Calling
Define tools with Vercel AI SDK's zod schema format:
import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';
const tools = {
get_inventory: {
description: 'Get current inventory levels',
parameters: z.object({
product_id: z.string().describe('Product identifier'),
}),
execute: async ({ product_id }) => {
// Your API call here
return { product_id, stock: 45 };
},
},
};
const context = `Product Catalog: SKU-001, SKU-002`;
const contextBase64 = Buffer.from(context).toString('base64');
const captain = createOpenAI({
apiKey: process.env.CAPTAIN_API_KEY,
baseURL: 'https://api.runcaptain.com/v1',
headers: {
'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
'X-Captain-Context': contextBase64,
},
});
const { text } = await generateText({
model: captain('captain-voyager-latest'),
messages: [
{ role: 'user', content: 'What is inventory for SKU-001?' }
],
tools,
maxSteps: 5,
});
console.log(text);
Processing Large Contexts
⚠️ Important: HTTP headers have size limits (~4-8KB). For contexts larger than ~4KB after base64 encoding:
Option 1: Use the OpenAI JavaScript SDK with extra_body (recommended)
Option 2: Use the /v1/chat/completions/upload endpoint with FormData:
import { readFileSync } from 'fs';
const largeDocument = readFileSync('large-file.txt', 'utf-8');
// Prepare FormData
const formData = new FormData();
const blob = new Blob([largeDocument], { type: 'text/plain' });
formData.append('file', blob, 'context.txt');
formData.append('messages', JSON.stringify([
{ role: 'user', content: 'Summarize the key findings' }
]));
formData.append('model', 'captain-voyager-latest');
formData.append('stream', 'true');
// Upload large context
const response = await fetch('https://api.runcaptain.com/v1/chat/completions/upload', {
method: 'POST',
headers: {
'Authorization': `Bearer ${process.env.CAPTAIN_API_KEY}`,
'X-Organization-ID': process.env.CAPTAIN_ORG_ID,
},
body: formData
});
// Parse SSE stream
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const chunk = decoder.decode(value);
const lines = chunk.split('\n').filter(line => line.trim() !== '');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
if (data === '[DONE]') break;
try {
const parsed = JSON.parse(data);
const content = parsed.choices[0]?.delta?.content;
if (content) process.stdout.write(content);
} catch (e) { /* ignore non-JSON lines such as keep-alives */ }
}
}
}
For complete documentation, see Vercel AI SDK Guide.
Next Steps: SDK
- Full SDK Documentation - Complete reference for Python, JavaScript, and Vercel AI SDK
- Learn about all supported parameters
- Explore advanced streaming options
- Understand unlimited context processing
Getting Started with HTTP API
Perfect for developers making HTTP requests with any language or framework. The HTTP API provides direct access to Captain's unlimited context processing without requiring SDKs.
Authentication
All HTTP API requests require authentication via two headers:
- Authorization: Bearer <your API key>
- X-Organization-ID: <your organization ID>
Quick Start: Your First Request
Use the /v1/responses endpoint to process text and ask questions:
import requests
# Setup credentials
API_KEY = "cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
ORG_ID = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
BASE_URL = "https://api.runcaptain.com"
headers = {
"Authorization": f"Bearer {API_KEY}",
"X-Organization-ID": ORG_ID
}
# Make a request
response = requests.post(
f"{BASE_URL}/v1/responses",
headers=headers,
data={
'input': 'The capital of France is Paris. It is known for the Eiffel Tower.',
'query': 'What is the capital of France?'
}
)
result = response.json()
print(result['response'])
Key Parameters:
- input: Your context/document text (required)
- query: The question to ask about the context (required)
- stream: Set to 'true' for real-time streaming (optional)
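The fields above can be collected into the form payload before posting. A small illustrative helper (the function name is ours, not part of the API):

```python
def build_responses_payload(input_text: str, query: str,
                            stream: bool = False) -> dict:
    """Form fields for POST /v1/responses, per the parameter list above."""
    payload = {"input": input_text, "query": query}
    if stream:
        payload["stream"] = "true"  # the API expects the string 'true'
    return payload
```

Usage: `requests.post(f"{BASE_URL}/v1/responses", headers=headers, data=build_responses_payload(context, "What was the total revenue?"))`.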
HTTP API in Different Languages
Python (requests):
import requests
context = """
Sales Data Q1 2024:
- January: $50,000
- February: $65,000
- March: $72,000
"""
headers = {
"Authorization": f"Bearer {API_KEY}",
"X-Organization-ID": ORG_ID
}
response = requests.post(
f"{BASE_URL}/v1/responses",
headers=headers,
data={
'input': context,
'query': 'What was the total revenue for Q1?'
}
)
result = response.json()
print(result['response'])
JavaScript (fetch):
const API_KEY = 'cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx';
const ORG_ID = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx';
const BASE_URL = 'https://api.runcaptain.com';
const context = `
Sales Data Q1 2024:
- January: $50,000
- February: $65,000
- March: $72,000
`;
const response = await fetch(`${BASE_URL}/v1/responses`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'X-Organization-ID': ORG_ID,
'Content-Type': 'application/x-www-form-urlencoded'
},
body: new URLSearchParams({
'input': context,
'query': 'What was the total revenue for Q1?'
})
});
const result = await response.json();
console.log(result.response);
cURL:
curl -X POST https://api.runcaptain.com/v1/responses \
-H "Authorization: Bearer cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
-H "X-Organization-ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
-d "input=Sales Data Q1 2024: January: \$50,000, February: \$65,000, March: \$72,000" \
-d "query=What was the total revenue for Q1?"
Streaming Responses
Get responses in real-time as they're generated using Server-Sent Events (SSE):
Python:
response = requests.post(
f"{BASE_URL}/v1/responses",
headers=headers,
data={
'input': 'You are a helpful assistant.',
'query': 'Write a short poem about coding',
'stream': 'true'
},
stream=True # Important: Enable streaming in requests
)
for line in response.iter_lines():
if line:
line_text = line.decode('utf-8')
if line_text.startswith('data: '):
data = line_text[6:] # Remove 'data: ' prefix
print(data, end='', flush=True)
JavaScript:
const response = await fetch(`${BASE_URL}/v1/responses`, {
method: 'POST',
headers: {
'Authorization': `Bearer ${API_KEY}`,
'X-Organization-ID': ORG_ID,
'Content-Type': 'application/x-www-form-urlencoded'
},
body: new URLSearchParams({
'input': 'You are a helpful assistant.',
'query': 'Write a short poem about coding',
'stream': 'true'
})
});
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
const text = decoder.decode(value);
const lines = text.split('\n');
for (const line of lines) {
if (line.startsWith('data: ')) {
const data = line.slice(6);
process.stdout.write(data);
}
}
}
cURL:
curl -N -X POST https://api.runcaptain.com/v1/responses \
-H "Authorization: Bearer cap_prod_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
-H "X-Organization-ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
-d "input=You are a helpful assistant." \
-d "query=Write a short poem about coding" \
-d "stream=true"
Processing Large Text Documents
Captain handles unlimited context - send text files of any size:
# Read any size text document
with open('large_report.txt', 'r') as f:
document_text = f.read()
response = requests.post(
f"{BASE_URL}/v1/responses",
headers=headers,
data={
'input': document_text,
'query': 'Summarize the key findings'
}
)
result = response.json()
print(result['response'])
Note: For processing PDFs, images, or other file formats, use Data Lake Integration which supports 30+ file types.
HTTP Response Formats
Non-Streaming Response:
{
"status": "success",
"response": "The total revenue for Q1 2024 was $187,000.",
"request_id": "resp_1729876543_a1b2c3d4"
}
Streaming Response (SSE):
data: {"type": "chunk", "data": "The total"}
data: {"type": "chunk", "data": " revenue for"}
data: {"type": "chunk", "data": " Q1 2024 was $187,000."}
event: complete
data: {"status": "success", "request_id": "resp_1729876543_a1b2c3d4"}
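Given this shape, a client can pull just the chunk text out of each data: line. A Python sketch, using the field names from the example above:

```python
import json

def extract_chunk_text(sse_line: str):
    """Return chunk text from one SSE line, or None for anything else.

    Expects lines shaped like: data: {"type": "chunk", "data": "..."}
    """
    if not sse_line.startswith("data: "):
        return None  # e.g. 'event: complete' lines
    payload = sse_line[6:]
    try:
        event = json.loads(payload)
    except json.JSONDecodeError:
        return None  # non-JSON payloads such as [DONE]
    if isinstance(event, dict) and event.get("type") == "chunk":
        return event.get("data")
    return None  # completion events carry status, not chunk text
```

Feed each decoded line from response.iter_lines() through this and concatenate the non-None results to rebuild the full answer.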
Error Response:
Next Steps: HTTP API
- Full HTTP API Documentation - Complete reference including the /v1/responses endpoint
- Learn about all available parameters
- Explore error handling
- Understand rate limits
Getting Started with Data Lake Integration
Perfect for indexing cloud storage buckets and querying across multiple files.
Step 1: Create a Database
Databases are containers for your indexed files. Each database is scoped to your organization and environment.
import requests
headers = {
"Authorization": f"Bearer {API_KEY}",
"X-Organization-ID": ORG_ID
}
response = requests.post(
f"{BASE_URL}/v1/create-database",
headers=headers,
data={
'database_name': 'my_documents'
}
)
print(response.json())
# {"status": "success", "database_name": "my_documents", "database_id": "db_..."}
Step 2: Index Your Cloud Storage
Choose your cloud storage provider:
Option A: Index AWS S3 Bucket
from urllib.parse import quote
headers = {
"Authorization": f"Bearer {API_KEY}",
"X-Organization-ID": ORG_ID
}
response = requests.post(
f"{BASE_URL}/v1/index-s3",
headers=headers,
data={
'database_name': 'my_documents',
'bucket_name': 'my-s3-bucket',
'aws_access_key_id': 'AKIAIOSFODNN7EXAMPLE',
'aws_secret_access_key': quote('wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY', safe=''),
'bucket_region': 'us-east-1'
}
)
job_id = response.json()['job_id']
print(f"Indexing started! Job ID: {job_id}")
Need AWS credentials? See the Cloud Credentials Guide for step-by-step instructions.
Option B: Index Google Cloud Storage Bucket
import requests
headers = {
"Authorization": f"Bearer {API_KEY}",
"X-Organization-ID": ORG_ID
}
# Load your service account JSON
with open('service-account-key.json', 'r') as f:
service_account_json = f.read()
response = requests.post(
f"{BASE_URL}/v1/index-gcs",
headers=headers,
data={
'database_name': 'my_documents',
'bucket_name': 'my-gcs-bucket',
'service_account_json': service_account_json
}
)
job_id = response.json()['job_id']
print(f"Indexing started! Job ID: {job_id}")
Need GCS credentials? See the Cloud Credentials Guide for step-by-step instructions.
Step 3: Monitor Indexing Progress
import time
while True:
response = requests.get(
f"{BASE_URL}/v1/indexing-status/{job_id}",
headers={"Authorization": f"Bearer {API_KEY}"}
)
result = response.json()
if result.get('completed'):
print("✓ Indexing complete!")
break
print(f"Status: {result.get('status')} - {result.get('active_file_processing_workers')} workers active")
time.sleep(5)
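The loop above polls forever; in production you may want a timeout. One way to structure it, with the network call injected so it stays testable (names are ours; fetch_status would wrap the GET request shown above):

```python
import time

def wait_for_indexing(fetch_status, poll_seconds=5, timeout_seconds=600,
                      sleep=time.sleep):
    """Poll until the job reports completed, or raise on timeout.

    fetch_status: callable returning the /v1/indexing-status JSON dict.
    """
    waited = 0
    while True:
        status = fetch_status()
        if status.get("completed"):
            return status
        if waited >= timeout_seconds:
            raise TimeoutError("indexing did not finish in time")
        sleep(poll_seconds)
        waited += poll_seconds
```

Injecting sleep also lets tests run instantly instead of waiting out the poll interval.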
Step 4: Query Your Indexed Data
import uuid
response = requests.post(
f"{BASE_URL}/v1/query",
headers={
"Authorization": f"Bearer {API_KEY}",
"X-Organization-ID": ORG_ID,
"Idempotency-Key": str(uuid.uuid4())
},
data={
'query': 'What are the revenue projections for Q4?',
'database_name': 'my_documents',
'include_files': 'true' # Returns which files were used
}
)
result = response.json()
print("Answer:", result['response'])
print("\nRelevant Files:")
for file in result.get('relevant_files', []):
print(f" - {file['file_name']} (relevancy: {file['relevancy_score']})")
Step 5: Query with Streaming (Optional)
Get real-time responses as they're generated:
response = requests.post(
f"{BASE_URL}/v1/query",
headers={
"Authorization": f"Bearer {API_KEY}",
"X-Organization-ID": ORG_ID
},
data={
'query': 'Summarize all security incidents mentioned',
'database_name': 'my_documents',
'stream': 'true'
},
stream=True # Important: enable streaming
)
# Process streamed response
for line in response.iter_lines():
if line:
line_text = line.decode('utf-8')
if line_text.startswith('data: '):
print(line_text[6:], end='', flush=True)
Next Steps: Data Lake Integration
- Full Data Lake Integration Documentation - Complete reference
- Learn about database management
- Explore file-level operations
- Understand re-indexing behavior
- Monitor indexing jobs
Important Concepts
Environment Scoping
API keys are scoped to environments:
- Development (cap_dev_*) - For testing and development
- Staging (cap_stage_*) - For pre-production testing
- Production (cap_prod_*) - For production use
Databases created with a development key can only be accessed with development keys from the same organization.
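Because the prefixes map directly to environments, a quick client-side sanity check is possible (the helper name is ours):

```python
def environment_from_key(api_key: str):
    """Map the documented key prefixes to their environments."""
    prefixes = {
        "cap_dev_": "development",
        "cap_stage_": "staging",
        "cap_prod_": "production",
    }
    for prefix, environment in prefixes.items():
        if api_key.startswith(prefix):
            return environment
    return None  # not a recognized Captain key format
```

This can catch, say, a production key accidentally loaded in a development config before any request is made.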
Supported File Types
Captain supports 30+ file types including:
Documents: PDF, DOCX, TXT, MD, RTF, ODT
Spreadsheets: XLSX, XLS, CSV
Presentations: PPTX, PPT
Images: JPG, PNG (with OCR)
Code: PY, JS, TS, HTML, CSS, PHP, JAVA
Data: JSON, XML
See the complete file type list in the Data Lake Integration docs.
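If you want to pre-filter files before indexing, the list above can drive a rough client-side check. Note the set below covers only the types named here, not the full 30+ supported types:

```python
SUPPORTED_EXTENSIONS = {
    "pdf", "docx", "txt", "md", "rtf", "odt",        # documents
    "xlsx", "xls", "csv",                            # spreadsheets
    "pptx", "ppt",                                   # presentations
    "jpg", "png",                                    # images (with OCR)
    "py", "js", "ts", "html", "css", "php", "java",  # code
    "json", "xml",                                   # data
}

def looks_supported(filename: str) -> bool:
    """Rough client-side check against the file types listed above."""
    if "." not in filename:
        return False
    return filename.rsplit(".", 1)[1].lower() in SUPPORTED_EXTENSIONS
```

Treat the Data Lake Integration docs as authoritative; this is only a pre-flight filter.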
Rate Limits
| Tier | Requests/Min (Captain API) | Requests/Min (Query) | Indexing Jobs/Hour |
|---|---|---|---|
| Standard | 10 | 10 | 10 |
| Premium | 60 | 60 | Unlimited |
Contact support@runcaptain.com to upgrade.
Comparison: SDK vs HTTP API vs Data Lake
| Feature | SDK (Python/JS) | HTTP API | Data Lake Integration |
|---|---|---|---|
| Setup Required | None | None | Create database + index files |
| Interface | OpenAI SDK | HTTP API | HTTP API |
| Languages | Python, JavaScript/TypeScript | Any language | Any language |
| Input Method | Messages array | Query + Input params | Index cloud storage |
| Persistence | No | No | Yes (persistent database) |
| Query Across Files | Single request | Single request | Thousands of files |
| Use Case | Drop-in OpenAI replacement | Custom integrations | Knowledge base |
| OpenAI Compatible | ✓ Compatible | ✗ Different interface | ✗ Different interface |
| Streaming | ✓ Real-time | ✓ Real-time | ✓ Real-time |
| Max Input Size | Unlimited | Unlimited | Unlimited (per file) |
| File Tracking | No | No | Yes (which files contain what) |
| Re-query Same Data | Re-send required | Re-send required | Instant (already indexed) |
Using the Demo Client
We provide a comprehensive demo client that showcases all Captain features:
# Download the demo client
wget https://raw.githubusercontent.com/runcaptain/demo/main/captain_demo.py
# Run the interactive demo
python captain_demo.py
The demo client includes examples for:
- Creating databases
- Indexing S3 and GCS buckets
- Querying indexed data
- Processing large context with the Captain API
- Streaming responses
Next Steps
For SDK Users:
- Read the Full SDK Documentation
- Explore streaming and advanced features
- Learn about context handling options
- Migrate your existing OpenAI code (Python or JavaScript)
For HTTP API Users:
- Read the Full HTTP API Documentation
- Explore all available endpoints
- Learn about error handling and rate limits
- Implement in your preferred language
For Data Lake Users:
- Read the Data Lake Integration Documentation
- Get your Cloud Storage Credentials
- Index your first bucket
- Start querying your data
Getting Help
Need assistance? We're here to help!
- Email: support@runcaptain.com
- Phone: +1 (260) CAP-TAIN
- Documentation: docs.runcaptain.com
- Status Page: status.runcaptain.com