Tool Calling (Function Calling)

OpenAI-Compatible Tool Calling

Captain supports OpenAI-compatible function calling, allowing your AI to use tools. Tools are executed on the client side, giving you full control over security and data access.

Overview

Tool calling (also known as function calling) allows the AI model to request the execution of specific functions when it needs to perform actions like:

  • 🧮 Calculations: Perform complex math operations
  • 🔍 Data Retrieval: Look up information from databases or APIs
  • 📊 Data Processing: Transform or analyze data
  • 🌐 External APIs: Call third-party services
  • 📁 File Operations: Read or write files (client-side)

How It Works

  1. Define tools with their parameters and descriptions
  2. Send request to Captain API with tools included
  3. Model decides if it needs to use a tool
  4. API returns tool call request (not executed yet)
  5. Client executes the tool locally
  6. Continue conversation with tool results (optional)

Client-Side Execution

Tools are never executed on Captain's servers. The API only returns tool call requests; you execute them in your own environment with full control.

Quick Start

Python Example

from openai import OpenAI
import json

client = OpenAI(
    base_url="https://api.runcaptain.com/v1",
    api_key="your_api_key",
    default_headers={"X-Organization-ID": "your_org_id"}
)

# Define a tool
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g. San Francisco"
                },
                "unit": {
                    "type": "string",
                    "enum": ["celsius", "fahrenheit"]
                }
            },
            "required": ["location"]
        },
        "strict": True
    }
}]

# Make request with tools
response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "user", "content": "What's the weather in Tokyo?"}
    ],
    tools=tools
)

# Check if model wants to use a tool
if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]

    # Execute the tool client-side (get_weather is your own implementation; see the stub below)
    args = json.loads(tool_call.function.arguments)
    result = get_weather(args["location"], args.get("unit", "celsius"))

    print(f"Tool used: {tool_call.function.name}")
    print(f"Result: {result}")

JavaScript/TypeScript with Vercel AI SDK

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID
  }
});

const result = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    { role: 'user', content: "What's the weather in Tokyo?" }
  ],
  tools: {
    getWeather: {
      description: 'Get current weather for a location',
      parameters: z.object({
        location: z.string().describe('City name'),
        unit: z.enum(['celsius', 'fahrenheit']).optional()
      }),
      execute: async ({ location, unit }) => {
        // Execute tool client-side
        const weather = await fetchWeather(location, unit);
        return weather;
      }
    }
  },
  maxSteps: 5  // Allow multiple tool calls
});

console.log(result.text);

Complete Examples

Python: Calculator Tool

from openai import OpenAI
import json

client = OpenAI(
    base_url="https://api.runcaptain.com/v1",
    api_key="your_api_key",
    default_headers={"X-Organization-ID": "your_org_id"}
)

def calculate(operation, a, b):
    """Execute calculation client-side"""
    operations = {
        "add": lambda x, y: x + y,
        "subtract": lambda x, y: x - y,
        "multiply": lambda x, y: x * y,
        "divide": lambda x, y: x / y if y != 0 else "Error: Division by zero"
    }
    return {"result": operations[operation](a, b)}

tools = [{
    "type": "function",
    "function": {
        "name": "calculate",
        "description": "Perform arithmetic calculations",
        "parameters": {
            "type": "object",
            "properties": {
                "operation": {
                    "type": "string",
                    "enum": ["add", "subtract", "multiply", "divide"],
                    "description": "The operation to perform"
                },
                "a": {"type": "number", "description": "First number"},
                "b": {"type": "number", "description": "Second number"}
            },
            "required": ["operation", "a", "b"]
        },
        "strict": True
    }
}]

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "system", "content": "You are a helpful assistant with access to a calculator."},
        {"role": "user", "content": "What is 156 multiplied by 243?"}
    ],
    tools=tools
)

if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)

    # Execute tool
    result = calculate(args["operation"], args["a"], args["b"])
    print(f"Calculation result: {result['result']}")

TypeScript: Database Query Tool

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY!,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID!
  }
});

// Tool definition with execution
const tools = {
  queryDatabase: {
    description: 'Query the customer database',
    parameters: z.object({
      customerId: z.string().describe('Customer ID to look up'),
      fields: z.array(z.string()).describe('Fields to retrieve')
    }),
    execute: async ({ customerId, fields }) => {
      // Execute query client-side (your database)
      const customer = await db.customers.findOne({ id: customerId });

      return {
        customer: {
          id: customer.id,
          ...fields.reduce((acc, field) => ({
            ...acc,
            [field]: customer[field]
          }), {})
        }
      };
    }
  }
};

const result = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    {
      role: 'user',
      content: 'Get the email and phone number for customer CUST-12345'
    }
  ],
  tools,
  maxSteps: 5
});

console.log(result.text);

Tool Calling with Context

Combine tool calling with Captain's infinite context processing:

Python: Analyze Document with Tools

from openai import OpenAI
import json

client = OpenAI(
    base_url="https://api.runcaptain.com/v1",
    api_key="your_api_key",
    default_headers={"X-Organization-ID": "your_org_id"}
)

# Read large document
with open('financial_report.txt', 'r') as f:
    document = f.read()

tools = [{
    "type": "function",
    "function": {
        "name": "calculate_total",
        "description": "Calculate sum of numbers",
        "parameters": {
            "type": "object",
            "properties": {
                "numbers": {
                    "type": "array",
                    "items": {"type": "number"},
                    "description": "List of numbers to sum"
                }
            },
            "required": ["numbers"]
        },
        "strict": True
    }
}]

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[
        {"role": "user", "content": "What is the total revenue from all quarters?"}
    ],
    tools=tools,
    extra_body={
        "captain": {
            "context": document  # Large document context
        }
    }
)

if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]
    args = json.loads(tool_call.function.arguments)

    # Execute calculation
    total = sum(args["numbers"])
    print(f"Total revenue: ${total:,.2f}")

TypeScript: Search Documents with API Calls

import { createOpenAI } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { z } from 'zod';
import { promises as fs } from 'node:fs';

const captain = createOpenAI({
  apiKey: process.env.CAPTAIN_API_KEY!,
  baseURL: 'https://api.runcaptain.com/v1',
  headers: {
    'X-Organization-ID': process.env.CAPTAIN_ORG_ID!
  }
});

const largeDataset = await fs.readFile('company_data.json', 'utf-8');

const result = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    {
      role: 'user',
      content: 'Find all employees in the Engineering department and get their current projects from the API'
    }
  ],
  tools: {
    getProjectDetails: {
      description: 'Get project details from external API',
      parameters: z.object({
        projectId: z.string()
      }),
      execute: async ({ projectId }) => {
        const response = await fetch(
          `https://api.company.com/projects/${projectId}`
        );
        return await response.json();
      }
    }
  },
  extra_body: {
    captain: {
      context: largeDataset
    }
  },
  maxSteps: 10
});

console.log(result.text);

Advanced Patterns

Multi-Step Tool Calling

Use the Vercel AI SDK's maxSteps option to let the model chain multiple tool calls automatically:

const result = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [
    {
      role: 'user',
      content: 'Calculate quarterly revenue growth rate'
    }
  ],
  tools: {
    getRevenue: {
      description: 'Get revenue for a specific quarter',
      parameters: z.object({
        quarter: z.string(),
        year: z.number()
      }),
      execute: async ({ quarter, year }) => {
        return { revenue: await fetchRevenue(quarter, year) };
      }
    },
    calculateGrowthRate: {
      description: 'Calculate percentage growth between two values',
      parameters: z.object({
        previous: z.number(),
        current: z.number()
      }),
      execute: async ({ previous, current }) => {
        const growth = ((current - previous) / previous) * 100;
        return { growthRate: growth };
      }
    }
  },
  maxSteps: 10  // Allow multiple tool calls in sequence
});

console.log(result.text);
console.log(`Tools used: ${result.toolCalls?.length || 0}`);

Error Handling

from openai import OpenAI
import json

client = OpenAI(
    base_url="https://api.runcaptain.com/v1",
    api_key="your_api_key",
    default_headers={"X-Organization-ID": "your_org_id"}
)

def safe_execute_tool(tool_name, args):
    """Execute tool with error handling"""
    try:
        if tool_name == "calculate":
            return calculate(args["operation"], args["a"], args["b"])
        elif tool_name == "query_db":
            return query_database(args["query"])
        else:
            return {"error": f"Unknown tool: {tool_name}"}
    except Exception as e:
        return {"error": str(e)}

# `tools`, `calculate`, and `query_database` are defined as in the earlier examples
response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[{"role": "user", "content": "What is 100 divided by 0?"}],
    tools=tools
)

if response.choices[0].finish_reason == "tool_calls":
    for tool_call in response.choices[0].message.tool_calls:
        args = json.loads(tool_call.function.arguments)
        result = safe_execute_tool(tool_call.function.name, args)

        if "error" in result:
            print(f"Tool execution failed: {result['error']}")
        else:
            print(f"Tool result: {result}")

API Reference

Tool Definition Format

{
    "type": "function",
    "function": {
        "name": "function_name",           # Required: Tool name
        "description": "What it does",     # Required: Clear description
        "parameters": {                    # Required: JSON Schema
            "type": "object",
            "properties": {
                "param1": {
                    "type": "string",
                    "description": "Param description"
                }
            },
            "required": ["param1"]
        },
        "strict": True                     # Required: Must be True
    }
}

Response Format

When the model wants to use a tool:

{
  "id": "chatcmpl-...",
  "choices": [{
    "finish_reason": "tool_calls",
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "id": "call_abc123",
        "type": "function",
        "function": {
          "name": "tool_name",
          "arguments": "{\"param\": \"value\"}"
        }
      }]
    }
  }]
}

Parameters

Parameter                    Type     Description
tools                        array    List of tool definitions (OpenAI format)
tool_choice                  string   "auto" (default), "none", or a specific tool
extra_body.captain.context   string   Optional large context to process alongside the request
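
The examples above rely on the default tool_choice of "auto". A minimal sketch of forcing a specific tool, reusing the client and calculator tools from the Complete Examples section, and assuming Captain honors the standard OpenAI tool_choice object:

response = client.chat.completions.create(
    model="captain-voyager-latest",
    messages=[{"role": "user", "content": "What is 12 * 7?"}],
    tools=tools,
    # Force a call to the "calculate" tool; use "auto" (default) or "none" otherwise
    tool_choice={"type": "function", "function": {"name": "calculate"}}
)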

Best Practices

1. Clear Tool Descriptions

# Good
"description": "Calculate the sum of two numbers. Use this when you need to perform addition."

# Bad
"description": "Does math"

2. Strict Parameter Schemas

# Good - Explicit types and descriptions
"parameters": {
    "type": "object",
    "properties": {
        "amount": {
            "type": "number",
            "description": "Dollar amount to process"
        },
        "currency": {
            "type": "string",
            "enum": ["USD", "EUR", "GBP"],
            "description": "Three-letter currency code"
        }
    },
    "required": ["amount", "currency"]
}

3. Security Considerations

# Always validate tool inputs
def execute_tool(tool_name, args):
    # Validate tool exists
    if tool_name not in ALLOWED_TOOLS:
        raise ValueError(f"Tool {tool_name} not allowed")

    # Validate arguments
    if not validate_args(tool_name, args):
        raise ValueError("Invalid arguments")

    # Execute with proper permissions
    return ALLOWED_TOOLS[tool_name](**args)
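
For completeness, a minimal sketch of the registry and validator assumed above; the names and the simple required-key check are illustrative, not part of the Captain API:

# Illustrative registry mapping tool names to your own implementations
ALLOWED_TOOLS = {
    "calculate": calculate,
    "query_db": query_database,
}

# Required argument names per tool, mirroring each tool's "required" schema list
REQUIRED_ARGS = {
    "calculate": {"operation", "a", "b"},
    "query_db": {"query"},
}

def validate_args(tool_name, args):
    """Basic check that all required keys are present; extend with JSON Schema validation as needed."""
    return REQUIRED_ARGS.get(tool_name, set()).issubset(args)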

4. Handle Empty Arguments

# Model might return incomplete arguments
if response.choices[0].finish_reason == "tool_calls":
    tool_call = response.choices[0].message.tool_calls[0]

    try:
        args = json.loads(tool_call.function.arguments)

        # Check for required parameters (required_params = your tool schema's "required" list)
        if not all(k in args for k in required_params):
            print("Warning: Missing required parameters")
            # Provide defaults or skip

    except json.JSONDecodeError:
        print("Warning: Invalid tool arguments")

Limitations

Current Limitations

  1. Multi-Turn Continuation: Sending tool results back for a final answer must be handled manually with the raw SDKs; frameworks like the Vercel AI SDK automate it with maxSteps
  2. Model Parameter Extraction: The model may occasionally return empty or incomplete arguments; mitigate this with clear parameter descriptions
  3. Server-Side Execution: Tools are client-side only and cannot execute on Captain's servers

Framework Support

Framework            Support Level    Multi-Turn              Recommended
OpenAI Python SDK    ✅ Full          Manual                  ⭐ For Python
OpenAI Node.js SDK   ✅ Full          Manual                  ⭐ For Node.js
Vercel AI SDK        ✅ Full          Automatic (maxSteps)    ⭐ Best DX
LangChain            ✅ Compatible    Via agents
Custom REST          ✅ Full          Manual

Troubleshooting

Tool Not Being Called

# Make description more explicit
"description": "ALWAYS use this tool for calculations. Never calculate manually."

# Add stronger system prompt
messages = [
    {
        "role": "system",
        "content": "You MUST use the provided tools. Never perform calculations yourself."
    },
    {"role": "user", "content": "What is 50 + 75?"}
]

Empty Tool Arguments

# Check for empty args and provide defaults
args = json.loads(tool_call.function.arguments)

if not args or not args.get("required_param"):
    # Provide sensible defaults
    args = {
        "required_param": "default_value",
        **args
    }

Multi-Turn Issues

// Use Vercel AI SDK with maxSteps for automatic handling
const result = await generateText({
  model: captain.chat('captain-voyager-latest'),
  messages: [...],
  tools: {...},
  maxSteps: 5  // Handles multi-turn automatically
});

Support

Need help with tool calling?