Installation

bun add raysurfer

Setup

Set your API key:
export RAYSURFER_API_KEY=your_api_key_here
Get your key from the dashboard.

Usage

Swap your import — everything else stays the same:
// Before
import { query } from "@anthropic-ai/claude-agent-sdk";

// After
import { query } from "raysurfer";

for await (const message of query({
  prompt: "Fetch data from GitHub API",
  options: {
    model: "claude-opus-4-6",
    systemPrompt: "You are a helpful assistant.",
  },
})) {
  console.log(message);
}
All Claude SDK types are re-exported from raysurfer, so you don’t need a separate import:
import {
  query,
  type Options,
  type SDKMessage,
  type Query,
} from "raysurfer";

How It Works

  1. On query: Retrieves cached code blocks matching your task
  2. Injects into prompt: Agent sees proven code snippets
  3. After success: New code is cached for next time
Caching is enabled automatically when RAYSURFER_API_KEY is set. Without it, the SDK behaves exactly like the original Claude Agent SDK.
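The gate described above can be sketched in a few lines. cachingEnabled is a hypothetical helper (not an SDK export) that mirrors the documented behavior:

```typescript
// Hypothetical sketch of the caching gate: retrieval, injection, and upload
// run only when RAYSURFER_API_KEY is present; otherwise query() passes
// straight through to the Claude Agent SDK.
function cachingEnabled(env: Record<string, string | undefined>): boolean {
  const key = env["RAYSURFER_API_KEY"];
  return typeof key === "string" && key.length > 0;
}

console.log(cachingEnabled({ RAYSURFER_API_KEY: "sk_test" })); // true
console.log(cachingEnabled({})); // false
```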

Agent-Owned Repos

Set agentId to isolate uploads/searches into an agent-owned repo:
import { RaySurfer } from "raysurfer";

const client = new RaySurfer({
  apiKey: "your_api_key",
  agentId: "claims-agent-v1",
});

Agent-Accessible Functions

Decorator flow:
import { RaySurfer, agentAccessible } from "raysurfer";

const rs = new RaySurfer({ apiKey: "your_api_key", agentId: "claims-agent-v1" });

const generateClaimSummary = agentAccessible(
  (claimId: string) => `summary-${claimId}.pdf`,
  {
    name: "generateClaimSummary",
    description: "Generate claim summary PDF",
    parameters: { claimId: "string" },
  },
);

await rs.publishFunctionRegistry([generateClaimSummary]);
raysurfer.yaml flow:
import * as claimsModule from "./src/claims";
import { RaySurfer, loadConfig } from "raysurfer";

const rs = new RaySurfer({ apiKey: "your_api_key", agentId: "claims-agent-v1" });
const functions = loadConfig("raysurfer.yaml", {
  "src/claims.ts": claimsModule,
});
await rs.publishFunctionRegistry(functions);

Class-based API

For users who prefer a class-based interface:
import { ClaudeSDKClient } from "raysurfer";

const client = new ClaudeSDKClient({
  model: "claude-opus-4-6",
  systemPrompt: "You are a helpful assistant.",
});

for await (const msg of client.query("Fetch data from GitHub API")) {
  console.log(msg);
}
RaysurferClient is also exported as an alias for ClaudeSDKClient — use whichever name you prefer.

System Prompt Preset

Use the Claude Code preset system prompt with appended instructions:
for await (const message of query({
  prompt: "Refactor the auth module",
  options: {
    systemPrompt: {
      type: "preset",
      preset: "claude_code",
      append: "Always explain your reasoning.",
    },
  },
})) {
  console.log(message);
}

Query Control Methods

The query() function returns a Query object with full control methods:
const q = query({ prompt: "Build a REST API" });

await q.interrupt();
await q.setPermissionMode("acceptEdits");
await q.setModel("claude-sonnet-4-5-20250929");
await q.setMaxThinkingTokens(4096);
const models = await q.supportedModels();
const info = await q.accountInfo();
q.close();

Snippet Retrieval Scope

Control which cached snippets are retrieved:
import { ClaudeSDKClient } from "raysurfer";

// Include company-level snippets (Team/Enterprise)
const client = new ClaudeSDKClient({
  snipsDesired: "company",
});

// Pro/Enterprise: client-specific snippets
const proClient = new ClaudeSDKClient({
  snipsDesired: "client",
});
| Configuration | Required Tier |
| --- | --- |
| Default (public only) | FREE |
| snipsDesired: "company" | PRO or ENTERPRISE |
| snipsDesired: "client" | PRO or ENTERPRISE (requires workspaceId) |
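The tier rules can be encoded as a small pre-flight check. requiredTier below is a hypothetical helper written against the table above, not an SDK export:

```typescript
// Hypothetical helper mirroring the tier table; not part of the SDK.
type SnipScope = "public" | "company" | "client";

function requiredTier(scope: SnipScope, workspaceId?: string): string {
  if (scope === "public") return "FREE";
  if (scope === "client" && !workspaceId) {
    // Per the table, "client" scope also needs a workspaceId.
    throw new Error('snipsDesired: "client" requires a workspaceId');
  }
  return "PRO or ENTERPRISE";
}
```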

Public Snippets

Include community public snippets (crawled from GitHub) alongside your private snippets:
// High-level
const client = new ClaudeSDKClient({ publicSnips: true });

// Low-level
const rs = new RaySurfer({ apiKey: "...", publicSnips: true });
See How It Works — Public Snippets for details.

Low-Level API

For custom integrations, use the RaySurfer client directly:
import { RaySurfer } from "raysurfer";

const client = new RaySurfer({ apiKey: "your_api_key" });

// 1. Search for cached code
const searchResult = await client.search({
  task: "Fetch GitHub trending repos",
  perFunctionReputation: true,
});
for (const match of searchResult.matches) {
  console.log(`${match.codeBlock.name}: ${match.combinedScore}`);
}

// Your LLM execution loop goes here
// e.g. Claude Agent SDK generates and runs code using the cached snippets

// 2. Upload a new code file after execution (with dependency versions)
await client.upload({
  task: "Fetch GitHub trending repos",
  fileWritten: { path: "fetch_repos.ts", content: "function fetch() { ... }" },
  succeeded: true,
  dependencies: { "node-fetch": "3.3.0", "zod": "3.22.0" },
  perFunctionReputation: true,
});

// 2b. Bulk upload prompts/logs/code for sandboxed grading
await client.uploadBulkCodeSnips(
  ["Build a CLI tool", "Add CSV support"],
  [{ path: "cli.ts", content: "function main() { ... }" }],
  [{ path: "logs/run.log", content: "Task completed", encoding: "utf-8" }],
  true
);

// 3. Vote on whether a cached snippet was useful
await client.voteCodeSnip({
  task: "Fetch GitHub trending repos",
  codeBlockId: "abc123",
  codeBlockName: "github_fetcher",
  codeBlockDescription: "Fetches trending repos from GitHub",
  succeeded: true,
});

Low-Level Client Options

const client = new RaySurfer({
  apiKey: "your_api_key",
  baseUrl: "https://api.raysurfer.com", // optional
  timeout: 30000, // optional, in ms
  organizationId: "org_xxx", // optional, for team namespacing
  workspaceId: "ws_xxx", // optional, for per-customer namespacing (Pro or Enterprise)
  agentId: "claims-agent-v1", // optional, for agent-owned repo isolation
  snipsDesired: "company", // optional, snippet scope
  publicSnips: true, // optional, include community snippets
});

Response Fields

The search() response includes:
| Field | Type | Description |
| --- | --- | --- |
| files | CodeFile[] | Retrieved code files with metadata |
| task | string | The task that was searched |
| totalFound | number | Total matches found |
| addToLlmPrompt | string | Pre-formatted string to append to the LLM system prompt |
Each CodeFile contains:
| Field | Type | Description |
| --- | --- | --- |
| codeBlockId | string | Unique identifier of the code block |
| filename | string | Generated filename (e.g., "github_fetcher.ts") |
| source | string | Full source code |
| entrypoint | string | Main function to call |
| description | string | What the code does |
| inputSchema | Record<string, JsonValue> | JSON schema for inputs |
| outputSchema | Record<string, JsonValue> | JSON schema for outputs |
| language | string | Programming language |
| dependencies | Record<string, string> | Package name to version mapping |
| verdictScore | number | Quality rating (thumbsUp / total, 0.3 if unrated) |
| thumbsUp | number | Positive votes |
| thumbsDown | number | Negative votes |
| similarityScore | number | Semantic similarity (0–1) |
| combinedScore | number | Combined score (similarity * 0.6 + verdict * 0.4) |
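The two score formulas in the table can be written out directly. These sketches follow the documented descriptions (thumbsUp / total with a 0.3 default, and similarity * 0.6 + verdict * 0.4); the server-side implementation may differ in details such as rounding:

```typescript
// verdictScore: fraction of positive votes, defaulting to 0.3 when unrated.
function verdictScore(thumbsUp: number, thumbsDown: number): number {
  const total = thumbsUp + thumbsDown;
  return total === 0 ? 0.3 : thumbsUp / total;
}

// combinedScore: weighted blend of semantic similarity and community verdict.
function combinedScore(similarity: number, verdict: number): number {
  return similarity * 0.6 + verdict * 0.4;
}

console.log(verdictScore(3, 1)); // 0.75
console.log(verdictScore(0, 0)); // 0.3
console.log(combinedScore(0.9, 0.75)); // ~0.84
```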

Search Response

The search() method returns a SearchResponse:
| Field | Type | Description |
| --- | --- | --- |
| matches | SearchMatch[] | Matching code blocks with full scoring |
| totalFound | number | Total matches found |
| cacheHit | boolean | Whether results were served from cache |
| searchNamespaces | string[] | Namespaces that were searched |
Each SearchMatch contains:
| Field | Type | Description |
| --- | --- | --- |
| codeBlock | CodeBlock | The code block (has id, name, description, source, entrypoint, language, dependencies, tags) |
| combinedScore | number | Overall relevance score (0–1) |
| vectorScore | number | Semantic similarity score |
| verdictScore | number | Community quality rating |
| errorResilience | number | Error handling robustness score |
| thumbsUp | number | Positive votes |
| thumbsDown | number | Negative votes |
| filename | string | Generated filename for the code |
| language | string | Programming language |
| entrypoint | string | Main function name |
| dependencies | Record<string, string> | Package name to version mapping (e.g., {"node-fetch": "3.3.0"}) |
| agentId | string \| null | Agent identifier when snippet came from an agent-owned repo |
| s3Url | string \| null | Presigned download URL for agent-stored script content |
| functions | FunctionReputation[] \| null | Per-function reputation metadata (when perFunctionReputation: true) |
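Because every match carries its full scoring fields, results can also be re-ranked or filtered client-side. topMatches is a hypothetical post-processing helper, not an SDK method:

```typescript
// Hypothetical client-side filter/sort over SearchMatch-shaped objects.
interface MatchLike {
  combinedScore: number;
  verdictScore: number;
}

function topMatches<T extends MatchLike>(
  matches: T[],
  minVerdictScore = 0,
  topK = 5,
): T[] {
  return matches
    .filter((m) => m.verdictScore >= minVerdictScore) // drop low-reputation snippets
    .sort((a, b) => b.combinedScore - a.combinedScore) // best overall score first
    .slice(0, topK);
}
```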

Method Reference

| Method | Description |
| --- | --- |
| search({ task, topK?, minVerdictScore?, preferComplete?, perFunctionReputation? }) | Search for cached code snippets |
| upload({ task, fileWritten, succeeded, perFunctionReputation?, ... }) | Store a code file for future reuse |
| uploadBulkCodeSnips({ prompts, filesWritten, ... }) | Bulk upload for grading |
| delete(snippetId) | Delete a snippet by ID or name |
| voteCodeSnip({ task, codeBlockId, codeBlockName, codeBlockDescription, succeeded }) | Vote on snippet usefulness |
| commentOnCodeSnip({ codeBlockId, text }) | Add a comment to a snippet |
| publishFunctionRegistry(functions) | Publish agentAccessible(...) functions for agent discovery |
| tool(name, description, parameters, callback) | Register a local function as a tool for execute() |
| execute(task, { userCode?, codegen?, timeout? }) | Execute user code (primary) or sandbox-generated code (optional) with registered tools |

Programmatic Tool Calling

Also available as an integration guide: Register TypeScript Functions. Register local tools, then either pass userCode (primary mode) or use the optional sandbox codegen mode with your own API key and prompt:
import { RaySurfer } from "raysurfer";

const rs = new RaySurfer({ apiKey: "your_api_key" });

rs.tool("add", "Add two numbers", { a: "integer", b: "integer" },
  async (args) => String(Number(args.a) + Number(args.b))
);

rs.tool("multiply", "Multiply two numbers", { a: "integer", b: "integer" },
  async (args) => String(Number(args.a) * Number(args.b))
);

const userCode = `
intermediate = add(5, 3)
final = multiply(intermediate, 2)
print(final)
`;
const result = await rs.execute("Add 5 and 3, then multiply the result by 2", {
  userCode,
});
console.log(result.result);     // "16"
console.log(result.toolCalls);  // [{toolName: 'add', ...}, ...]
console.log(result.cacheHit);   // false (reserved field)
tool() lets you wrap any local function as a tool callback. The description string you pass is included in the tool schema payload.

How It Works

  1. SDK opens a live callback channel for tool call routing
  2. Your app sends either userCode (primary mode) or codegen (optional mode) to /api/execute/run
  3. Code runs in a Modal sandbox — tool calls are routed back to your local functions through that callback channel
  4. Results are returned with full tool call history
Use exactly one mode per call:
  • userCode: run pre-generated code directly (recommended default)
  • codegen: generate code in sandbox and run it
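A caller-side guard for the one-mode rule might look like the following. assertOneMode is illustrative only; the SDK's own validation may differ:

```typescript
// Illustrative guard enforcing "exactly one of userCode or codegen";
// not part of the SDK.
function assertOneMode(opts: { userCode?: string; codegen?: unknown }): void {
  const provided = [opts.userCode, opts.codegen].filter((m) => m !== undefined);
  if (provided.length !== 1) {
    throw new Error("execute() requires exactly one of userCode or codegen");
  }
}
```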

Execute Options

const result = await rs.execute("Your task description", {
  userCode: "print(add(1, 2))", // Primary mode
  timeout: 300000,              // Max time in ms (default 300000)
});

const sandboxCodegenResult = await rs.execute("Your task description", {
  codegen: {
    provider: "anthropic",
    apiKey: "YOUR_ANTHROPIC_API_KEY",
    model: "claude-opus-4-6",
    prompt: "Write Python code that uses add(a, b) and prints 2 + 3.",
  },
  timeout: 300000,
});

ExecuteResult Fields

| Field | Type | Description |
| --- | --- | --- |
| executionId | string | Unique execution identifier |
| result | string \| null | Stdout output from the script |
| exitCode | number | Process exit code (0 = success) |
| durationMs | number | Total execution time in milliseconds |
| cacheHit | boolean | Reserved field (currently always false for execute) |
| error | string \| null | Error message if exitCode != 0 |
| toolCalls | ToolCallRecord[] | All tool calls made during execution |
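A typical consumer checks exitCode before trusting result. unwrapResult below is a hypothetical convenience wrapper over an ExecuteResult-shaped object, following the field semantics in the table above:

```typescript
// Hypothetical helper: return stdout on success, throw on failure
// (exitCode 0 = success, per the table above).
interface ExecuteResultLike {
  exitCode: number;
  result: string | null;
  error: string | null;
}

function unwrapResult(r: ExecuteResultLike): string {
  if (r.exitCode !== 0) {
    throw new Error(r.error ?? `execution failed with exit code ${r.exitCode}`);
  }
  return r.result ?? "";
}
```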