TypeScript SDK Reference
The Spanora TypeScript SDK is an optional layer of DX sugar and improved signal quality for AI observability on top of raw OpenTelemetry. The platform works with raw OpenTelemetry data alone; the SDK adds convenience wrappers for common patterns.
Design Principles
- Attaches to existing OTEL context if present — never creates duplicate traces
- Never overrides exporters or environment variables
- Semantic attributes following `gen_ai.*` conventions with `custom.*` extensions
Initialization
init()
Configures the SDK and sets up the OTEL exporter. Call once at application startup. Idempotent — calling multiple times is a no-op. Registers a beforeExit hook for automatic shutdown in scripts.
```typescript
import { init } from "@spanora-ai/sdk";

const { shutdown } = init({
  apiKey: process.env.SPANORA_API_KEY,
  serviceName: "my-agent-service",
  environment: "production",
});
```

Options
| Option | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | — | API key for authenticating with the backend |
| `serviceName` | `string` | `"ai-app"` | Service name reported as a resource attribute |
| `environment` | `string` | — | Deployment environment (e.g. `"production"`, `"staging"`) |
Returns `{ shutdown }` — a handle for explicit teardown (see `shutdown()`).
configure()
Sets the global SDK configuration separately from initialization. Call before init() if you want to split configuration from startup:
```typescript
import { configure, init } from "@spanora-ai/sdk";

configure({
  apiKey: process.env.SPANORA_API_KEY,
});

// Later...
init(); // Uses the previously configured values
```

shutdown()
Flushes pending spans and shuts down the SDK-managed provider. No-op when no SDK provider exists. Safe to call multiple times.
```typescript
import { init, shutdown } from "@spanora-ai/sdk";

init({ apiKey: process.env.SPANORA_API_KEY });

// ... your application logic ...

await shutdown();
```

For short-lived scripts, `init()` registers a `beforeExit` hook that calls `shutdown()` automatically.
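Long-lived servers are usually stopped by a signal rather than a natural exit, so it can make sense to call `shutdown()` from your own signal handler. A minimal sketch of that pattern (the signal wiring is not part of the SDK, and it relies on `shutdown()` being safe to call multiple times):

```typescript
// Hypothetical pattern for a long-running server: flush pending spans
// on SIGTERM before the process exits.
import { init, shutdown } from "@spanora-ai/sdk";

init({ apiKey: process.env.SPANORA_API_KEY });

process.on("SIGTERM", async () => {
  await shutdown(); // flushes pending spans and tears down the provider
  process.exit(0);
});
```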
createSpanoraExporter()
Creates a SpanProcessor for users with an existing OTEL provider/SDK setup. Add the returned processor to your existing provider instead of calling init().
```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { createSpanoraExporter } from "@spanora-ai/sdk";

const sdk = new NodeSDK({
  spanProcessors: [
    createSpanoraExporter({
      apiKey: process.env.SPANORA_API_KEY,
    }),
  ],
});

sdk.start();
```

Tracking Executions
track()
Wraps an execution boundary (agent run, request, job) with context. Creates a root span with context attributes, runs the callback inside its OTEL context, and auto-ends the span on completion or error.
All child calls (trackLlm, trackToolHandler, or any OTEL-aware library like Vercel AI SDK) automatically become children of this span.
```typescript
import { init, track } from "@spanora-ai/sdk";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track(
  {
    agent: "support-agent",
    agentVersion: "1.2.0",
    userId: "user-123",
    orgId: "org-456",
    agentSessionId: "sess-789",
    attributes: { "custom.priority": "high" },
  },
  async () => {
    // Your agent logic here
    return await doWork();
  },
);
```

TrackContext Options
| Option | Type | Description |
|---|---|---|
| `agent` | `string` | Name of the agent implementation |
| `agentVersion` | `string` | Version of the agent implementation |
| `agentSessionId` | `string` | Session/conversation identifier |
| `userId` | `string` | End-user identifier |
| `orgId` | `string` | Organization/tenant identifier |
| `attributes` | `Record<string, string \| number \| boolean>` | Arbitrary key-value attributes on the root span |
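Because the root span auto-ends on error, a failed run still produces a complete trace. A minimal sketch, assuming `track()` re-throws the callback's error after ending the span (the error handling shown is your own, not SDK behavior):

```typescript
import { track } from "@spanora-ai/sdk";

try {
  await track({ agent: "support-agent" }, async () => {
    throw new Error("upstream timeout");
  });
} catch (error) {
  // The root span has already been ended by track();
  // handle or log the failure as usual.
  console.error("agent run failed:", error);
}
```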
Wrapping LLM Calls
trackLlm()
Tracks an LLM call. Creates a child span under the current active context with LLM-specific attributes (model, provider, tokens). Auto-ends the span on completion or error.
Use the optional extractResult callback (third argument or meta.extract) to pull tokens and output from the return value after execution:
```typescript
import { init, track, trackLlm } from "@spanora-ai/sdk";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track({ agent: "support-agent" }, () =>
  trackLlm(
    { model: "gpt-4o", provider: "openai", prompt: "Hello!" },
    () => myLlmClient.generate("Hello!"),
    (result) => ({
      output: result.text,
      inputTokens: result.usage.input,
      outputTokens: result.usage.output,
    }),
  ),
);
```

LlmMeta Options
| Option | Type | Description |
|---|---|---|
| `provider` | `string` | LLM provider (e.g. `"openai"`, `"anthropic"`) |
| `model` | `string` | Model name (e.g. `"gpt-4o"`) |
| `operation` | `string` | Operation type (default: `"chat"`). Standard values: `"chat"`, `"text_completion"`, `"embeddings"` |
| `prompt` | `string \| Record<string, unknown>[]` | Input prompt or messages array |
| `output` | `string \| object` | Output text or object |
| `inputTokens` | `number` | Input token count |
| `outputTokens` | `number` | Output token count |
| `durationMs` | `number` | Pre-measured duration in ms (fire-and-forget only) |
| `extract` | `(result: unknown) => LlmResult` | Result extractor (alternative to the third arg) |
| `attributes` | `Record<string, string \| number \| boolean>` | Custom span attributes |
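The same extraction can be supplied via the `extract` option in the meta object instead of as the third argument. A sketch of that form, reusing the hypothetical `myLlmClient` from the example above:

```typescript
import { trackLlm } from "@spanora-ai/sdk";

const result = await trackLlm(
  {
    model: "gpt-4o",
    provider: "openai",
    prompt: "Hello!",
    // Equivalent to passing the extractor as the third argument
    extract: (r: any) => ({
      output: r.text,
      inputTokens: r.usage.input,
      outputTokens: r.usage.output,
    }),
  },
  () => myLlmClient.generate("Hello!"),
);
```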
trackLlmStream()
Starts tracking a streaming LLM call. Creates a child span and returns an end function — call it when the stream completes to finalize tracking:
```typescript
import { trackLlmStream } from "@spanora-ai/sdk";

const endStream = trackLlmStream({
  model: "gpt-4o",
  provider: "openai",
  prompt: "Hello!",
});

try {
  const stream = myLlmClient.stream("Hello!");
  let output = "";
  for await (const chunk of stream) {
    output += chunk.text;
  }
  endStream({ output, inputTokens: 10, outputTokens: 50 });
} catch (error) {
  endStream({ error: error as Error });
}
```

The end function signature: `(opts?: EndStreamOptions) => void`
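If you track several streams, the try/catch pattern above can be factored into a small reusable helper. A sketch (the helper is not part of the SDK; the end-function shape is assumed from the signature above) that accumulates chunk text and guarantees the end function is called exactly once:

```typescript
// Hypothetical convenience wrapper around a trackLlmStream-style end
// function: consumes an async iterable of text chunks, accumulates the
// output, and finalizes the span on success or error.
type EndFn = (opts?: {
  output?: string;
  inputTokens?: number;
  outputTokens?: number;
  error?: Error;
}) => void;

async function consumeTrackedStream(
  end: EndFn,
  stream: AsyncIterable<{ text: string }>,
): Promise<string> {
  let output = "";
  try {
    for await (const chunk of stream) {
      output += chunk.text;
    }
    end({ output });
    return output;
  } catch (error) {
    end({ error: error as Error });
    throw error; // preserve the caller's error handling
  }
}
```

Usage: `const endStream = trackLlmStream({ model: "gpt-4o", provider: "openai" }); await consumeTrackedStream(endStream, stream);`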
recordLlm()
Fire-and-forget LLM recording. Creates a span, sets all attributes, and ends it immediately. No wrapping needed — use when you've already made the call and want to record it after the fact.
If durationMs is provided, the span start time is backdated to reflect the original call duration.
```typescript
import { recordLlm } from "@spanora-ai/sdk";

recordLlm({
  model: "gpt-4o",
  provider: "openai",
  prompt: "Hello!",
  output: "Hi there!",
  inputTokens: 5,
  outputTokens: 3,
  durationMs: 450,
});
```

Wrapping Tool Calls
trackToolHandler()
Higher-order function that wraps a tool handler with OTEL instrumentation. Returns a function with the same contract as the original handler. Compatible with any tool framework (Vercel AI SDK, Anthropic, OpenAI, manual loops).
Creates a tool.call span as a child of the active context, auto-serializes input/output to JSON for span attributes, and re-throws errors. The handler is reusable — each invocation creates a new span.
```typescript
import { trackToolHandler } from "@spanora-ai/sdk";

const getWeather = trackToolHandler(
  "getWeather",
  async (input: { city: string }) => {
    const data = await weatherApi.lookup(input.city);
    return { temperature: data.temp, condition: data.condition };
  },
);

// Use like a normal function — spans are created automatically
const result = await getWeather({ city: "Paris" });
```

runTool()
One-shot tool execution with error handling. Catches errors and returns { output, error } instead of throwing. Useful for manual tool loops where you build SDK-specific result objects.
```typescript
import { runTool } from "@spanora-ai/sdk";
import type { ToolCallResult } from "@spanora-ai/sdk";

const result: ToolCallResult<WeatherData> = await runTool(
  "getWeather",
  { city: "Paris" },
  async (input) => weatherApi.lookup(input.city),
);

if (result.error) {
  console.error("Tool failed:", result.error);
} else {
  console.log("Weather:", result.output);
}
```

recordTool()
Fire-and-forget tool recording. Creates a span, sets all attributes, and ends it immediately.
If durationMs is provided, the span start time is backdated to reflect the original call duration.
```typescript
import { recordTool } from "@spanora-ai/sdk";

recordTool({
  name: "search-kb",
  input: { query: "docs" },
  status: "success",
  durationMs: 150,
});
```

ToolMeta Options
| Option | Type | Description |
|---|---|---|
| `name` | `string` | Tool name (e.g. `"web_search"`, `"calculator"`) |
| `status` | `string` | Tool execution status (e.g. `"success"`, `"error"`) |
| `input` | `unknown` | Tool call arguments (auto-serialized to JSON) |
| `durationMs` | `number` | Pre-measured duration in ms (fire-and-forget only) |
| `attributes` | `Record<string, string \| number \| boolean>` | Custom span attributes |
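Failed calls can be recorded the same way, using the `status` and `attributes` options from the table above. A sketch (the attribute key is illustrative, not a reserved name):

```typescript
import { recordTool } from "@spanora-ai/sdk";

recordTool({
  name: "search-kb",
  input: { query: "docs" },
  status: "error",
  durationMs: 90, // backdates the span start by 90 ms
  attributes: { "custom.error_code": "TIMEOUT" }, // hypothetical key
});
```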
Provider Integrations
For provider-specific wrappers and integrations, see the integration pages:
- OpenAI SDK — `trackOpenAI()`, `trackOpenAIStream()`
- Anthropic SDK — `trackAnthropic()`, `trackAnthropicStream()`
- Vercel AI SDK — Automatic instrumentation via `experimental_telemetry`
- LangChain (Python) — Auto-instrumentation via OpenLLMetry, no Spanora SDK required