Anthropic SDK Tracing Integration
Integrate Spanora with the Anthropic SDK using auto-extraction wrappers. Trace Claude API calls, token usage, costs, and tool use automatically.
The @spanora-ai/sdk/anthropic subpath provides trackAnthropic and trackAnthropicStream which auto-extract model, tokens, and output from Anthropic message responses — no manual extractResult callback needed.
For tool-use responses (where there is no text block), the full content array is JSON-stringified so tool calls are visible in spans.
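As a sketch of the tool-use case: the wrapper is used exactly as for text responses, and the stringified content array (including tool_use blocks) lands in the span output. The get_weather tool below is hypothetical, defined only for illustration.

```typescript
import { track } from "@spanora-ai/sdk";
import { trackAnthropic } from "@spanora-ai/sdk/anthropic";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// When Claude answers with tool_use blocks instead of a text block, the
// wrapper records the JSON-stringified content array as the span output.
const message = await track({ agent: "weather-agent" }, () =>
  trackAnthropic({ prompt: "What's the weather in Paris?" }, () =>
    client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 256,
      tools: [
        {
          name: "get_weather", // hypothetical tool for illustration
          description: "Get the current weather for a city",
          input_schema: {
            type: "object",
            properties: { city: { type: "string" } },
            required: ["city"],
          },
        },
      ],
      messages: [{ role: "user", content: "What's the weather in Paris?" }],
    }),
  ),
);
```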
Installation
npm install @spanora-ai/sdk @anthropic-ai/sdk
Basic Usage
import { init, track } from "@spanora-ai/sdk";
import { trackAnthropic } from "@spanora-ai/sdk/anthropic";
import Anthropic from "@anthropic-ai/sdk";
init({ apiKey: process.env.SPANORA_API_KEY });
const client = new Anthropic();
const message = await track({ agent: "my-agent" }, () =>
  trackAnthropic({ prompt: "Hello!" }, () =>
    client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 256,
      messages: [{ role: "user", content: "Hello!" }],
    }),
  ),
);
track() also accepts optional userId, orgId, and agentSessionId to attach user context, and LLM calls accept operation (default: "chat"; set it to "embeddings" or "text_completion" when applicable). See the SDK Reference for all options.
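For example, a call that attaches user context and sets the operation explicitly might look like the following sketch (the ID values are placeholders):

```typescript
import { track } from "@spanora-ai/sdk";
import { trackAnthropic } from "@spanora-ai/sdk/anthropic";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Attach user context to the agent span and mark the LLM call's operation.
const message = await track(
  {
    agent: "my-agent",
    userId: "user_123",        // placeholder IDs for illustration
    orgId: "org_456",
    agentSessionId: "sess_789",
  },
  () =>
    trackAnthropic({ prompt: "Hello!", operation: "chat" }, () =>
      client.messages.create({
        model: "claude-sonnet-4-20250514",
        max_tokens: 256,
        messages: [{ role: "user", content: "Hello!" }],
      }),
    ),
);
```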
Passing Messages as Input
prompt accepts a string or a messages array (auto-serialized to JSON):
const params = {
model: "claude-sonnet-4-20250514",
messages: [{ role: "user" as const, content: "Hello!" }],
};
const message = await trackAnthropic({ prompt: params.messages }, () =>
  client.messages.create(params),
);
Streaming
For streaming where the stream is collected in a single async scope, wrap the collection in trackAnthropic:
import { trackAnthropic } from "@spanora-ai/sdk/anthropic";
const message = await trackAnthropic({ prompt: "Hello!" }, async () => {
  const stream = client.messages.stream({
    model: "claude-sonnet-4-20250514",
    max_tokens: 256,
    messages: [{ role: "user", content: "Hello!" }],
  });
  for await (const event of stream) {
    // process tokens
  }
  return await stream.finalMessage();
});
For cases where the stream isn't collected in a single async scope, use trackAnthropicStream:
import { trackAnthropicStream } from "@spanora-ai/sdk/anthropic";
const endStream = trackAnthropicStream({ prompt: "Hello!" });
// ... accumulate stream ...
endStream(finalMessage);
Call the returned function with the final Anthropic.Message object when the stream completes. Pass an optional second Error argument if the stream failed.
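Putting the pieces together, a minimal end-to-end sketch of this pattern, including the error path (passing undefined as the first argument on failure is an assumption; the documented contract only specifies the optional Error second argument):

```typescript
import { trackAnthropicStream } from "@spanora-ai/sdk/anthropic";
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

// Start the LLM span up front, before the stream is handed to another scope.
const endStream = trackAnthropicStream({ prompt: "Hello!" });

const stream = client.messages.stream({
  model: "claude-sonnet-4-20250514",
  max_tokens: 256,
  messages: [{ role: "user", content: "Hello!" }],
});

try {
  // finalMessage() resolves once the stream has been fully consumed.
  const finalMessage = await stream.finalMessage();
  endStream(finalMessage);
} catch (err) {
  // Assumption: the message argument may be left undefined when reporting
  // a stream failure via the optional Error argument.
  endStream(undefined as any, err as Error);
}
```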
AnthropicMeta Options
All options from LlmMeta except provider and model (which are auto-extracted from the response):
| Option | Type | Description |
|---|---|---|
| operation | string | Operation type (default: "chat"). Standard values: "chat", "text_completion", "embeddings" |
| prompt | string \| Record<string, unknown>[] | Input prompt or messages array |
| output | string \| object | Output text (auto-extracted, rarely needed) |
| inputTokens | number | Input token count (auto-extracted) |
| outputTokens | number | Output token count (auto-extracted) |
| durationMs | number | Pre-measured duration in ms (fire-and-forget only) |
| extract | (result: unknown) => LlmResult | Custom result extractor |
| attributes | Record<string, string \| number \| boolean> | Custom span attributes |
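Where auto-extraction doesn't fit, the extract option lets you map the raw result yourself. Below is a sketch that keeps only the first text block of an Anthropic message; the LlmResult field names (model, inputTokens, outputTokens, output) are assumptions here — check the SDK Reference for the exact shape.

```typescript
// Assumed LlmResult shape for illustration -- see the SDK Reference for
// the real field names.
type LlmResult = {
  model?: string;
  inputTokens?: number;
  outputTokens?: number;
  output?: string;
};

// Custom extractor: pull model, token usage, and the first text block
// from a raw Anthropic message response.
function extractFirstTextBlock(result: unknown): LlmResult {
  const msg = result as {
    model: string;
    usage: { input_tokens: number; output_tokens: number };
    content: Array<{ type: string; text?: string }>;
  };
  return {
    model: msg.model,
    inputTokens: msg.usage.input_tokens,
    outputTokens: msg.usage.output_tokens,
    output: msg.content.find((b) => b.type === "text")?.text ?? "",
  };
}
```

Pass it via the meta object: trackAnthropic({ prompt, extract: extractFirstTextBlock }, ...).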
Next Steps
- Vercel AI SDK — Recommended integration with automatic instrumentation
- SDK Reference — Full API reference