Anthropic SDK Tracing Integration

Integrate Spanora with the Anthropic SDK using auto-extraction wrappers. Trace Claude API calls, token usage, costs, and tool use automatically.

The @spanora-ai/sdk/anthropic subpath provides trackAnthropic and trackAnthropicStream which auto-extract model, tokens, and output from Anthropic message responses — no manual extractResult callback needed.

For tool-use responses (where there is no text block), the full content array is JSON-stringified so tool calls are visible in spans.
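Conceptually, the output extraction behaves like the following sketch. This is a simplified illustration of the described behavior, not Spanora's actual implementation:

```typescript
// Simplified sketch of how the span output is derived from an
// Anthropic message's content array. Illustrative only.
type ContentBlock =
  | { type: "text"; text: string }
  | { type: "tool_use"; id: string; name: string; input: unknown };

function extractOutput(content: ContentBlock[]): string {
  const textBlock = content.find(
    (b): b is Extract<ContentBlock, { type: "text" }> => b.type === "text",
  );
  if (textBlock) return textBlock.text;
  // No text block (tool-use-only response): stringify the full
  // content array so tool calls remain visible in the span.
  return JSON.stringify(content);
}
```

A tool-use-only response therefore yields a JSON string containing each tool call's name and input.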

Installation

npm install @spanora-ai/sdk @anthropic-ai/sdk

Basic Usage

basic.ts
import { init, track } from "@spanora-ai/sdk";
import { trackAnthropic } from "@spanora-ai/sdk/anthropic";
import Anthropic from "@anthropic-ai/sdk";

init({ apiKey: process.env.SPANORA_API_KEY });

const client = new Anthropic();

const message = await track({ agent: "my-agent" }, () =>
  trackAnthropic({ prompt: "Hello!" }, () =>
    client.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 256,
      messages: [{ role: "user", content: "Hello!" }],
    }),
  ),
);

track() also accepts optional userId, orgId, and agentSessionId to attach user context. LLM calls accept operation (default: "chat"); set it to "embeddings" or "text_completion" when applicable. See SDK Reference for all options.

Passing Messages as Input

prompt accepts a string or a messages array (auto-serialized to JSON):

messages-input.ts
const params = {
  model: "claude-sonnet-4-20250514",
  messages: [{ role: "user" as const, content: "Hello!" }],
};

const message = await trackAnthropic({ prompt: params.messages }, () =>
  client.messages.create(params),
);

Streaming

For streaming where the stream is collected in a single async scope, wrap the collection in trackAnthropic:

streaming.ts
import { trackAnthropic } from "@spanora-ai/sdk/anthropic";

const message = await trackAnthropic({ prompt: "Hello!" }, async () => {
  const stream = client.messages.stream({
    model: "claude-sonnet-4-20250514",
    max_tokens: 256,
    messages: [{ role: "user", content: "Hello!" }],
  });
  for await (const event of stream) {
    // process tokens
  }
  return await stream.finalMessage();
});

For cases where the stream isn't in a single async scope, use trackAnthropicStream:

manual-stream.ts
import { trackAnthropicStream } from "@spanora-ai/sdk/anthropic";

const endStream = trackAnthropicStream({ prompt: "Hello!" });
// ... accumulate stream ...
endStream(finalMessage);

Call the returned function with the final Anthropic.Message object when the stream completes. Pass an optional second Error argument if the stream failed.
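When accumulating the stream yourself, text deltas can be folded up as in this sketch (assuming the Anthropic SDK's standard content_block_delta / text_delta event shapes); you would still pass the SDK's final message object to the end function:

```typescript
// Minimal accumulator for Anthropic streaming text deltas.
// Illustrative only; event shapes follow the Anthropic SDK's
// content_block_delta / text_delta convention.
interface TextDeltaEvent {
  type: "content_block_delta";
  delta: { type: "text_delta"; text: string };
}
type StreamEvent = TextDeltaEvent | { type: string };

function isTextDelta(e: StreamEvent): e is TextDeltaEvent {
  return (
    e.type === "content_block_delta" &&
    "delta" in e &&
    (e as TextDeltaEvent).delta.type === "text_delta"
  );
}

function accumulateText(events: Iterable<StreamEvent>): string {
  let out = "";
  for (const e of events) {
    if (isTextDelta(e)) out += e.delta.text;
  }
  return out;
}
```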

AnthropicMeta Options

All options from LlmMeta except provider and model (which are auto-extracted from the response):

| Option | Type | Description |
| --- | --- | --- |
| operation | string | Operation type (default: "chat"). Standard values: "chat", "text_completion", "embeddings" |
| prompt | string \| Record<string, unknown>[] | Input prompt or messages array |
| output | string \| object | Output text (auto-extracted, rarely needed) |
| inputTokens | number | Input token count (auto-extracted) |
| outputTokens | number | Output token count (auto-extracted) |
| durationMs | number | Pre-measured duration in ms (fire-and-forget only) |
| extract | (result: unknown) => LlmResult | Custom result extractor |
| attributes | Record<string, string \| number \| boolean> | Custom span attributes |
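As an illustration, a custom extract callback might look like the sketch below. The LlmResult field names used here (output, inputTokens, outputTokens) mirror the options above but are an assumption; check the SDK Reference for the exact shape. The usage field names (input_tokens, output_tokens) follow the Anthropic Messages API response format:

```typescript
// Hypothetical custom extractor. The LlmResult field names are an
// assumption mirroring the AnthropicMeta options, not a confirmed API.
interface LlmResult {
  output?: string;
  inputTokens?: number;
  outputTokens?: number;
}

// Pull text and token usage from an Anthropic-style message result.
function myExtract(result: unknown): LlmResult {
  const msg = result as {
    content?: Array<{ type: string; text?: string }>;
    usage?: { input_tokens?: number; output_tokens?: number };
  };
  return {
    output: msg.content?.find((b) => b.type === "text")?.text,
    inputTokens: msg.usage?.input_tokens,
    outputTokens: msg.usage?.output_tokens,
  };
}
```

Passed as extract in the meta object, a callback like this would replace the built-in extraction for that call.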
