OpenAI SDK Monitoring Integration
Integrate Spanora with the OpenAI SDK using auto-extraction wrappers. Monitor ChatCompletion calls, token usage, costs, and tool calls automatically.
The `@spanora-ai/sdk/openai` subpath provides `trackOpenAI` and `trackOpenAIStream`, which auto-extract the model, token counts, and output from OpenAI ChatCompletion responses, so no manual `extractResult` callback is needed.
For tool-call responses (where `content` is null), the `tool_calls` array is JSON-stringified so tool calls remain visible in spans.
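The extraction behavior described above can be sketched in plain TypeScript. This is a simplified illustration, not the SDK's actual implementation; the `extractOutput` helper and the narrowed types are hypothetical:

```typescript
// Hypothetical sketch of the auto-extraction described above: pull the
// output text from a ChatCompletion-shaped object. Plain content is used
// when present; for tool-call responses (content is null), the tool_calls
// array is JSON-stringified so it remains visible in the span.
type ToolCall = {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
};
type ChatMessage = { content: string | null; tool_calls?: ToolCall[] };
type ChatCompletion = { choices: { message: ChatMessage }[] };

function extractOutput(completion: ChatCompletion): string {
  const message = completion.choices[0]?.message;
  if (message?.content != null) return message.content;
  // Tool-call responses have null content; stringify the calls instead.
  return JSON.stringify(message?.tool_calls ?? []);
}

// Text response
const textOutput = extractOutput({
  choices: [{ message: { content: "Hi there" } }],
}); // → "Hi there"

// Tool-call response (content is null): output is the stringified tool_calls
const toolOutput = extractOutput({
  choices: [
    {
      message: {
        content: null,
        tool_calls: [
          {
            id: "call_1",
            type: "function",
            function: { name: "get_weather", arguments: '{"city":"Paris"}' },
          },
        ],
      },
    },
  ],
});
```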
Installation

```shell
npm install @spanora-ai/sdk openai
```

Basic Usage
```typescript
import { init, track } from "@spanora-ai/sdk";
import { trackOpenAI } from "@spanora-ai/sdk/openai";
import OpenAI from "openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const client = new OpenAI();

const completion = await track({ agent: "my-agent" }, () =>
  trackOpenAI({ prompt: "Hello!" }, () =>
    client.chat.completions.create({
      model: "gpt-4o",
      max_tokens: 256,
      messages: [{ role: "user", content: "Hello!" }],
    }),
  ),
);
```

`track()` also accepts optional `userId`, `orgId`, and `agentSessionId` to attach user context, and LLM calls accept `operation` (default: `"chat"`; set to `"embeddings"` or `"text_completion"` when applicable). See the SDK Reference for all options.
Passing Messages as Input
`prompt` accepts a string or a messages array (auto-serialized to JSON):
```typescript
const params = {
  model: "gpt-4o",
  messages: [{ role: "user" as const, content: "Hello!" }],
};

const completion = await trackOpenAI({ prompt: params.messages }, () =>
  client.chat.completions.create(params),
);
```

Streaming
For streaming where the stream is collected in a single async scope, wrap the collection in `trackOpenAI`:
```typescript
import { trackOpenAI } from "@spanora-ai/sdk/openai";

const completion = await trackOpenAI({ prompt: "Hello!" }, async () => {
  const stream = client.chat.completions.stream({
    model: "gpt-4o",
    max_tokens: 256,
    messages: [{ role: "user", content: "Hello!" }],
  });
  for await (const chunk of stream) {
    // process tokens
  }
  return await stream.finalChatCompletion();
});
```

For cases where the stream isn't collected in a single async scope, use `trackOpenAIStream`:
```typescript
import { trackOpenAIStream } from "@spanora-ai/sdk/openai";

const endStream = trackOpenAIStream({ prompt: "Hello!" });
// ... accumulate stream ...
endStream(finalCompletion);
```

Call the returned function with the final ChatCompletion object once the stream completes. Pass an optional `Error` as a second argument if the stream failed.
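The accumulation step elided above can be sketched in plain TypeScript. This is a simplified illustration of folding streamed delta chunks into the final output text; the chunk shape mirrors OpenAI's streaming format, and `accumulateContent` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical sketch: fold streamed chunk deltas into the final text you
// would carry into the ChatCompletion object passed to endStream.
type StreamChunk = { choices: { delta: { content?: string } }[] };

function accumulateContent(chunks: StreamChunk[]): string {
  let text = "";
  for (const chunk of chunks) {
    // Each chunk carries a partial delta; missing content means no text.
    text += chunk.choices[0]?.delta.content ?? "";
  }
  return text;
}

const chunks: StreamChunk[] = [
  { choices: [{ delta: { content: "Hel" } }] },
  { choices: [{ delta: { content: "lo!" } }] },
  { choices: [{ delta: {} }] }, // final chunk often has an empty delta
];

const finalText = accumulateContent(chunks); // → "Hello!"
```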
OpenAIMeta Options
All options from `LlmMeta` are supported except `provider` and `model`, which are auto-extracted from the response:
| Option | Type | Description |
|---|---|---|
| `operation` | `string` | Operation type (default: `"chat"`). Standard values: `"chat"`, `"text_completion"`, `"embeddings"` |
| `prompt` | `string \| Record<string, unknown>[]` | Input prompt or messages array |
| `output` | `string \| object` | Output text (auto-extracted; rarely needed) |
| `inputTokens` | `number` | Input token count (auto-extracted) |
| `outputTokens` | `number` | Output token count (auto-extracted) |
| `durationMs` | `number` | Pre-measured duration in ms (fire-and-forget only) |
| `extract` | `(result: unknown) => LlmResult` | Custom result extractor |
| `attributes` | `Record<string, string \| number \| boolean>` | Custom span attributes |
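When auto-extraction doesn't fit (for example, a response wrapped by a proxy layer), the `extract` option lets you map the raw result yourself. A minimal sketch, assuming an `LlmResult` shape of `{ output, inputTokens, outputTokens }`; the exact field names are an assumption here, so check the SDK Reference, and the proxied response shape is invented for illustration:

```typescript
// Hypothetical: LlmResult field names assumed from the options table above.
type LlmResult = { output?: string; inputTokens?: number; outputTokens?: number };

// A custom extractor for a response wrapped by an (invented) proxy layer
// whose shape the built-in auto-extraction would not recognize.
function extractProxied(result: unknown): LlmResult {
  const r = result as { data: { text: string; usage: { in: number; out: number } } };
  return {
    output: r.data.text,
    inputTokens: r.data.usage.in,
    outputTokens: r.data.usage.out,
  };
}

const sample = { data: { text: "Hello!", usage: { in: 12, out: 3 } } };
const extracted = extractProxied(sample);
// → { output: "Hello!", inputTokens: 12, outputTokens: 3 }
```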
Next Steps
- Vercel AI SDK — Recommended integration with automatic instrumentation
- SDK Reference — Full API reference