Spanora

OpenAI SDK Monitoring Integration

Integrate Spanora with the OpenAI SDK using auto-extraction wrappers. Monitor ChatCompletion calls, token usage, costs, and tool calls automatically.

The @spanora-ai/sdk/openai subpath provides trackOpenAI and trackOpenAIStream, which auto-extract the model, token counts, and output from OpenAI ChatCompletion responses, so no manual extract callback is needed.

For tool-call responses (where content is null), the tool_calls array is JSON-stringified so tool calls are visible in spans.
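As an illustration of that behavior, the snippet below uses the standard OpenAI ChatCompletion message shape; the exact span rendering is up to Spanora, so treat this as a sketch:

```typescript
// A chat completion message where the model called a tool instead of
// answering in text: `content` is null and `tool_calls` carries the call.
const message = {
  role: "assistant",
  content: null as string | null,
  tool_calls: [
    {
      id: "call_abc123",
      type: "function",
      function: { name: "get_weather", arguments: '{"city":"Oslo"}' },
    },
  ],
};

// With no text content, the tool_calls array is JSON-stringified so the
// call is still visible as the span's output.
const spanOutput = message.content ?? JSON.stringify(message.tool_calls);

console.log(spanOutput);
```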

Installation

npm install @spanora-ai/sdk openai

Basic Usage

basic.ts
import { init, track } from "@spanora-ai/sdk";
import { trackOpenAI } from "@spanora-ai/sdk/openai";
import OpenAI from "openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const client = new OpenAI();

const completion = await track({ agent: "my-agent" }, () =>
  trackOpenAI({ prompt: "Hello!" }, () =>
    client.chat.completions.create({
      model: "gpt-4o",
      max_tokens: 256,
      messages: [{ role: "user", content: "Hello!" }],
    }),
  ),
);

track() also accepts optional userId, orgId, and agentSessionId to attach user context. LLM calls additionally accept operation (default "chat"); set it to "embeddings" or "text_completion" when applicable. See the SDK Reference for all options.
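For illustration, those extra fields might be combined like this. The option names come from this page and the SDK Reference; the identifier values are hypothetical placeholders:

```typescript
// Hypothetical identifiers -- substitute your own values.
const trackMeta = {
  agent: "my-agent",
  userId: "user-42",        // end user on whose behalf the call runs
  orgId: "org-7",           // that user's organization
  agentSessionId: "sess-1", // groups related runs into one session
};

const llmMeta = {
  prompt: "Vectorize this sentence.",
  operation: "embeddings", // override the "chat" default for embeddings calls
};

// These objects would be passed as the first argument of track() and
// trackOpenAI() respectively, e.g.:
//   track(trackMeta, () => trackOpenAI(llmMeta, () => ...));
```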

Passing Messages as Input

prompt accepts a string or a messages array (auto-serialized to JSON):

messages-input.ts
const params = {
  model: "gpt-4o",
  messages: [{ role: "user" as const, content: "Hello!" }],
};

const completion = await trackOpenAI({ prompt: params.messages }, () =>
  client.chat.completions.create(params),
);
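The auto-serialization mentioned above is plain JSON. A minimal sketch of what the stored prompt would look like:

```typescript
// The same messages array as above.
const messages = [{ role: "user" as const, content: "Hello!" }];

// A messages array passed as `prompt` is stored as its JSON string.
const serialized = JSON.stringify(messages);

console.log(serialized); // [{"role":"user","content":"Hello!"}]
```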

Streaming

When the entire stream is consumed within a single async scope, wrap the collection in trackOpenAI:

streaming.ts
import { trackOpenAI } from "@spanora-ai/sdk/openai";

const completion = await trackOpenAI({ prompt: "Hello!" }, async () => {
  const stream = client.chat.completions.stream({
    model: "gpt-4o",
    max_tokens: 256,
    messages: [{ role: "user", content: "Hello!" }],
  });
  for await (const chunk of stream) {
    // process tokens
  }
  return await stream.finalChatCompletion();
});

For cases where the stream isn't in a single async scope, use trackOpenAIStream:

manual-stream.ts
import { trackOpenAIStream } from "@spanora-ai/sdk/openai";

const endStream = trackOpenAIStream({ prompt: "Hello!" });
// ... accumulate stream ...
endStream(finalCompletion);

Call the returned function with the final ChatCompletion object when the stream completes. Pass an optional second Error argument if the stream failed.
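The accumulation step elided above can be sketched as follows. The chunks here are hard-coded stand-ins for what an OpenAI stream yields (the standard streaming delta shape), so this runs without the SDK:

```typescript
// Minimal shape of an OpenAI streaming chunk.
type Chunk = { choices: { delta: { content?: string } }[] };

// Stand-in chunks; in real code these come from iterating the stream.
const chunks: Chunk[] = [
  { choices: [{ delta: { content: "Hel" } }] },
  { choices: [{ delta: { content: "lo!" } }] },
  { choices: [{ delta: {} }] }, // final chunk often has an empty delta
];

// Accumulate the streamed deltas into the final text.
let text = "";
for (const chunk of chunks) {
  text += chunk.choices[0]?.delta.content ?? "";
}

// `text` would become choices[0].message.content on the final
// ChatCompletion object passed to endStream(finalCompletion).
console.log(text); // Hello!
```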

OpenAIMeta Options

All options from LlmMeta except provider and model (which are auto-extracted from the response):

| Option | Type | Description |
| --- | --- | --- |
| `operation` | `string` | Operation type (default: `"chat"`). Standard values: `"chat"`, `"text_completion"`, `"embeddings"` |
| `prompt` | `string \| Record<string, unknown>[]` | Input prompt or messages array |
| `output` | `string \| object` | Output text (auto-extracted, rarely needed) |
| `inputTokens` | `number` | Input token count (auto-extracted) |
| `outputTokens` | `number` | Output token count (auto-extracted) |
| `durationMs` | `number` | Pre-measured duration in ms (fire-and-forget only) |
| `extract` | `(result: unknown) => LlmResult` | Custom result extractor |
| `attributes` | `Record<string, string \| number \| boolean>` | Custom span attributes |
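As an example of the extract option, here is a hedged sketch of a custom extractor for an embeddings response, where the default chat-completion extraction would not apply. The LlmResult field names below (model, inputTokens, outputTokens, output) mirror the option names listed above but are assumptions about the exact shape; check the SDK Reference:

```typescript
// Hypothetical LlmResult shape, inferred from the option names above.
interface LlmResult {
  model?: string;
  inputTokens?: number;
  outputTokens?: number;
  output?: string;
}

// Custom extractor for an OpenAI embeddings response.
function extractEmbeddings(result: unknown): LlmResult {
  const r = result as {
    model: string;
    usage: { prompt_tokens: number; total_tokens: number };
  };
  return {
    model: r.model,
    inputTokens: r.usage.prompt_tokens,
    // Embeddings responses report no completion tokens.
    outputTokens: 0,
    output: "[embedding vector omitted]",
  };
}

// Exercise it against a mocked embeddings response.
const parsed = extractEmbeddings({
  model: "text-embedding-3-small",
  usage: { prompt_tokens: 8, total_tokens: 8 },
});

console.log(parsed.model, parsed.inputTokens);
```

The extractor would be passed as `extract` in the meta object, alongside `operation: "embeddings"`.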
