Vercel AI SDK Observability Integration

Integrate Spanora with the Vercel AI SDK for automatic LLM observability. Capture every AI call, token count, and cost with zero manual instrumentation.

The recommended integration uses the Vercel AI SDK with experimental_telemetry enabled. Spanora's init() sets up the OTEL exporter — all Vercel AI SDK spans are captured automatically with no manual instrumentation needed.

Installation

npm install @spanora-ai/sdk ai @ai-sdk/openai

Replace @ai-sdk/openai with your provider of choice (@ai-sdk/anthropic, @ai-sdk/google, etc.).

Basic Usage

Initialize the SDK and enable telemetry on your AI calls:

basic.ts
import { init } from "@spanora-ai/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await generateText({
  model: openai("gpt-4o"),
  system: "You are a helpful assistant.",
  prompt: "What is the capital of France?",
  experimental_telemetry: { isEnabled: true },
});

That's it — model, tokens, prompts, and duration are captured automatically.

With Agent Context

Use track() to attach an agent name and user context to Vercel AI SDK calls:

with-track.ts
import { init, track } from "@spanora-ai/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track(
  {
    agent: "support-agent",
    userId: "user-123",
    orgId: "org-456",
  },
  () =>
    generateText({
      model: openai("gpt-4o"),
      prompt: "Hello!",
      experimental_telemetry: { isEnabled: true },
    }),
);

With Tools

Use trackToolHandler() to instrument tool executions within the Vercel AI SDK:

with-tools.ts
import { init, track, trackToolHandler } from "@spanora-ai/sdk";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track({ agent: "weather-agent" }, () =>
  generateText({
    model: openai("gpt-4o"),
    prompt: "What's the weather in Paris?",
    experimental_telemetry: { isEnabled: true },
    tools: {
      getWeather: tool({
        description: "Get the weather for a city",
        parameters: z.object({ city: z.string() }),
        execute: trackToolHandler("getWeather", async ({ city }) => {
          return { temperature: 22, condition: "sunny", city };
        }),
      }),
    },
  }),
);

Streaming

Streaming works the same way — just use streamText() with telemetry enabled:

streaming.ts
import { init, track } from "@spanora-ai/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track({ agent: "support-agent" }, async () => {
  const stream = streamText({
    model: openai("gpt-4o"),
    prompt: "Write a haiku about observability.",
    experimental_telemetry: { isEnabled: true },
  });

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }

  return stream;
});

Telemetry Options

The experimental_telemetry option accepts the following:

Option      Type                                        Description
isEnabled   boolean                                     Enable/disable telemetry for this call
functionId  string                                      Custom function identifier for the span
metadata    Record<string, string | number | boolean>   Custom metadata attached to spans
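
As an illustration, these options can be combined into a single object and passed to any generateText() or streamText() call. The functionId and metadata values below are arbitrary placeholders, not names the SDK requires:

```typescript
// Illustrative experimental_telemetry value; identifiers and metadata are placeholders.
const telemetry = {
  isEnabled: true, // capture spans for this call
  functionId: "summarize-ticket", // hypothetical span identifier
  metadata: { env: "production", attempt: 2, cached: false }, // string | number | boolean values
};

// Pass it as `experimental_telemetry: telemetry` in the call options.
```

Metadata values are limited to strings, numbers, and booleans; nested objects are not part of the accepted type, so flatten structured context into individual keys.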
