Vercel AI SDK Observability Integration
Integrate Spanora with the Vercel AI SDK for automatic LLM observability. Capture every AI call, token count, and cost with zero manual instrumentation.
The recommended integration uses the Vercel AI SDK with experimental_telemetry enabled. Spanora's init() sets up the OTEL exporter — all Vercel AI SDK spans are captured automatically with no manual instrumentation needed.
Installation
```bash
npm install @spanora-ai/sdk ai @ai-sdk/openai
```

Replace @ai-sdk/openai with your provider of choice (@ai-sdk/anthropic, @ai-sdk/google, etc.).
Basic Usage
Initialize the SDK and enable telemetry on your AI calls:
```ts
import { init } from "@spanora-ai/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await generateText({
  model: openai("gpt-4o"),
  system: "You are a helpful assistant.",
  prompt: "What is the capital of France?",
  experimental_telemetry: { isEnabled: true },
});
```

That's it — model, tokens, prompts, and duration are captured automatically.
With Agent Context
Use track() to attach agent name and user context to Vercel AI SDK calls:
```ts
import { init, track } from "@spanora-ai/sdk";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track(
  {
    agent: "support-agent",
    userId: "user-123",
    orgId: "org-456",
  },
  () =>
    generateText({
      model: openai("gpt-4o"),
      prompt: "Hello!",
      experimental_telemetry: { isEnabled: true },
    }),
);
```

With Tools
Use trackToolHandler() to instrument tool executions within Vercel AI SDK:
```ts
import { init, track, trackToolHandler } from "@spanora-ai/sdk";
import { generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track({ agent: "weather-agent" }, () =>
  generateText({
    model: openai("gpt-4o"),
    prompt: "What's the weather in Paris?",
    experimental_telemetry: { isEnabled: true },
    tools: {
      getWeather: tool({
        description: "Get the weather for a city",
        parameters: z.object({ city: z.string() }),
        execute: trackToolHandler("getWeather", async ({ city }) => {
          return { temperature: 22, condition: "sunny", city };
        }),
      }),
    },
  }),
);
```

Streaming
Streaming works the same way — just use streamText() with telemetry enabled:
```ts
import { init, track } from "@spanora-ai/sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

init({ apiKey: process.env.SPANORA_API_KEY });

const result = await track({ agent: "support-agent" }, async () => {
  const stream = streamText({
    model: openai("gpt-4o"),
    prompt: "Write a haiku about observability.",
    experimental_telemetry: { isEnabled: true },
  });

  for await (const chunk of stream.textStream) {
    process.stdout.write(chunk);
  }

  return stream;
});
```

Telemetry Options
The experimental_telemetry option accepts the following:
| Option | Type | Description |
|---|---|---|
| `isEnabled` | `boolean` | Enable or disable telemetry for this call |
| `functionId` | `string` | Custom function identifier for the span |
| `metadata` | `Record<string, string \| number \| boolean>` | Custom metadata attached to spans |
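For illustration, the options above can be combined into a single object and passed as `experimental_telemetry`. This is a sketch only — the `functionId` and `metadata` values below are hypothetical, not part of the Spanora or Vercel AI SDK APIs:

```typescript
// Sketch: a telemetry options object matching the table above.
// The functionId and metadata values are made-up examples.
const telemetry = {
  isEnabled: true,
  functionId: "support-agent.answer",
  metadata: { ticketId: "T-123", retryCount: 0, cached: false } as Record<
    string,
    string | number | boolean
  >,
};

// Pass it to any Vercel AI SDK call, e.g.:
// await generateText({ model, prompt, experimental_telemetry: telemetry });
console.log(telemetry.functionId);
```

Metadata values must be strings, numbers, or booleans; nested objects are not part of the accepted type.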
Next Steps
- SDK Reference — Full API reference for `track()`, `trackToolHandler()`, and more
- OTEL Attributes — Semantic attribute conventions