Any OpenTelemetry Provider — Universal Integration
Send traces from any OpenTelemetry-compatible framework, language, or service to Spanora — no SDK required. Works with LlamaIndex, CrewAI, Spring AI, and more.
Spanora accepts standard OTLP HTTP traces from any OpenTelemetry-compatible source. If your framework, language, or service can export OTEL traces, it works with Spanora out of the box.
This means you can use LangChain, CrewAI, AutoGen, LlamaIndex, Spring AI, or any other framework in any language — as long as it exports OTEL data, Spanora will ingest and visualize it.
Endpoint
Point your OTEL exporter to:
```
https://spanora.ai/api/v1/traces
```

Authentication
Include your API key as a Bearer token in the Authorization header:
```
Authorization: Bearer <your-spanora-api-key>
```

Get your API key from the dashboard.
Protocol
| Setting | Value |
|---|---|
| Protocol | OTLP HTTP (JSON and Protobuf) |
| Endpoint | https://spanora.ai/api/v1/traces |
| Method | POST |
| Content-Type | application/json or application/x-protobuf |
| Authentication | Authorization: Bearer <api-key> |
Both OTLP JSON and OTLP Protobuf content types are supported. Most OTEL exporters default to Protobuf, which works out of the box.
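To see what actually travels over the wire, here is a minimal sketch that builds a one-span OTLP/JSON payload and POSTs it using only the Python standard library. The span contents (service name, span name, timings) are illustrative placeholders; in practice an SDK exporter assembles all of this for you.

```python
import json
import secrets
import time
import urllib.request

SPANORA_ENDPOINT = "https://spanora.ai/api/v1/traces"


def build_otlp_payload(span_name: str) -> dict:
    """Build a minimal OTLP/JSON trace payload containing one span."""
    now_ns = time.time_ns()
    return {
        "resourceSpans": [{
            "resource": {"attributes": [
                {"key": "service.name", "value": {"stringValue": "my-ai-app"}},
            ]},
            "scopeSpans": [{
                "scope": {"name": "manual-example"},
                "spans": [{
                    # IDs are hex-encoded: 16 random bytes for traceId, 8 for spanId
                    "traceId": secrets.token_hex(16),
                    "spanId": secrets.token_hex(8),
                    "name": span_name,
                    "kind": 1,  # SPAN_KIND_INTERNAL
                    "startTimeUnixNano": str(now_ns),
                    "endTimeUnixNano": str(now_ns + 1_000_000),
                }],
            }],
        }],
    }


def send(payload: dict, api_key: str) -> None:
    """POST the payload with the Content-Type and Authorization headers above."""
    req = urllib.request.Request(
        SPANORA_ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on a non-2xx response


payload = build_otlp_payload("llm.call")
# send(payload, "<your-spanora-api-key>")  # uncomment to actually export
```

This is only meant to show the request shape; the SDK examples below are the recommended way to export.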
Python Example
Any Python application using the OpenTelemetry SDK can send traces to Spanora:
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource

# Identify your service so traces are grouped correctly
resource = Resource.create({
    "service.name": "my-ai-app",
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(
    endpoint="https://spanora.ai/api/v1/traces",
    headers={"Authorization": "Bearer <your-spanora-api-key>"},
)))
trace.set_tracer_provider(provider)
```

Node.js / TypeScript Example
Use the standard @opentelemetry/exporter-trace-otlp-http package:
```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";

const exporter = new OTLPTraceExporter({
  url: "https://spanora.ai/api/v1/traces",
  headers: {
    Authorization: "Bearer <your-spanora-api-key>",
  },
});

const sdk = new NodeSDK({
  spanProcessors: [new BatchSpanProcessor(exporter)],
});

sdk.start();
```

Environment Variables
Most OTEL SDKs support configuration via environment variables:
```
OTEL_EXPORTER_OTLP_ENDPOINT=https://spanora.ai
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://spanora.ai/api/v1/traces
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-spanora-api-key>"
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf  # or http/json, both are supported
```

Recommended Resource Attributes
Set these resource attributes so Spanora can group and filter your traces:
| Attribute | Description |
|---|---|
| service.name | Your application or service name |
| gen_ai.agent.name | Agent name shown in the trace list |
| spanora.user.id | User ID for per-user cost and usage tracking |
| spanora.org.id | Organization ID for multi-tenant filtering |
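In the Python SDK these go straight into Resource.create (as in the example above); in raw OTLP/JSON they become key/value pairs on the resource. A small stdlib sketch of that conversion, with purely illustrative attribute values:

```python
def to_otlp_attributes(attrs: dict) -> list:
    """Convert a flat dict of string attributes to OTLP/JSON keyValue form."""
    return [{"key": k, "value": {"stringValue": str(v)}} for k, v in attrs.items()]


# Illustrative values; substitute your own identifiers
resource_attributes = to_otlp_attributes({
    "service.name": "my-ai-app",
    "gen_ai.agent.name": "support-agent",
    "spanora.user.id": "user-123",
    "spanora.org.id": "org-456",
})
```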
What Gets Captured
Spanora automatically recognizes spans following these attribute conventions:
- GenAI Semantic Conventions (gen_ai.*) — the OTEL standard for AI/LLM observability
- OpenInference — used by LlamaIndex, Phoenix, and others
- Vercel AI SDK attributes
- Spanora attributes (spanora.*) for cost and outcome metadata
If your framework emits gen_ai.* attributes on its spans, Spanora will extract model, provider, tokens, prompts, and outputs automatically.
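Concretely, the attribute names involved come from the OTEL GenAI semantic conventions (gen_ai.system, gen_ai.request.model, gen_ai.usage.input_tokens, gen_ai.usage.output_tokens). The extractor below is a hypothetical illustration of the kind of mapping a backend performs, not Spanora's actual code, and the attribute values are placeholders:

```python
# Span attributes following the OTEL GenAI semantic conventions
span_attributes = {
    "gen_ai.system": "anthropic",           # provider
    "gen_ai.request.model": "example-model",  # placeholder model name
    "gen_ai.usage.input_tokens": 1200,
    "gen_ai.usage.output_tokens": 350,
}


def summarize_llm_span(attrs: dict) -> dict:
    """Illustrative extraction of the fields a tracing backend reads."""
    return {
        "provider": attrs.get("gen_ai.system"),
        "model": attrs.get("gen_ai.request.model"),
        "total_tokens": (attrs.get("gen_ai.usage.input_tokens", 0)
                         + attrs.get("gen_ai.usage.output_tokens", 0)),
    }
```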
Compatible Frameworks
Any framework that exports OTEL traces works with Spanora. Some popular options:
| Framework / Library | Language | OTEL Support |
|---|---|---|
| LangChain | Python | Via opentelemetry-instrumentation-langchain |
| LlamaIndex | Python | Built-in OTEL export |
| CrewAI | Python | Via OpenTelemetry integration |
| Spring AI | Java | Via Micrometer + OTEL bridge |
| Vercel AI SDK | TypeScript | Built-in experimental_telemetry |
| OpenAI SDK | Any | Via OTEL wrappers |
| Anthropic SDK | Any | Via OTEL wrappers |
Troubleshooting
Traces not appearing?
- Verify your API key is correct and prefixed with Bearer in the Authorization header
- Ensure the endpoint URL is exactly https://spanora.ai/api/v1/traces
- Both OTLP HTTP JSON and Protobuf are supported, but gRPC is not — make sure your exporter uses HTTP
- Make sure BatchSpanProcessor is flushed before your process exits
Missing cost data?
- Cost is calculated by Spanora from model name + token counts
- Ensure your spans include gen_ai.request.model and token usage attributes
- If the model is not in Spanora's pricing database, cost will show as null
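The pricing arithmetic itself is simple. This sketch mirrors the behavior described above (known model: price per token; unknown model: None, which surfaces as null); the pricing table and rates are invented for illustration and are not Spanora's actual pricing data:

```python
# Hypothetical per-million-token prices in USD; not Spanora's real pricing data
PRICING = {
    "example-model": {"input": 3.00, "output": 15.00},
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int):
    """Return estimated USD cost, or None when the model is unknown."""
    rates = PRICING.get(model)
    if rates is None:
        return None  # shown as null in the UI
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000
```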
Next Steps
- LangChain (Python) — Auto-instrumentation for LangChain and LangGraph
- Vercel AI SDK — Native OTEL integration for Vercel AI
- OTEL Attributes — Full list of attributes recognized by Spanora