Spanora

Any OpenTelemetry Provider — Universal Integration

Send traces from any OpenTelemetry-compatible framework, language, or service to Spanora — no SDK required. Works with LlamaIndex, CrewAI, Spring AI, and more.

Spanora accepts standard OTLP HTTP traces from any OpenTelemetry-compatible source. If your framework, language, or service can export OTEL traces, it works with Spanora out of the box.

This means you can use LangChain, CrewAI, AutoGen, LlamaIndex, Spring AI, or any other framework in any language — as long as it exports OTEL data, Spanora will ingest and visualize it.

Endpoint

Point your OTEL exporter to:

https://spanora.ai/api/v1/traces

Authentication

Include your API key as a Bearer token in the Authorization header:

Authorization: Bearer <your-spanora-api-key>

Get your API key from the dashboard.

Protocol

| Setting | Value |
| --- | --- |
| Protocol | OTLP HTTP (JSON and Protobuf) |
| Endpoint | https://spanora.ai/api/v1/traces |
| Method | POST |
| Content-Type | application/json or application/x-protobuf |
| Authentication | Authorization: Bearer <api-key> |

Both OTLP JSON and OTLP Protobuf content types are supported. Most OTEL exporters default to Protobuf, which works out of the box.
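For reference, this is roughly what a minimal OTLP/JSON request body looks like. The trace ID, span ID, timestamps, span name, and model value below are placeholders; the structure follows the OTLP specification's JSON encoding.

```python
import json

# Minimal OTLP/JSON trace payload (one resource, one scope, one span).
# All IDs, names, and timestamps are placeholder values.
payload = {
    "resourceSpans": [
        {
            "resource": {
                "attributes": [
                    {"key": "service.name", "value": {"stringValue": "my-ai-app"}}
                ]
            },
            "scopeSpans": [
                {
                    "scope": {"name": "manual-instrumentation"},
                    "spans": [
                        {
                            "traceId": "5b8efff798038103d269b633813fc60c",
                            "spanId": "eee19b7ec3c1b174",
                            "name": "llm-call",
                            "kind": 1,  # SPAN_KIND_INTERNAL
                            "startTimeUnixNano": "1700000000000000000",
                            "endTimeUnixNano": "1700000001000000000",
                            "attributes": [
                                {
                                    "key": "gen_ai.request.model",
                                    "value": {"stringValue": "gpt-4o"},
                                }
                            ],
                        }
                    ],
                }
            ],
        }
    ]
}

body = json.dumps(payload)
```

A body like this would be POSTed to the endpoint with Content-Type: application/json and the Authorization header shown above; in practice your OTEL exporter builds this payload for you.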

Python Example

Any Python application using the OpenTelemetry SDK can send traces to Spanora:

otel_setup.py

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource

resource = Resource.create({
    "service.name": "my-ai-app",
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(
    endpoint="https://spanora.ai/api/v1/traces",
    headers={"Authorization": "Bearer <your-spanora-api-key>"},
)))
trace.set_tracer_provider(provider)
```

Node.js / TypeScript Example

Use the standard @opentelemetry/exporter-trace-otlp-http package:

otel-setup.ts

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";

const exporter = new OTLPTraceExporter({
  url: "https://spanora.ai/api/v1/traces",
  headers: {
    Authorization: "Bearer <your-spanora-api-key>",
  },
});

const sdk = new NodeSDK({
  spanProcessors: [new BatchSpanProcessor(exporter)],
});

sdk.start();
```

Environment Variables

Most OTEL SDKs support configuration via environment variables:

```bash
# Prefer the signal-specific variable: it is used verbatim, while the
# generic OTEL_EXPORTER_OTLP_ENDPOINT gets the signal path /v1/traces
# appended automatically, which would miss the /api prefix.
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://spanora.ai/api/v1/traces
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <your-spanora-api-key>"
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf  # or http/json — both are supported
```

Set these resource attributes so Spanora can group and filter your traces:

| Attribute | Description |
| --- | --- |
| service.name | Your application or service name |
| gen_ai.agent.name | Agent name shown in the trace list |
| spanora.user.id | User ID for per-user cost and usage tracking |
| spanora.org.id | Organization ID for multi-tenant filtering |
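These attributes can also be supplied through the standard OTEL_RESOURCE_ATTRIBUTES environment variable, which takes comma-separated key=value pairs. A small sketch of building that value (the agent name and IDs are placeholders):

```python
# Resource attributes for the OTEL_RESOURCE_ATTRIBUTES env var.
# All names and IDs below are illustrative placeholders.
attrs = {
    "service.name": "my-ai-app",
    "gen_ai.agent.name": "support-agent",
    "spanora.user.id": "user-123",
    "spanora.org.id": "org-456",
}

# OTEL_RESOURCE_ATTRIBUTES expects comma-separated key=value pairs.
resource_attrs = ",".join(f"{key}={value}" for key, value in attrs.items())
# e.g. export OTEL_RESOURCE_ATTRIBUTES="service.name=my-ai-app,gen_ai.agent.name=support-agent,..."
```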

What Gets Captured

Spanora automatically recognizes spans following these attribute conventions:

  • GenAI Semantic Conventions (gen_ai.*) — the OTEL standard for AI/LLM observability
  • OpenInference — used by LlamaIndex, Phoenix, and others
  • Vercel AI SDK attributes
  • Spanora attributes (spanora.*) for cost and outcome metadata

If your framework emits gen_ai.* attributes on its spans, Spanora will extract model, provider, tokens, prompts, and outputs automatically.
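For illustration only (this is not Spanora's internal code), that extraction amounts to reading well-known gen_ai.* keys off each span's attributes; the values below are placeholders:

```python
# Example span attributes following the OTEL GenAI semantic conventions.
span_attributes = {
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4o",
    "gen_ai.usage.input_tokens": 512,
    "gen_ai.usage.output_tokens": 128,
}

# A backend reads the conventional keys to recover model, provider,
# and token usage without any framework-specific logic.
extracted = {
    "provider": span_attributes.get("gen_ai.system"),
    "model": span_attributes.get("gen_ai.request.model"),
    "input_tokens": span_attributes.get("gen_ai.usage.input_tokens"),
    "output_tokens": span_attributes.get("gen_ai.usage.output_tokens"),
}
```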

Compatible Frameworks

Any framework that exports OTEL traces works with Spanora. Some popular options:

| Framework / Library | Language | OTEL Support |
| --- | --- | --- |
| LangChain | Python | Via opentelemetry-instrumentation-langchain |
| LlamaIndex | Python | Built-in OTEL export |
| CrewAI | Python | Via OpenTelemetry integration |
| Spring AI | Java | Via Micrometer + OTEL bridge |
| Vercel AI SDK | TypeScript | Built-in experimental_telemetry |
| OpenAI SDK | Any | Via OTEL wrappers |
| Anthropic SDK | Any | Via OTEL wrappers |

Troubleshooting

Traces not appearing?

  • Verify your API key is correct and prefixed with Bearer in the Authorization header
  • Ensure the endpoint URL is exactly https://spanora.ai/api/v1/traces
  • Both OTLP HTTP JSON and Protobuf are supported, but gRPC is not — make sure your exporter uses HTTP
  • Make sure BatchSpanProcessor is flushed before your process exits
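The last point matters because BatchSpanProcessor buffers finished spans in memory and exports them periodically; an abrupt exit discards whatever is still queued. A toy stand-in (not the real OTEL class) illustrates the failure mode:

```python
class ToyBatchProcessor:
    """Toy stand-in for BatchSpanProcessor: finished spans sit in an
    in-memory queue until flushed, so exiting without a flush loses them."""

    def __init__(self):
        self.queue = []
        self.exported = []

    def on_end(self, span):
        self.queue.append(span)  # buffered, NOT yet sent over the network

    def force_flush(self):
        self.exported.extend(self.queue)  # simulate the HTTP export
        self.queue.clear()


processor = ToyBatchProcessor()
processor.on_end("span-1")
processor.on_end("span-2")

# Without this flush, both spans would be lost when the process exits.
processor.force_flush()
```

In a real application, flush the SDK before exit: in Python, `atexit.register(provider.shutdown)` (shutdown flushes registered processors); in Node.js, `await sdk.shutdown()`.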

Missing cost data?

  • Cost is calculated by Spanora from model name + token counts
  • Ensure your spans include gen_ai.request.model and token usage attributes
  • If the model is not in Spanora's pricing database, cost will show as null
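As a sketch of that calculation (the per-token prices below are made-up placeholders, not Spanora's pricing database):

```python
# Hypothetical per-million-token prices in USD; placeholder values only.
PRICING = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}


def estimate_cost(model, input_tokens, output_tokens):
    """Return estimated cost in USD, or None when the model is unknown
    (mirroring the null cost described above)."""
    prices = PRICING.get(model)
    if prices is None:
        return None
    return (input_tokens * prices["input"]
            + output_tokens * prices["output"]) / 1_000_000
```

This is why both gen_ai.request.model and the token usage attributes must be present on a span: without either input, no cost can be derived.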
