LangChain Python Observability Integration
Integrate Spanora with LangChain and LangGraph Python applications using OpenTelemetry auto-instrumentation. Monitor LLM calls, chains, and tools automatically.
Auto-instrument your LangChain / LangGraph Python applications using opentelemetry-instrumentation-langchain. Every LLM call, chain invocation, and tool execution is captured automatically.
No Spanora SDK required — just standard OpenTelemetry.
Installation

```shell
pip install langchain langchain-openai \
  opentelemetry-sdk \
  opentelemetry-exporter-otlp \
  opentelemetry-instrumentation-langchain
```

Setup
```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.instrumentation.langchain import LangchainInstrumentor

resource = Resource.create({
    "service.name": "my-langchain-app",
    "gen_ai.agent.name": "my-agent",
})

provider = TracerProvider(resource=resource)
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(
    endpoint="https://spanora.ai/api/v1/traces",
    headers={"Authorization": "Bearer <your-spanora-api-key>"},
)))
trace.set_tracer_provider(provider)

# Auto-instrument LangChain (call AFTER setting the provider)
LangchainInstrumentor().instrument()
```

Basic Usage
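If you prefer environment-based configuration, the standard OpenTelemetry SDK environment variables work too — construct `OTLPSpanExporter()` with no arguments and it reads them. Note that header values in the env var must be URL-encoded per the OTLP spec, so the space after `Bearer` becomes `%20`; the endpoint and key below are the same placeholders as above:

```shell
export OTEL_SERVICE_NAME="my-langchain-app"
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://spanora.ai/api/v1/traces"
# Header values are URL-encoded in this env var: "Bearer " -> "Bearer%20"
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="Authorization=Bearer%20<your-spanora-api-key>"
```

The signal-specific `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` is used as the full URL, so no path is appended to it.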
Use LangChain as normal — all calls are traced automatically:
```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("human", "{input}"),
])

chain = prompt | llm
result = chain.invoke({"input": "What is the capital of France?"})
```

Adding User and Org Context
You can pass context via OpenTelemetry resource attributes or via LangChain's metadata config. Resource attributes are attached to all spans from your service:
```python
resource = Resource.create({
    "service.name": "my-langchain-app",
    "spanora.user.id": "usr_abc123",
    "spanora.org.id": "org_acme",
})
```

Alternatively, pass metadata per invocation via LangChain's config. Note that how metadata appears on spans depends on the instrumentor version:
```python
result = chain.invoke(
    {"input": "How do I reset my password?"},
    config={
        "metadata": {
            "user_id": "usr_abc123",
            "org_id": "org_acme",
        }
    },
)
```

Agent with Tools
```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool
from langchain.agents import create_agent

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"72F and sunny in {city}"

llm = ChatOpenAI(model="gpt-4o")
agent = create_agent(llm, [get_weather])

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What's the weather in Paris?"}]},
    config={"metadata": {"user_id": "usr_abc123", "org_id": "org_acme"}},
)
```

What Gets Captured
The instrumentor automatically captures:
- LLM spans: model, provider, prompts, completions, token counts
- Tool spans: tool name, input arguments, output, execution status
- Chain spans: full execution hierarchy showing how components are nested
- Cost: calculated by Spanora from model + token counts when available
- Agent name: set via the `gen_ai.agent.name` resource attribute
- User/Org: set via the `spanora.user.id` / `spanora.org.id` resource attributes
Troubleshooting
Traces not appearing?
- Verify your API key is correct and prefixed with `Bearer ` in the auth header
- Ensure the `TracerProvider` is set before calling `LangchainInstrumentor().instrument()`
- Check that the endpoint URL ends with `/api/v1/traces`
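Another common cause in short-lived scripts is exiting before the `BatchSpanProcessor` drains its queue. Flushing (or shutting down) the provider before exit avoids dropped spans — a sketch with an in-memory exporter standing in for the OTLP one:

```python
import atexit

from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter

exporter = InMemorySpanExporter()
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))

# shutdown() flushes buffered spans; registering it at exit covers
# scripts that end before the batch export interval elapses.
atexit.register(provider.shutdown)

with provider.get_tracer("example").start_as_current_span("demo"):
    pass

provider.force_flush()  # or rely on the atexit hook above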
Missing prompt content?
- The instrumentor captures prompts by default. If content is missing, ensure you haven't set `TRACELOOP_TRACE_CONTENT=false`.
Missing tokens/cost?
- Token counts come from the LLM provider response. Streaming may not report tokens with all providers (with `ChatOpenAI`, setting `stream_usage=True` requests usage metadata on streamed responses).
- Cost is calculated by Spanora from model + token counts. If the model is not in Spanora's pricing database, cost will be null.
Next Steps
- Any OTEL Provider — Send traces from any OTEL-compatible framework or service
- OTEL Attributes — Semantic attribute conventions recognized by Spanora