# OpenTelemetry Integration
Capture detailed spans from LLM calls and send them to Teckel.
**Prerequisites:** This guide extends the TypeScript SDK Reference. Read that first for basic setup.
## Span Capture Options
### Manual Spans (No Dependencies)
Build span objects yourself. Works with any LLM SDK.
- **You control:** Everything; you call `tracer.trace()` with your data
- **Can attach:** All fields, including `documents` and `systemPrompt`
- **Best for:** Simple setups, any LLM provider
### TeckelSpanCollector (Recommended)
Collects OTel spans in memory, then you attach them to your trace. Works with any OTel instrumentation.
- **You control:** When to send, what to attach
- **Can attach:** All fields, including `documents` and `systemPrompt`
- **Extracts:** Standard OTel GenAI attributes (`gen_ai.request.model`, `gen_ai.usage.*`)
- **Best for:** Any OTel-instrumented library, RAG document tracing
### TeckelSpanProcessor (Zero Config, AI SDK Only)
OTel SpanProcessor that auto-sends traces when AI SDK root spans complete.
- **You control:** Nothing after setup; fully automatic
- **Auto-extracts:** `query`, `response`, `systemPrompt`, `model`, `tokens`, `latencyMs`
- **From your config:** `sessionId`, `userId`, `agentName` (set in `experimental_telemetry`)
- **Cannot attach:** `documents` (not in the OTel spec)
- **Best for:** AI SDK apps without RAG
## Manual Spans (No OTel Required)
Create spans yourself; this works with any LLM SDK:
```typescript
import { TeckelTracer } from 'teckel-ai';
import OpenAI from 'openai';

const tracer = new TeckelTracer({ apiKey: process.env.TECKEL_API_KEY });
const openai = new OpenAI();

async function tracedCompletion(query: string) {
  const startTime = new Date().toISOString();

  const completion = await openai.chat.completions.create({
    model: 'gpt-5',
    messages: [{ role: 'user', content: query }]
  });

  const endTime = new Date().toISOString();

  tracer.trace({
    query,
    response: completion.choices[0].message.content,
    model: 'gpt-5',
    tokens: {
      prompt: completion.usage?.prompt_tokens ?? 0,
      completion: completion.usage?.completion_tokens ?? 0,
      total: completion.usage?.total_tokens ?? 0
    },
    spans: [{
      name: 'openai.chat.completions',
      type: 'llm_call',
      startedAt: startTime,
      endedAt: endTime,
      model: 'gpt-5',
      promptTokens: completion.usage?.prompt_tokens,
      completionTokens: completion.usage?.completion_tokens,
      status: 'completed'
    }]
  });

  return completion;
}
```
No OTel packages required: just construct the span data manually.
## With Existing OTel Instrumentation
If you already have OTel set up, add `TeckelSpanProcessor` to capture spans:
```typescript
import { TeckelSpanProcessor } from 'teckel-ai/otel';
import { NodeSDK } from '@opentelemetry/sdk-node';

new NodeSDK({
  spanProcessors: [
    new TeckelSpanProcessor({
      apiKey: process.env.TECKEL_API_KEY
    })
  ]
}).start();

// Your existing instrumented code - spans auto-captured
```
Works with any OTel instrumentation (OpenAI, LangChain, etc.).
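Because the processor is a standard OTel span processor, it can also run alongside whatever exporter you already ship spans to. A minimal sketch, assuming the OTLP HTTP exporter (`@opentelemetry/exporter-trace-otlp-http`) as a stand-in for your existing backend:

```typescript
import { NodeSDK } from '@opentelemetry/sdk-node';
import { BatchSpanProcessor } from '@opentelemetry/sdk-trace-base';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';
import { TeckelSpanProcessor } from 'teckel-ai/otel';

new NodeSDK({
  spanProcessors: [
    // Your existing pipeline keeps working - OTLP here is just an example backend
    new BatchSpanProcessor(new OTLPTraceExporter()),
    // Teckel sees the same spans without disturbing the pipeline above
    new TeckelSpanProcessor({ apiKey: process.env.TECKEL_API_KEY })
  ]
}).start();
```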
## With Vercel AI SDK
The examples below use Vercel AI SDK's `experimental_telemetry`, but the patterns apply to any OTel-instrumented library.
### Installation
```bash
npm install teckel-ai @opentelemetry/api @opentelemetry/sdk-trace-base
```
The OTel packages are peer dependencies, only needed when using `teckel-ai/otel`.
### Quick Start
```typescript
import { TeckelTracer } from 'teckel-ai';
import { TeckelSpanCollector } from 'teckel-ai/otel';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const tracer = new TeckelTracer({ apiKey: process.env.TECKEL_API_KEY });
const spanCollector = new TeckelSpanCollector();

const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'What is RAG?',
  experimental_telemetry: {
    isEnabled: true,
    tracer: spanCollector.getTracer()
  }
});

tracer.trace({
  query: 'What is RAG?',
  response: result.text,
  spans: spanCollector.getSpans() // Tokens auto-calculated
});

await spanCollector.shutdown();
await tracer.flush();
```
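If the generation can throw, it is worth guaranteeing that the collector is still shut down. A sketch of the same flow with cleanup moved into `finally` (no calls beyond those shown above):

```typescript
const spanCollector = new TeckelSpanCollector();

try {
  const result = await generateText({
    model: openai('gpt-5'),
    prompt: 'What is RAG?',
    experimental_telemetry: {
      isEnabled: true,
      tracer: spanCollector.getTracer()
    }
  });

  tracer.trace({
    query: 'What is RAG?',
    response: result.text,
    spans: spanCollector.getSpans()
  });
} finally {
  // Runs even if generateText or trace() throws
  await spanCollector.shutdown();
  await tracer.flush();
}
```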
## TeckelSpanCollector vs TeckelSpanProcessor
### TeckelSpanCollector (Any OTel)
Collects spans in memory from any OTel instrumentation. You call `tracer.trace()` when ready, with full control over what gets sent.
```typescript
const spanCollector = new TeckelSpanCollector();

const result = await generateText({
  model: openai('gpt-5'),
  prompt: userQuery,
  tools: { searchKnowledgeBase },
  experimental_telemetry: {
    isEnabled: true,
    tracer: spanCollector.getTracer()
  }
});

// You control the trace - attach documents, systemPrompt, etc.
tracer.trace({
  query: userQuery,
  response: result.text,
  spans: spanCollector.getSpans(),
  documents: retrievedDocs,
  systemPrompt: SYSTEM_PROMPT,
  sessionId,
  userId
});

await spanCollector.shutdown();
```
### TeckelSpanProcessor (AI SDK Only)
Auto-sends traces when AI SDK root spans complete. No `tracer.trace()` call needed.
```typescript
import { TeckelSpanProcessor } from 'teckel-ai/otel';
import { NodeSDK } from '@opentelemetry/sdk-node';

// One-time setup in instrumentation.ts
new NodeSDK({
  spanProcessors: [
    new TeckelSpanProcessor({
      apiKey: process.env.TECKEL_API_KEY
    })
  ]
}).start();

// Traces sent automatically - no tracer.trace() call
const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'Hello',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-chatbot', // → agentName
    metadata: { userId, sessionId } // → extracted automatically
  }
});
```
**Auto-extracted fields:** `query`, `response`, `systemPrompt`, `model`, `tokens`, `latencyMs`, `sessionId`, `userId`, `agentName`
**Setting `sessionId`, `userId`, `agentName` (AI SDK):** You must set these in your telemetry config for them to be captured:
- `functionId` → `agentName` (defaults to `"unknown"` if not set)
- `metadata.sessionId` → `sessionId`
- `metadata.userId` → `userId`
**Cannot attach:** `documents` (use `TeckelSpanCollector` if you need RAG tracking)
**Feedback:** User feedback (thumbs up/down, ratings) happens after the trace is sent and isn't part of OTel spans. Use `tracer.feedback()` separately from your UI handler.
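A rough sketch of what such a handler might look like; the exact `feedback()` payload shape is defined in the TypeScript SDK Reference, and the field names below (`traceId`, `rating`) are illustrative assumptions:

```typescript
// Hypothetical UI handler - field names are assumptions, check the SDK Reference
async function onThumbsUp(traceId: string) {
  await tracer.feedback({
    traceId,      // assumed: ties the feedback back to the sent trace
    rating: 'up'  // assumed: thumbs up/down value
  });
}
```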
## Token & Cost Aggregation
The SDK automatically sums `promptTokens` and `completionTokens` from all spans. Cost is calculated server-side from token counts.
```typescript
// Tokens and cost calculated automatically - no manual work needed
tracer.trace({
  query: userQuery,
  response: result.text,
  spans: spanCollector.getSpans()
});
```
**Why this matters:** AI SDK's `usage` object may only reflect the last step in multi-turn flows. Span aggregation captures tokens from all LLM calls.
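For instance, a tool-using run that makes two LLM calls yields two `llm_call` spans, and the trace totals come from summing both. A sketch with made-up numbers and timestamps:

```typescript
tracer.trace({
  query: userQuery,
  response: result.text,
  spans: [
    // Step 1: model decides to call a tool
    { name: 'ai.generateText.doGenerate', type: 'llm_call',
      startedAt: '2025-01-01T00:00:00.000Z', endedAt: '2025-01-01T00:00:01.200Z',
      promptTokens: 300, completionTokens: 40, status: 'completed' },
    // Step 2: model answers using the tool result
    { name: 'ai.generateText.doGenerate', type: 'llm_call',
      startedAt: '2025-01-01T00:00:01.500Z', endedAt: '2025-01-01T00:00:03.000Z',
      promptTokens: 450, completionTokens: 120, status: 'completed' }
  ]
  // Aggregated automatically: prompt 750, completion 160
});
```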
**Override when needed:**
```typescript
tracer.trace({
  spans,
  tokens: { prompt: 500, completion: 200, total: 700 }, // Your values used
  costUsd: 0.0042 // Optional: provide your own cost
});
```
You can also provide `costUsd` on individual spans for a detailed breakdown.
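A sketch of per-span cost, reusing the span fields from the manual example above (the token counts, timestamps, and dollar value are placeholders):

```typescript
tracer.trace({
  query: userQuery,
  response: result.text,
  spans: [
    { name: 'openai.chat.completions', type: 'llm_call',
      startedAt: '2025-01-01T00:00:00.000Z', endedAt: '2025-01-01T00:00:02.500Z',
      promptTokens: 500, completionTokens: 200,
      costUsd: 0.0035, // Per-span cost shows up in the breakdown
      status: 'completed' }
  ]
});
```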
## Span Mapping Reference
### AI SDK → Teckel Type Mapping
| AI SDK Span Name | Teckel Type | Description |
|---|---|---|
| `ai.generateText` | `agent` | Root span for text generation |
| `ai.streamText` | `agent` | Root span for streaming |
| `ai.generateObject` | `agent` | Root span for structured output |
| `ai.generateText.doGenerate` | `llm_call` | Actual LLM API call |
| `ai.streamText.doStream` | `llm_call` | Streaming LLM call |
| `ai.toolCall` | `tool_call` | Tool/function execution |
| Other | `custom` | Catch-all |
### OTel Attribute Extraction
| Teckel Field | OTel Source |
|---|---|
| `spanId` | `span.spanContext().spanId` (hex → UUID) |
| `parentSpanId` | `span.parentSpanContext.spanId` |
| `name` | `span.name` |
| `startedAt` | `span.startTime` (ISO 8601) |
| `endedAt` | `span.endTime` (ISO 8601) |
| `model` | `gen_ai.request.model` |
| `promptTokens` | `gen_ai.usage.input_tokens` |
| `completionTokens` | `gen_ai.usage.output_tokens` |
| `toolName` | `ai.toolCall.name` |
| `toolArguments` | `ai.toolCall.args` |
| `toolResult` | `ai.toolCall.result` |
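This also means hand-written OTel spans get extracted, as long as they carry these attributes. A sketch using the collector's tracer directly (the span name `custom-llm-call` and the token numbers are arbitrary):

```typescript
const otelTracer = spanCollector.getTracer();

otelTracer.startActiveSpan('custom-llm-call', (span) => {
  // Standard GenAI attributes map to Teckel fields per the table above
  span.setAttribute('gen_ai.request.model', 'gpt-5');
  span.setAttribute('gen_ai.usage.input_tokens', 120);
  span.setAttribute('gen_ai.usage.output_tokens', 45);
  span.end();
});
```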
### What OTel Doesn't Cover
Teckel adds value beyond OTel GenAI conventions:
| Teckel Field | Why Not in OTel |
|---|---|
| `documents` | RAG/retrieval is not in the OTel GenAI spec |
| `documents[].similarity` | Vector similarity is app-specific |
| `documents[].source` | Knowledge platform is app-specific |
Add `documents` explicitly:
```typescript
tracer.trace({
  spans,
  documents: chunks.map((chunk, i) => ({
    id: chunk.id,
    name: chunk.title,
    text: chunk.content,
    similarity: chunk.score,
    rank: i
  }))
});
```
## Troubleshooting
| Issue | Solution |
|---|---|
| Spans not captured | Ensure `isEnabled: true` and pass `tracer: spanCollector.getTracer()` |
| Token counts zero | Streaming may not report until complete; tool calls don't have tokens |
| Missing attributes | Add `recordInputs: true, recordOutputs: true` to the telemetry config |
| OTel packages missing | Install `@opentelemetry/api` and `@opentelemetry/sdk-trace-base` |
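For the "Missing attributes" case, the flags sit alongside the telemetry options already shown in Quick Start:

```typescript
const result = await generateText({
  model: openai('gpt-5'),
  prompt: userQuery,
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: true,   // Record prompt content on the spans
    recordOutputs: true,  // Record completion content on the spans
    tracer: spanCollector.getTracer()
  }
});
```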
## Best Practices
- **Always shut down the collector:** `await spanCollector.shutdown()`
- **Use manual collection for RAG:** auto-tracing can't attach `documents`
- **Match `sessionId`:** set it in both the telemetry metadata and the trace for consistency (see the sketch below)
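A minimal sketch of the `sessionId` pairing; the id value is a placeholder:

```typescript
const sessionId = 'session-123'; // Placeholder id

const result = await generateText({
  model: openai('gpt-5'),
  prompt: userQuery,
  experimental_telemetry: {
    isEnabled: true,
    metadata: { sessionId }, // Same id on the OTel side
    tracer: spanCollector.getTracer()
  }
});

tracer.trace({
  query: userQuery,
  response: result.text,
  spans: spanCollector.getSpans(),
  sessionId // Same id on the Teckel trace
});
```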
See also: TypeScript SDK Reference | HTTP API Reference