OpenTelemetry Integration

Capture detailed spans from LLM calls and send them to Teckel.

Prerequisites: This guide extends the TypeScript SDK Reference. Read that first for basic setup.

Span Capture Options

Manual Spans (No Dependencies)

Build span objects yourself. Works with any LLM SDK.

  • You control: Everything - you call tracer.trace() with your data
  • Can attach: All fields including documents, systemPrompt
  • Best for: Simple setups, any LLM provider

TeckelSpanCollector (Any OTel)

Collects OTel spans in memory; you then attach them to your trace. Works with any OTel instrumentation.

  • You control: When to send, what to attach
  • Can attach: All fields including documents, systemPrompt
  • Extracts: Standard OTel GenAI attributes (gen_ai.request.model, gen_ai.usage.*)
  • Best for: Any OTel-instrumented library, RAG document tracing

TeckelSpanProcessor (Zero Config, AI SDK Only)

OTel SpanProcessor that auto-sends traces when AI SDK root spans complete.

  • You control: Nothing after setup - fully automatic
  • Auto-extracts: query, response, systemPrompt, model, tokens, latencyMs
  • From your config: sessionId, userId, agentName (set in experimental_telemetry)
  • Cannot attach: documents (not in OTel spec)
  • Best for: AI SDK apps without RAG

Manual Spans (No OTel Required)

Create spans yourself; this works with any LLM SDK:

import { TeckelTracer } from 'teckel-ai';
import OpenAI from 'openai';

const tracer = new TeckelTracer({ apiKey: process.env.TECKEL_API_KEY });
const openai = new OpenAI();

async function tracedCompletion(query: string) {
  const startTime = new Date().toISOString();

  const completion = await openai.chat.completions.create({
    model: 'gpt-5',
    messages: [{ role: 'user', content: query }]
  });

  const endTime = new Date().toISOString();

  tracer.trace({
    query,
    response: completion.choices[0].message.content ?? '',
    model: 'gpt-5',
    tokens: {
      prompt: completion.usage?.prompt_tokens ?? 0,
      completion: completion.usage?.completion_tokens ?? 0,
      total: completion.usage?.total_tokens ?? 0
    },
    spans: [{
      name: 'openai.chat.completions',
      type: 'llm_call',
      startedAt: startTime,
      endedAt: endTime,
      model: 'gpt-5',
      promptTokens: completion.usage?.prompt_tokens,
      completionTokens: completion.usage?.completion_tokens,
      status: 'completed'
    }]
  });

  return completion;
}

No OTel packages required; just construct the span data manually.

With Existing OTel Instrumentation

If you already have OTel set up, add TeckelSpanProcessor to capture spans:

import { TeckelSpanProcessor } from 'teckel-ai/otel';
import { NodeSDK } from '@opentelemetry/sdk-node';

new NodeSDK({
  spanProcessors: [
    new TeckelSpanProcessor({
      apiKey: process.env.TECKEL_API_KEY
    })
  ]
}).start();

// Your existing instrumented code - spans auto-captured

Works with any OTel instrumentation (OpenAI, LangChain, etc.).

With Vercel AI SDK

The examples below use Vercel AI SDK's experimental_telemetry, but the patterns apply to any OTel-instrumented library.

Installation

npm install teckel-ai @opentelemetry/api @opentelemetry/sdk-trace-base

The OTel packages are peer dependencies; they are only needed when using teckel-ai/otel.

Quick Start

import { TeckelTracer } from 'teckel-ai';
import { TeckelSpanCollector } from 'teckel-ai/otel';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const tracer = new TeckelTracer({ apiKey: process.env.TECKEL_API_KEY });
const spanCollector = new TeckelSpanCollector();

const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'What is RAG?',
  experimental_telemetry: {
    isEnabled: true,
    tracer: spanCollector.getTracer()
  }
});

tracer.trace({
  query: 'What is RAG?',
  response: result.text,
  spans: spanCollector.getSpans() // Tokens auto-calculated
});

await spanCollector.shutdown();
await tracer.flush();

TeckelSpanCollector vs TeckelSpanProcessor

TeckelSpanCollector (Any OTel)

Collects spans in memory from any OTel instrumentation. You call tracer.trace() when ready, with full control over what gets sent.

const spanCollector = new TeckelSpanCollector();

const result = await generateText({
  model: openai('gpt-5'),
  prompt: userQuery,
  tools: { searchKnowledgeBase },
  experimental_telemetry: {
    isEnabled: true,
    tracer: spanCollector.getTracer()
  }
});

// You control the trace - attach documents, systemPrompt, etc.
tracer.trace({
  query: userQuery,
  response: result.text,
  spans: spanCollector.getSpans(),
  documents: retrievedDocs,
  systemPrompt: SYSTEM_PROMPT,
  sessionId,
  userId
});

await spanCollector.shutdown();

TeckelSpanProcessor (AI SDK Only)

Auto-sends traces when AI SDK root spans complete. No tracer.trace() call needed.

import { TeckelSpanProcessor } from 'teckel-ai/otel';
import { NodeSDK } from '@opentelemetry/sdk-node';

// One-time setup in instrumentation.ts
new NodeSDK({
  spanProcessors: [
    new TeckelSpanProcessor({
      apiKey: process.env.TECKEL_API_KEY
    })
  ]
}).start();

// Traces sent automatically - no tracer.trace() call
const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'Hello',
  experimental_telemetry: {
    isEnabled: true,
    functionId: 'my-chatbot', // → agentName
    metadata: { userId, sessionId } // → extracted automatically
  }
});

Auto-extracted fields: query, response, systemPrompt, model, tokens, latencyMs, sessionId, userId, agentName

Setting sessionId, userId, agentName (AI SDK): You must set these in your telemetry config for them to be captured:

  • functionId → agentName (defaults to "unknown" if not set)
  • metadata.sessionId → sessionId
  • metadata.userId → userId

Cannot attach: documents (use TeckelSpanCollector if you need RAG tracking)

Feedback: User feedback (thumbs up/down, ratings) happens after the trace is sent and isn't part of OTel spans. Use tracer.feedback() separately from your UI handler.
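
A minimal sketch of wiring this up from a UI handler. The payload field names below (traceId, rating) are assumptions for illustration; see the TypeScript SDK Reference for the actual feedback() signature.

// Hypothetical sketch - payload shape is an assumption
async function onThumbsUp(traceId: string) {
  await tracer.feedback({
    traceId,      // trace being rated (assumed field name)
    rating: 'up'  // thumbs up/down (assumed field name)
  });
}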

Token & Cost Aggregation

The SDK automatically sums promptTokens and completionTokens from all spans. Cost is calculated server-side from token counts.

// Tokens and cost calculated automatically - no manual work needed
tracer.trace({
  query: userQuery,
  response: result.text,
  spans: spanCollector.getSpans()
});

Why this matters: AI SDK's usage object may only reflect the last step in multi-turn flows. Span aggregation captures tokens from ALL LLM calls.

Override when needed:

tracer.trace({
  spans,
  tokens: { prompt: 500, completion: 200, total: 700 }, // Your values used
  costUsd: 0.0042 // Optional: provide your own cost
});

You can also provide costUsd on individual spans for detailed breakdown.
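
For example, a per-span breakdown (span names and values are illustrative):

tracer.trace({
  spans: [{
    name: 'openai.chat.completions',
    type: 'llm_call',
    startedAt,
    endedAt,
    costUsd: 0.003 // cost attributed to this span
  }]
});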

Span Mapping Reference

AI SDK → Teckel Type Mapping

AI SDK Span Name             Teckel Type   Description
ai.generateText              agent         Root span for text generation
ai.streamText                agent         Root span for streaming
ai.generateObject            agent         Root span for structured output
ai.generateText.doGenerate   llm_call      Actual LLM API call
ai.streamText.doStream       llm_call      Streaming LLM call
ai.toolCall                  tool_call     Tool/function execution
(other)                      custom        Catch-all
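
These type values are visible on collected spans, so you can inspect the mapping yourself. A small sketch, assuming spanCollector.getSpans() returns the same span shape you pass to tracer.trace():

const spans = spanCollector.getSpans();

// Tool executions mapped from ai.toolCall OTel spans
const toolSpans = spans.filter(s => s.type === 'tool_call');
console.log(toolSpans.map(s => s.name));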

OTel Attribute Extraction

Teckel Field        OTel Source
spanId              span.spanContext().spanId (hex → UUID)
parentSpanId        span.parentSpanContext.spanId
name                span.name
startedAt           span.startTime (ISO 8601)
endedAt             span.endTime (ISO 8601)
model               gen_ai.request.model
promptTokens        gen_ai.usage.input_tokens
completionTokens    gen_ai.usage.output_tokens
toolName            ai.toolCall.name
toolArguments       ai.toolCall.args
toolResult          ai.toolCall.result
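
For instance, an llm_call span extracted from an AI SDK call would look roughly like this (all values illustrative):

{
  spanId: '8448eb21-7c3a-4f5e-9d2b-1a6c0e4f8b90',       // hex span ID converted to UUID
  parentSpanId: '5b8efff7-98a3-4d6e-8c1f-2b7d9e0a4c61',
  name: 'ai.generateText.doGenerate',
  type: 'llm_call',
  startedAt: '2025-01-01T12:00:00.000Z',
  endedAt: '2025-01-01T12:00:01.250Z',
  model: 'gpt-5',          // gen_ai.request.model
  promptTokens: 120,       // gen_ai.usage.input_tokens
  completionTokens: 48     // gen_ai.usage.output_tokens
}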

What OTel Doesn't Cover

Teckel adds value beyond OTel GenAI conventions:

Teckel Field              Why Not in OTel
documents                 RAG/retrieval not in OTel GenAI spec
documents[].similarity    Vector similarity is app-specific
documents[].source        Knowledge platform is app-specific

Add documents explicitly:

tracer.trace({
  spans,
  documents: chunks.map((chunk, i) => ({
    id: chunk.id,
    name: chunk.title,
    text: chunk.content,
    similarity: chunk.score,
    rank: i
  }))
});

Troubleshooting

Issue                   Solution
Spans not captured      Ensure isEnabled: true and pass tracer: spanCollector.getTracer()
Token counts zero       Streaming may not report until complete; tool calls don't have tokens
Missing attributes      Add recordInputs: true, recordOutputs: true to the telemetry config
OTel packages missing   Install @opentelemetry/api and @opentelemetry/sdk-trace-base
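
For example, the "Missing attributes" fix applied to a generateText call:

const result = await generateText({
  model: openai('gpt-5'),
  prompt: 'What is RAG?',
  experimental_telemetry: {
    isEnabled: true,
    recordInputs: true,   // record prompts as span attributes
    recordOutputs: true,  // record completions as span attributes
    tracer: spanCollector.getTracer()
  }
});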

Best Practices

  1. Always shut down the collector: await spanCollector.shutdown() - see the sketch below
  2. Use Manual Collection for RAG: Auto-tracing can't attach documents
  3. Match sessionId: Set in both telemetry metadata and trace for consistency
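
A minimal pattern for practice 1, making sure the collector shuts down even if the LLM call throws:

const spanCollector = new TeckelSpanCollector();

try {
  const result = await generateText({
    model: openai('gpt-5'),
    prompt: userQuery,
    experimental_telemetry: { isEnabled: true, tracer: spanCollector.getTracer() }
  });

  tracer.trace({
    query: userQuery,
    response: result.text,
    spans: spanCollector.getSpans()
  });
} finally {
  await spanCollector.shutdown(); // runs on success and on error
}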

See also: TypeScript SDK Reference | HTTP API Reference