TypeScript SDK Reference

The official TypeScript SDK for Teckel AI. It handles batching, retries, and errors automatically.

Field constraints and validation rules: See HTTP API Reference for complete field specifications.

Installation

npm install teckel-ai

Requirements: Node.js 18+, TypeScript 4.5+ (optional)

Runtimes: Node.js, Bun, Deno (npm:teckel-ai), Cloudflare Workers, AWS Lambda, Vercel Edge
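On Deno, for example, the package resolves through the npm: specifier listed above. A minimal sketch:

// Deno: load the SDK via the npm: specifier
import { TeckelTracer } from "npm:teckel-ai";

const tracer = new TeckelTracer({
  apiKey: Deno.env.get("TECKEL_API_KEY")!
});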

Quick Start

import { TeckelTracer } from 'teckel-ai';

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY
});

tracer.trace({
  query: "How do I reset my password?",
  response: "Go to Settings > Security...",
  sessionId: "chat-session-42",
  userId: "user@example.com",
  model: "gpt-5",
  latencyMs: 234,
  documents: [{
    id: "password-reset-guide.md",
    name: "Password Reset Guide",
    text: "To reset your password...",
    similarity: 0.92
  }]
});

// Serverless: REQUIRED before returning
await tracer.flush(5000);

API Reference

Constructor

new TeckelTracer(config: TeckelConfig)
interface TeckelConfig {
  apiKey: string;           // Required: tk_live_...
  endpoint?: string;        // Default: "https://app.teckel.ai/api"
  debug?: boolean;          // Default: false
  timeoutMs?: number;       // Default: 5000
  batch?: BatchConfig;      // See below
}

interface BatchConfig {
  maxSize?: number;         // Default: 100 traces
  maxBytes?: number;        // Default: 5MB
  flushIntervalMs?: number; // Default: 100ms
}
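A sketch combining the common options; the values here are illustrative, not recommendations:

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY, // tk_live_...
  debug: true,                        // log SDK activity for troubleshooting
  timeoutMs: 10000,                   // allow slower networks
  batch: { flushIntervalMs: 250 }     // flush queued traces every 250ms
});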

tracer.trace(data)

Submit a trace. Fire-and-forget, non-blocking.

tracer.trace(data: TraceData): void

See TraceData type below. For field constraints, see HTTP API Reference.

Token auto-aggregation: If you provide spans but not tokens, the SDK sums promptTokens and completionTokens from all spans automatically.

Cost calculation: Cost is automatically calculated server-side from token counts. Provide costUsd on traces or spans to override with your own values.
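For example, a trace submitted with spans but no tokens field gets trace-level counts summed from the spans. A sketch with made-up counts:

tracer.trace({
  query: "Summarize this ticket",
  response: "The customer reports...",
  spans: [
    { name: "draft", type: "llm_call", startedAt: "2025-01-01T00:00:00Z",
      model: "gpt-5", promptTokens: 800, completionTokens: 150 },
    { name: "refine", type: "llm_call", startedAt: "2025-01-01T00:00:02Z",
      model: "gpt-5", promptTokens: 950, completionTokens: 120 }
  ]
  // tokens aggregates to { prompt: 1750, completion: 270, total: 2020 };
  // cost is computed server-side, or pass costUsd to override it
});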

tracer.feedback(data)

Submit user feedback for a trace or session.

tracer.feedback(data: FeedbackData): void

See FeedbackData type below.
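For example, to attach a thumbs-down and the user's explanation to a trace submitted earlier (traceId is whatever UUID you passed or captured for that trace):

tracer.feedback({
  traceId,
  type: 'thumbs_down',
  comment: "Answer linked to an outdated guide"
});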

tracer.flush(timeoutMs?)

Wait for queued traces to send. Required in serverless.

tracer.flush(timeoutMs?: number): Promise<void>

Throws on timeout. Default: 5000ms.

tracer.destroy()

Cleanup tracer resources. Flushes pending traces and stops auto-flush timer.

tracer.destroy(): Promise<void>
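A typical place to call it is a shutdown hook in a long-running Node.js server. A sketch:

// Flush remaining traces before the process exits
process.on('SIGTERM', async () => {
  await tracer.destroy();
  process.exit(0);
});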

Types

TraceData

interface TraceData {
  query: string;          // User's question
  response: string;       // AI response
  traceId?: string;       // UUID (auto-generated if omitted)
  sessionId?: string;     // Groups traces into conversations
  userId?: string;        // End-user identifier
  agentName?: string;     // Agent/workflow identifier
  model?: string;         // LLM model name
  latencyMs?: number;     // Response time in ms
  tokens?: TokenUsage;    // Token counts (auto-calculated from spans)
  costUsd?: number;       // Cost in USD (auto-calculated from tokens)
  systemPrompt?: string;  // LLM system instructions
  documents?: Document[]; // RAG sources
  spans?: SpanData[];     // OTel spans
  metadata?: Record<string, unknown>;
}

Document

interface Document {
  id: string;           // Your document identifier (any format)
  name: string;         // Human-readable name
  text: string;         // Chunk content sent to LLM
  lastUpdated?: string; // ISO 8601; used for freshness analysis
  url?: string;         // Link to source
  source?: string;      // Platform: 'confluence', 'slack', 'gdrive'
  fileFormat?: string;  // Format: 'pdf', 'md', 'docx'
  similarity?: number;  // 0-1 relevance score
  rank?: number;        // Position in results (0 = first)
  ownerEmail?: string;  // Document owner
}

SpanData

For automatic span collection, see OpenTelemetry Integration.

interface SpanData {
  name: string;           // Span name
  startedAt: string;      // ISO 8601
  type?: SpanType;        // 'llm_call' | 'tool_call' | 'retrieval' | 'agent' | 'guardrail' | 'custom'
  spanId?: string;        // UUID (auto-generated)
  parentSpanId?: string;  // Parent for nesting
  endedAt?: string;       // ISO 8601
  durationMs?: number;    // Duration in ms
  status?: 'running' | 'completed' | 'error';
  statusMessage?: string; // Error message

  // LLM calls
  model?: string;
  promptTokens?: number;  // Summed for trace-level tokens
  completionTokens?: number;
  costUsd?: number;       // Cost for this span

  // Tool calls
  toolName?: string;
  toolArguments?: Record<string, unknown>;
  toolResult?: Record<string, unknown>;

  // Generic I/O (for non-tool spans)
  input?: Record<string, unknown>;
  output?: Record<string, unknown>;

  metadata?: Record<string, unknown>;
}
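Spans can also be built by hand when you are not using OTel. A sketch of an agent span with a nested tool call and LLM call; IDs, timestamps, and counts are illustrative:

const agentSpanId = crypto.randomUUID(); // global in Node.js 18+

tracer.trace({
  query,
  response,
  spans: [
    {
      name: "support-agent",
      type: "agent",
      spanId: agentSpanId,
      startedAt: "2025-01-01T00:00:00Z",
      status: "completed"
    },
    {
      name: "kb-lookup",
      type: "tool_call",
      parentSpanId: agentSpanId, // nests under the agent span
      startedAt: "2025-01-01T00:00:00.010Z",
      durationMs: 85,
      status: "completed",
      toolName: "search_kb",
      toolArguments: { query: "reset password" },
      toolResult: { matches: 3 }
    },
    {
      name: "answer",
      type: "llm_call",
      parentSpanId: agentSpanId,
      startedAt: "2025-01-01T00:00:00.100Z",
      model: "gpt-5",
      promptTokens: 640,
      completionTokens: 90,
      status: "completed"
    }
  ]
});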

FeedbackData

interface FeedbackData {
  traceId?: string;   // Target trace (UUID) - one required
  sessionId?: string; // Target session - one required
  type: 'thumbs_up' | 'thumbs_down' | 'flag' | 'rating';
  value?: string;     // For ratings: '1'-'5'
  comment?: string;   // User explanation
}

TokenUsage

interface TokenUsage {
  prompt: number;
  completion: number;
  total: number;
}
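If your LLM provider already reports counts, you can pass them directly instead of relying on span aggregation. A sketch mapping an OpenAI-style usage object (completion stands in for the provider response):

const usage = completion.usage; // e.g. from an OpenAI chat completions response

tracer.trace({
  query,
  response: completion.choices[0].message.content,
  tokens: {
    prompt: usage.prompt_tokens,
    completion: usage.completion_tokens,
    total: usage.total_tokens
  }
});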

Patterns

Serverless (Critical)

Functions terminate immediately after returning. Without flush(), traces are lost.

// Lambda, Vercel, Cloudflare Workers
export async function handler(event) {
const response = await generateResponse(event.query);

tracer.trace({ query: event.query, response });
await tracer.flush(5000); // REQUIRED

return { body: response };
}

Long-Running Servers

No flush() needed; traces send in the background.

// Express, Fastify, etc.
app.post('/chat', async (req, res) => {
const response = await generateResponse(req.body.query);
tracer.trace({ query: req.body.query, response });
res.json({ response }); // Trace sends async
});

High-Throughput Batching

Default config handles most cases. For high volume:

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  batch: {
    maxSize: 100,        // Flush after N traces
    maxBytes: 5_000_000, // Flush after 5MB
    flushIntervalMs: 100 // Auto-flush interval
  }
});

Multi-Turn Conversations

Group conversation turns with sessionId:

const sessionId = "user-123-conv-456";

// Turn 1
tracer.trace({ sessionId, query: "What is X?", response: "X is..." });

// Turn 2
tracer.trace({ sessionId, query: "Tell me more", response: "More about X..." });

// Session-level feedback
tracer.feedback({ sessionId, type: 'thumbs_up' });

RAG Document Tracking

Include retrieved documents for quality analysis:

const chunks = await vectorSearch(query, { limit: 5 });
const response = await generateWithContext(query, chunks);

tracer.trace({
  query,
  response,
  documents: chunks.map((chunk, i) => ({
    id: chunk.id,
    name: chunk.title,
    text: chunk.content,
    similarity: chunk.score,
    rank: i,
    url: chunk.sourceUrl,
    lastUpdated: chunk.modifiedAt?.toISOString()
  }))
});

OpenTelemetry Integration

Collect spans automatically with OTel:

import { TeckelTracer } from 'teckel-ai';
import { TeckelSpanCollector } from 'teckel-ai/otel';
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const tracer = new TeckelTracer({ apiKey: process.env.TECKEL_API_KEY });
const spanCollector = new TeckelSpanCollector();

const result = await generateText({
  model: openai('gpt-5'),
  prompt: userQuery,
  experimental_telemetry: {
    isEnabled: true,
    tracer: spanCollector.getTracer()
  }
});

tracer.trace({
  query: userQuery,
  response: result.text,
  spans: spanCollector.getSpans() // Tokens auto-calculated
});

await spanCollector.shutdown();

See OpenTelemetry Integration for complete documentation.

Error Handling

  • trace() never throws - logs failures in debug mode
  • feedback() never throws - logs failures in debug mode
  • flush() throws on timeout - catch to monitor data loss
  • Auto-retries once on 429/5xx/network errors (250-350ms jitter)

try {
  await tracer.flush(5000);
} catch (err) {
  logger.warn('Trace flush timeout - some traces may be lost');
}

Troubleshooting

Issue                 Solution
Traces not appearing  Add await tracer.flush() in serverless
DateTime errors       Use .toISOString() for timestamps
Debug issues          Set debug: true in constructor

See Troubleshooting Guide for more.


See also: HTTP API Reference | OpenTelemetry Integration | Getting Started

Version 0.7.0