TypeScript SDK Reference

Complete reference for the teckel-ai TypeScript/JavaScript SDK. This SDK provides a simple way to track AI conversations, get quality insights, and identify AI knowledge gaps.

Installation

npm install teckel-ai

Requirements:

  • Node.js 18+ (or Bun, Deno, serverless runtimes)
  • TypeScript 4.5+ (optional but recommended)

Quick Start

import { TeckelTracer } from 'teckel-ai';

// Initialize once at startup
const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY
});

// In your API handler
async function handleChat(userQuestion: string, sessionId: string) {
  // Start conversation
  const conversation = tracer.start({
    sessionRef: sessionId,
    userRef: 'user@example.com'
  });

  // Your existing RAG logic
  const chunks = await vectorDB.search(userQuestion);
  const answer = await llm.generate(userQuestion, chunks);

  // Map chunks to Teckel format
  const documents = chunks.map((chunk, index) => ({
    documentRef: chunk.id,
    documentName: chunk.title,
    documentText: chunk.content,
    documentLastUpdated: chunk.lastModified,
    sourceUri: chunk.url,
    similarity: chunk.score,
    rank: index
  }));

  // Send trace (non-blocking)
  conversation.trace({
    query: userQuestion,
    response: answer,
    documents: documents,
    model: 'gpt-5',
    responseTimeMs: 1200
  });

  // For serverless: flush before returning
  await conversation.flush(5000);

  return answer;
}

API Reference

TeckelTracer

Main SDK class.

Constructor

new TeckelTracer(config: TeckelConfig)
  • apiKey (string, required): Your Teckel API key (starts with tk_live_)
  • endpoint (string, optional, default "https://app.teckel.ai/api"): API base URL
  • debug (boolean, optional, default false): Enable debug logging
  • timeoutMs (number, optional, default 5000): Network timeout in milliseconds
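
For example, a tracer with every option set explicitly (the values shown match the defaults above):

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,    // required; starts with tk_live_
  endpoint: 'https://app.teckel.ai/api', // default API base URL
  debug: false,                          // default; enable while troubleshooting
  timeoutMs: 5000                        // default network timeout (ms)
});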

tracer.start()

Start or continue a conversation.

tracer.start(options?: ConversationOptions): Conversation
  • sessionRef (string, recommended): Your conversation identifier for same-chat queries; auto-generated if not provided
  • userRef (string, optional): Your user identifier; null if not provided
  • metadata (object, optional): Custom context

Minimal usage (one-off trace):

const conversation = tracer.start(); // sessionRef auto-generated
conversation.trace({ query: '...', response: '...' });
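
A fuller call that ties the conversation to your own session and user (the metadata keys here are illustrative):

const conversation = tracer.start({
  sessionRef: 'chat-abc-123',       // your conversation ID
  userRef: 'user@example.com',      // your user ID
  metadata: { plan: 'enterprise' }  // custom context; keys are up to you
});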

Conversation

conversation.trace()

Record a query-response interaction. Fire-and-forget by default.

conversation.trace(data: TraceData): TraceResult

Returns: { traceRef: string, turnNumber: number }

  • query (string, required): User's question
  • response (string, required): AI-generated answer
  • documents (Document[], recommended): Retrieved document information for RAG; see Document below
  • traceRef (string, optional): Trace correlation ID
  • model (string, optional): LLM model (e.g., "gpt-5")
  • responseTimeMs (number, optional): Latency in milliseconds
  • tokens (TokenUsage, optional): Token usage; see TokenUsage below
  • metadata (object, optional): Custom context

Example:

const result = conversation.trace({
  query: "How do I reset my password?",
  response: "Go to Settings > Security...",
  model: "gpt-5",
  documents: [
    {
      documentRef: "kb-123",
      documentName: "Password Reset Guide",
      documentText: "To reset your password...",
      documentLastUpdated: "2025-01-15T10:00:00Z",
      sourceUri: "https://kb.example.com/security",
      similarity: 0.92,
      rank: 0
    }
  ]
});

console.log(result.traceRef); // "session-123:1"

conversation.feedback()

Add user feedback signal.

await conversation.feedback(data: FeedbackData): Promise<void>
  • type (FeedbackType, required): "thumbs_up", "thumbs_down", "flag", or "rating"
  • value (string, optional): For ratings: "1" to "5"
  • comment (string, optional): User's explanation
  • traceRef (string, optional): Link to a specific trace

Example:

await conversation.feedback({
  type: "thumbs_down",
  comment: "Information was outdated"
});
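
To tie feedback to a specific interaction, pass the traceRef returned by trace(); ratings also take value:

const result = conversation.trace({ query: userQuestion, response: answer });

await conversation.feedback({
  type: "rating",
  value: "4",                // "1" to "5"
  traceRef: result.traceRef  // links the rating to that specific trace
});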

conversation.flush()

Wait for queued traces to send. Required in serverless environments to prevent data loss.

await conversation.flush(timeoutMs?: number): Promise<void>

Throws: Error on timeout

Example:

// Serverless: flush before returning
try {
  await conversation.flush(5000);
} catch (err) {
  logger.warn('Flush timeout', { err });
}

conversation.end()

End conversation and flush pending traces.

await conversation.end(): Promise<void>
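
Because end() flushes pending sends and throws on flush timeout (see Timeouts and Flush below), wrap it in the same try/catch pattern as flush() (logger is whatever you already use):

// When the chat session completes
try {
  await conversation.end();
} catch (err) {
  logger.warn('End timed out while flushing', { err });
}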

Read-only Properties

conversation.id       // session reference
conversation.turns    // number of traces
conversation.started  // start time

Type Definitions

Document

interface Document {
  // Required
  documentRef: string;           // Your document ID
  documentName: string;          // Human-readable name
  documentText: string;          // Chunk content

  // Recommended
  documentLastUpdated?: string;  // ISO 8601 timestamp
  sourceUri?: string;            // URL or path

  // Optional
  sourceType?: string;           // e.g., 'confluence', 'slack'
  similarity?: number;           // 0-1 score
  rank?: number;                 // Position (0 = first)
  ownerEmail?: string;           // Owner email
  documentType?: string;         // e.g., 'pdf', 'markdown'
}

TokenUsage

interface TokenUsage {
  prompt: number;      // Input tokens
  completion: number;  // Output tokens
  total: number;       // Total tokens
}
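
If your LLM client reports usage, you can forward it directly. This sketch assumes an OpenAI-style usage object; adapt the field names to your provider:

// Hypothetical OpenAI-style usage object from your LLM client's response
const usage = completion.usage;

conversation.trace({
  query: userQuestion,
  response: answer,
  tokens: {
    prompt: usage.prompt_tokens,
    completion: usage.completion_tokens,
    total: usage.total_tokens
  }
});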

FeedbackType

type FeedbackType = 'thumbs_up' | 'thumbs_down' | 'flag' | 'rating'

Usage Patterns

Serverless (Vercel, Lambda, Cloudflare Workers)

Must call flush() before returning to prevent data loss.

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY
});

export async function handler(request) {
  const conversation = tracer.start({
    sessionRef: request.sessionId
  });

  const answer = await generateAnswer(request.question);

  conversation.trace({
    query: request.question,
    response: answer,
    documents: retrievedDocs
  });

  // CRITICAL: Flush before returning (3-5s recommended)
  await conversation.flush(5000);

  return { statusCode: 200, body: answer };
}

Long-Running Servers (Express, Fastify, etc.)

No flush() needed; traces send in the background.

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY
});

app.post('/chat', async (req, res) => {
  const conversation = tracer.start({
    sessionRef: req.session.id
  });

  const answer = await generateAnswer(req.body.question);

  conversation.trace({
    query: req.body.question,
    response: answer,
    documents: retrievedDocs
  });

  // No flush needed
  res.json({ answer });
});

Non-RAG Systems

Omit documents if not using retrieval.

conversation.trace({
  query: userQuestion,
  response: answer,
  model: 'gpt-5'
});

SDK Behavior

Error Handling

  • trace() and feedback() never throw; failures are logged in debug mode
  • flush() and end() throw on timeout; catch to monitor potential data loss

Retry Logic

Automatically retries once on transient errors:

  • HTTP 429 (rate limit)
  • HTTP 5xx (server errors)
  • Network failures

Retry pattern:

  1. Initial attempt
  2. Wait 250-350ms (jittered)
  3. Single retry
  4. Log failure (debug mode)

Worst-case total time per send: 2 * timeoutMs + ~300ms
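
Conceptually, the retry behavior is equivalent to this sketch (not the SDK's actual source; sendOnce is a hypothetical helper standing in for one HTTP attempt bounded by timeoutMs):

// Hypothetical sketch of the documented retry behavior
async function sendWithRetry(sendOnce: () => Promise<Response>): Promise<Response | undefined> {
  try {
    const res = await sendOnce();
    // Retry only on transient errors: 429 and 5xx
    if (res.status !== 429 && res.status < 500) return res;
  } catch {
    // Network failure: fall through to the single retry
  }

  // Jittered 250-350ms wait before the one retry
  await new Promise((r) => setTimeout(r, 250 + Math.random() * 100));

  try {
    return await sendOnce();
  } catch (err) {
    console.debug('teckel-ai: send failed after retry', err); // surfaced in debug mode
    return undefined;
  }
}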

Timeouts and Flush

  • timeoutMs: Per-request network timeout for SDK HTTP calls. If a request exceeds this, it is aborted. With one retry, total worst-case send time is approximately 2 * timeoutMs + ~300ms.
  • flush(timeoutMs?): Bounded wait for the internal send queue to drain. In serverless, call this before returning to avoid data loss. If no argument is passed, it uses the SDK timeoutMs value.
  • Recommendation for serverless: await conversation.flush(3000–5000) to balance reliability and latency.
  • end(): Convenience that flushes pending sends and marks the conversation as ended. It throws on flush timeout just like flush().

Rate Limits

  • Default: 1,000 requests/hour per organization
  • Reset: Top of each hour
  • Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset

Contact support@teckel.ai for increases.

Runtime Compatibility

Uses standard Web APIs (fetch, AbortSignal):

  • Node.js 18+
  • Bun 1.0+
  • Deno 1.35+ (npm:teckel-ai)
  • Cloudflare Workers
  • AWS Lambda
  • Vercel Edge Runtime
  • Google Cloud Run

Security: Never expose API keys in browser code. Always call from server/serverless backend.

Best Practices

  1. Initialize TeckelTracer once at startup, reuse across requests
  2. Always call flush() in serverless before returning
  3. Include documentLastUpdated when available (enables freshness scoring)
  4. Use consistent sessionRef and traceRef for tracking
  5. Include model, responseTimeMs, tokens for insights
  6. Set debug: false in production
  7. Call conversation.end() when chat session completes
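
For practice 1, a common approach is a small module that exports a single shared tracer (the file path here is illustrative):

// lib/teckel.ts (illustrative path)
import { TeckelTracer } from 'teckel-ai';

// One tracer per process, reused by every request handler
export const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  debug: false // keep debug off in production (practice 6)
});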

Troubleshooting

Traces not appearing?

  1. Verify API key starts with tk_live_
  2. Check network connectivity to https://app.teckel.ai/api
  3. Enable debug: true to see errors
  4. Look for validation errors in console

Serverless traces dropping?

  1. Ensure await conversation.flush(5000) before returning
  2. Monitor flush timeout errors in logs
  3. Increase timeout if needed (up to 5s for slow networks)

High latency?

  1. Verify trace() is non-blocking (don't await it)
  2. Check timeoutMs configuration (default 5000ms)
  3. Review network connectivity
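
On point 1: trace() is synchronous and only queues the send, so it adds no network round-trip to your handler:

// trace() returns { traceRef, turnNumber } immediately;
// the actual send happens in the background
const result = conversation.trace({ query: userQuestion, response: answer });
console.log(result.turnNumber); // available right away, before any network I/O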

Support

Questions or issues? Email support@teckel.ai.

License

MIT


Version 0.3.4