TypeScript SDK Reference
Complete reference for the teckel-ai TypeScript/JavaScript SDK. This SDK provides a simple way to track AI conversations, get quality insights, and identify AI knowledge gaps.
Installation
npm install teckel-ai
Requirements:
- Node.js 18+ (or Bun, Deno, serverless runtimes)
- TypeScript 4.5+ (optional but recommended)
Quick Start
import { TeckelTracer } from 'teckel-ai';
// Initialize once at startup
const tracer = new TeckelTracer({
apiKey: process.env.TECKEL_API_KEY
});
// In your API handler
async function handleChat(userQuestion: string, sessionId: string) {
// Start conversation
const conversation = tracer.start({
sessionRef: sessionId,
userRef: 'user@example.com'
});
// Your existing RAG logic
const chunks = await vectorDB.search(userQuestion);
const answer = await llm.generate(userQuestion, chunks);
// Map chunks to Teckel format
const documents = chunks.map((chunk, index) => ({
documentRef: chunk.id,
documentName: chunk.title,
documentText: chunk.content,
documentLastUpdated: chunk.lastModified,
sourceUri: chunk.url,
similarity: chunk.score,
rank: index
}));
// Send trace (non-blocking)
conversation.trace({
query: userQuestion,
response: answer,
documents: documents,
model: 'gpt-5',
responseTimeMs: 1200
});
// For serverless: flush before returning
await conversation.flush(5000);
return answer;
}
API Reference
TeckelTracer
Main SDK class.
Constructor
new TeckelTracer(config: TeckelConfig)
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| apiKey | string | Yes | - | Your Teckel API key (starts with tk_live_) |
| endpoint | string | No | "https://app.teckel.ai/api" | API base URL |
| debug | boolean | No | false | Enable debug logging |
| timeoutMs | number | No | 5000 | Network timeout in milliseconds |
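For illustration, a constructor call setting every option (values shown are the defaults or placeholders):
const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY!,    // required
  endpoint: 'https://app.teckel.ai/api',  // default shown
  debug: process.env.NODE_ENV !== 'production',
  timeoutMs: 5000                         // per-request network timeout
});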
tracer.start()
Start or continue a conversation.
tracer.start(options?: ConversationOptions): Conversation
| Field | Type | Required | Description |
|---|---|---|---|
| sessionRef | string | Recommended | Your conversation identifier for grouping same-chat queries; auto-generated if not provided |
| userRef | string | No | Your user identifier; null if not provided |
| metadata | object | No | Custom context |
Minimal usage (one-off trace):
const conversation = tracer.start(); // sessionRef auto-generated
conversation.trace({ query: '...', response: '...' });
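With identifiers and custom context (the metadata keys here are illustrative, not a required schema):
const conversation = tracer.start({
  sessionRef: 'chat-8f2a',     // your own conversation ID
  userRef: 'user@example.com',
  metadata: { plan: 'enterprise', channel: 'web' }  // any custom context
});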
Conversation
conversation.trace()
Record a query-response interaction. Fire-and-forget by default.
conversation.trace(data: TraceData): TraceResult
Returns: { traceRef: string, turnNumber: number }
| Field | Type | Required | Description |
|---|---|---|---|
| query | string | Yes | User's question |
| response | string | Yes | AI-generated answer |
| documents | Document[] (see below) | Recommended | Retrieved document information (for RAG) |
| traceRef | string | No | Trace correlation ID |
| model | string | No | LLM model (e.g., "gpt-5") |
| responseTimeMs | number | No | Latency in milliseconds |
| tokens | TokenUsage (see below) | No | Token usage |
| metadata | object | No | Custom context |
Example:
const result = conversation.trace({
query: "How do I reset my password?",
response: "Go to Settings > Security...",
model: "gpt-5",
documents: [
{
documentRef: "kb-123",
documentName: "Password Reset Guide",
documentText: "To reset your password...",
documentLastUpdated: "2025-01-15T10:00:00Z",
sourceUri: "https://kb.example.com/security",
similarity: 0.92,
rank: 0
}
]
});
console.log(result.traceRef); // "session-123:1"
conversation.feedback()
Add user feedback signal.
await conversation.feedback(data: FeedbackData): Promise<void>
| Field | Type | Required | Description |
|---|---|---|---|
| type | FeedbackType | Yes | "thumbs_up", "thumbs_down", "flag", or "rating" |
| value | string | No | For ratings: "1" to "5" |
| comment | string | No | User's explanation |
| traceRef | string | No | Link to specific trace |
Example:
await conversation.feedback({
type: "thumbs_down",
comment: "Information was outdated"
});
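A rating tied to a specific trace, using the value and traceRef fields from the table above:
await conversation.feedback({
  type: "rating",
  value: "4",                  // ratings are "1" to "5"
  traceRef: result.traceRef,   // from a prior trace() call
  comment: "Helpful but slightly outdated"
});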
conversation.flush()
Wait for queued traces to send. Required for serverless to prevent data loss.
await conversation.flush(timeoutMs?: number): Promise<void>
Throws: Error on timeout
Example:
// Serverless: flush before returning
try {
await conversation.flush(5000);
} catch (err) {
logger.warn('Flush timeout', { err });
}
conversation.end()
End conversation and flush pending traces.
await conversation.end(): Promise<void>
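Example (like flush(), this can throw on flush timeout, so catch it where dropped traces matter):
try {
  await conversation.end();
} catch (err) {
  logger.warn('End timed out while flushing', { err });
}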
Read-only Properties
conversation.id // session reference
conversation.turns // number of traces
conversation.started // start time
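These are handy for your own logging or metrics, for example:
logger.info('Teckel conversation summary', {
  session: conversation.id,
  turns: conversation.turns,
  startedAt: conversation.started
});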
Type Definitions
Document
interface Document {
// Required
documentRef: string; // Your document ID
documentName: string; // Human-readable name
documentText: string; // Chunk content
// Recommended
documentLastUpdated?: string; // ISO 8601 timestamp
sourceUri?: string; // URL or path
// Optional
sourceType?: string; // e.g., 'confluence', 'slack'
similarity?: number; // 0-1 score
rank?: number; // Position (0 = first)
ownerEmail?: string; // Owner email
documentType?: string; // e.g., 'pdf', 'markdown'
}
TokenUsage
interface TokenUsage {
prompt: number; // Input tokens
completion: number; // Output tokens
total: number; // Total tokens
}
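If your LLM client reports usage OpenAI-style (prompt_tokens, completion_tokens, total_tokens — an assumption about your client, not part of this SDK), the mapping is direct:
const tokens: TokenUsage = {
  prompt: completion.usage.prompt_tokens,       // input tokens from your LLM response
  completion: completion.usage.completion_tokens,
  total: completion.usage.total_tokens
};
conversation.trace({ query: userQuestion, response: answer, tokens });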
FeedbackType
type FeedbackType = 'thumbs_up' | 'thumbs_down' | 'flag' | 'rating'
Usage Patterns
Serverless (Vercel, Lambda, Cloudflare Workers)
Must call flush() before returning to prevent data loss.
const tracer = new TeckelTracer({
apiKey: process.env.TECKEL_API_KEY
});
export async function handler(request) {
const conversation = tracer.start({
sessionRef: request.sessionId
});
const answer = await generateAnswer(request.question);
conversation.trace({
query: request.question,
response: answer,
documents: retrievedDocs
});
// CRITICAL: Flush before returning (3-5s recommended)
await conversation.flush(5000);
return { statusCode: 200, body: answer };
}
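On Cloudflare Workers specifically, a common alternative is to hand the flush promise to ctx.waitUntil (a standard Workers API) instead of awaiting it inline. A sketch, assuming workers-types for ExecutionContext:
export default {
  async fetch(request: Request, env: unknown, ctx: ExecutionContext): Promise<Response> {
    const { question, sessionId } = await request.json() as { question: string; sessionId: string };
    const conversation = tracer.start({ sessionRef: sessionId });
    const answer = await generateAnswer(question);
    conversation.trace({ query: question, response: answer });
    // waitUntil keeps the Worker alive until the flush settles,
    // without delaying the response itself
    ctx.waitUntil(conversation.flush(5000).catch(() => {}));
    return Response.json({ answer });
  }
};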
Long-Running Servers (Express, Fastify, etc.)
No flush() needed; traces send in the background.
const tracer = new TeckelTracer({
apiKey: process.env.TECKEL_API_KEY
});
app.post('/chat', async (req, res) => {
const conversation = tracer.start({
sessionRef: req.session.id
});
const answer = await generateAnswer(req.body.question);
conversation.trace({
query: req.body.question,
response: answer,
documents: retrievedDocs
});
// No flush needed
res.json({ answer });
});
Non-RAG Systems
Omit documents if not using retrieval.
conversation.trace({
query: userQuestion,
response: answer,
model: 'gpt-5'
});
SDK Behavior
Error Handling
- trace() and feedback() never throw; failures are logged in debug mode
- flush() and end() throw on timeout; catch them to monitor potential data loss
Retry Logic
Automatically retries once on transient errors:
- HTTP 429 (rate limit)
- HTTP 5xx (server errors)
- Network failures
Retry pattern:
- Initial attempt
- Wait 250-350ms (jittered)
- Single retry
- Log failure (debug mode)
Total time: 2 * timeoutMs + ~300ms
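For intuition, a minimal sketch of this pattern (illustrative only; the SDK implements this internally, and none of these names are public API):
const isTransient = (status: number) => status === 429 || status >= 500;

async function sendWithRetry(send: () => Promise<Response>): Promise<void> {
  try {
    const first = await send();                       // initial attempt
    if (first.ok || !isTransient(first.status)) return;
  } catch {
    // network failure: fall through to the single retry
  }
  await new Promise(r => setTimeout(r, 250 + Math.random() * 100)); // jittered 250-350ms wait
  try {
    await send();                                     // single retry
  } catch (err) {
    console.debug('teckel: send failed after retry', err); // logged in debug mode
  }
}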
Timeouts and Flush
- timeoutMs: Per-request network timeout for SDK HTTP calls. If a request exceeds this, it is aborted. With one retry, the worst-case send time is approximately 2 * timeoutMs + ~300ms.
- flush(timeoutMs?): Bounded wait for the internal send queue to drain. In serverless, call this before returning to avoid data loss. If no argument is passed, the SDK timeoutMs value is used.
- Recommendation for serverless: call await conversation.flush() with a 3000-5000ms timeout to balance reliability and latency.
- end(): Convenience method that flushes pending sends and marks the conversation as ended. It throws on flush timeout, just like flush().
Rate Limits
- Default: 1,000 requests/hour per organization
- Reset: Top of each hour
- Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset
Contact support@teckel.ai for increases.
Runtime Compatibility
Uses standard Web APIs (fetch, AbortSignal):
- Node.js 18+
- Bun 1.0+
- Deno 1.35+ (via npm:teckel-ai)
- Cloudflare Workers
- AWS Lambda
- Vercel Edge Runtime
- Google Cloud Run
Security: Never expose API keys in browser code. Always call from server/serverless backend.
Best Practices
- Initialize TeckelTracer once at startup; reuse it across requests
- Always call flush() in serverless before returning
- Include documentLastUpdated when available (enables freshness scoring)
- Use consistent sessionRef and traceRef values for tracking
- Include model, responseTimeMs, and tokens for richer insights
- Set debug: false in production
- Call conversation.end() when a chat session completes
Troubleshooting
Traces not appearing?
- Verify your API key starts with tk_live_
- Check network connectivity to https://app.teckel.ai/api
- Enable debug: true to see errors
- Look for validation errors in the console
Serverless traces dropping?
- Ensure you await conversation.flush(5000) before returning
- Monitor flush timeout errors in logs
- Increase the timeout if needed (up to 5s for slow networks)
High latency?
- Verify trace() is non-blocking (don't await it)
- Check the timeoutMs configuration (default 5000ms)
- Review network connectivity
Support
- Documentation: docs.teckel.ai
- Dashboard: app.teckel.ai
- Email: support@teckel.ai
- Getting Started: docs.teckel.ai/docs/getting-started
License
MIT
Version 0.3.4