Getting Started with Teckel AI
This guide will help you integrate Teckel AI into your application in about 20 minutes. By the end, you'll be tracking AI conversations and receiving quality insights.
Choose Your Integration Method
Using TypeScript/JavaScript? → Follow this guide, then see the TypeScript SDK Reference for complete API details
Using Python, Go, or another language? → Use our HTTP API directly — see the HTTP API Reference
Not sure which to use? The TypeScript SDK is easier and handles retries, error handling, and serverless flushing automatically. Use HTTP API if you can't use the SDK or prefer direct control.
Prerequisites
Before starting, you need:
- A Teckel AI account (sign up here)
- An API key from your Teckel dashboard
- Access to your application's codebase
Step 1: Get Your API Key
- Log in to your Teckel AI dashboard
- Navigate to Admin Panel > API Keys
- Click Generate Key
- Name your key (e.g., "Production API")
- Copy the key immediately and store it as an environment variable
Security Note: Never commit API keys to version control or expose them in client-side code.
# Add to your .env file
TECKEL_API_KEY=tk_live_your_key_here
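To fail fast if the key is missing, you can add a startup check like this sketch (it assumes a Node-style runtime where your platform or a loader such as dotenv has already read the .env file):

// Fail fast at startup if the key is missing
const apiKey = process.env.TECKEL_API_KEY;
if (!apiKey) {
  throw new Error('TECKEL_API_KEY is not set; check your environment');
}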
Step 2: Install the Tracer SDK
Choose your package manager:
# npm
npm install teckel-ai
# Bun
bun add teckel-ai
# Deno
import { TeckelTracer } from "npm:teckel-ai@^0.3.4";
Step 3: Understand What Data to Send
Teckel tracks AI conversations by receiving data about each interaction. Here's what you can send:
Required Fields
query (string)
- The user's question or input to your AI system
- Example:
"How do I reset my password?"
response (string)
- The AI-generated answer from your LLM
- Example:
"Go to Settings > Security > Reset Password..."
Recommended Fields
traceRef (string)
- Your own correlation ID for this specific trace
- Example:
"session-abc123:turn-5"
sessionRef (string)
- Your unique identifier for this conversation or session
- Use your existing session ID if you have one
- Example:
"user-session-abc123"or"chat-2025-01-15-xyz" - Auto-generated if not provided (useful for one-off traces)
- Why: Groups related queries together for conversation analysis
userRef (string, optional but highly recommended)
- Client-provided reference for the end user asking the question
- Example:
"user@example.com"or"user-id-789"
documents (array, for RAG systems)
- The source documents or chunks your AI used to generate its answer
- Each document should include:
documentRef (string, required)
- Your internal unique ID for this document or chunk
- Example:
"kb-article-425"or"docs/security#password-reset"
documentName (string, required)
- Human-readable name for the document
- Example:
"Password Reset Guide"or"Security FAQ v2.0"
documentText (string, required)
- The document text that was retrieved and passed to your LLM
- This should be the exact chunk, not a summary
- Example:
"To reset your password, navigate to Settings > Security..."
documentLastUpdated (ISO 8601 timestamp, optional but recommended)
- When this document was last modified
- Example:
"2025-01-15T10:00:00Z" - Why: Enables freshness scoring to detect stale information
sourceUri (string, optional but recommended)
- URL or internal file path to the original document
- Example:
"https://docs.example.com/security"or"file://docs/security.md" - Why: Helps track down original document text for your reference
Optional Fields
model (string)
- The LLM model name you're using
- Example:
"gpt-5","claude-4.5-sonnet", - Why: Enables model-level performance comparisons
responseTimeMs (number)
- How long the AI took to respond, in milliseconds
- Example:
1250 (for 1.25 seconds)
- Why: Tracks performance trends and identifies slowdowns
tokens (object)
- Token usage for cost tracking
- Contains:
prompt (number), completion (number), total (number)
- Example:
{ prompt: 324, completion: 89, total: 413 }
similarity (number)
- Vector similarity score from your search (0-1)
- Example:
0.87 (87% similar)
- Why: Tracks retrieval quality
rank (number)
- Position in your search results (0 = first result)
- Example:
0, 1, 2
- Why: Enables precision analysis of whether your top-ranked results are actually used
metadata (object)
- Any additional context you want to track
- Example:
{ retrieval_method: "vector_search", region: "us-east" }
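Combining the fields above, a fully annotated trace call might look like the sketch below (values are illustrative; note that similarity and rank are attached per document, as the Step 4 example shows):

// A trace with the optional fields filled in (illustrative values)
conversation.trace({
  query: 'How do I reset my password?',
  response: 'Go to Settings > Security > Reset Password...',
  documents: documents,                                // see the array example above
  model: 'gpt-5',                                      // enables model-level comparisons
  responseTimeMs: 1250,                                // 1.25 seconds
  tokens: { prompt: 324, completion: 89, total: 413 }, // cost tracking
  metadata: { retrieval_method: 'vector_search', region: 'us-east' }
});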
Step 4: Add Teckel to Your Code
Here's a complete example showing how to integrate Teckel into an API endpoint:
import { TeckelTracer } from 'teckel-ai';
// Initialize once at application startup
const tracer = new TeckelTracer({
apiKey: process.env.TECKEL_API_KEY
});
// In your API handler
async function handleChatRequest(userQuestion: string, sessionId: string) {
// Start or continue a conversation
const conversation = tracer.start({
sessionRef: sessionId,
userRef: 'user@example.com' // optional
});
// Your existing RAG logic
const chunks = await vectorDB.search(userQuestion);
const answer = await llm.generate(userQuestion, chunks);
// Convert your chunk reference to Teckel's format
const documents = chunks.map((chunk, index) => ({
documentRef: chunk.id,
documentName: chunk.title,
documentText: chunk.content,
documentLastUpdated: chunk.lastModified,
sourceUri: chunk.url,
similarity: chunk.score,
rank: index
}));
// Send the trace (non-blocking, won't slow down your response)
conversation.trace({
query: userQuestion,
response: answer,
documents: documents,
model: 'gpt-5',
responseTimeMs: 1200
});
// Return your answer to the user
return answer;
}
For Non-RAG Systems
If you're not using retrieval (no cited knowledge base), simply omit the documents field:
conversation.trace({
query: userQuestion,
response: answer,
model: 'gpt-5'
});
You'll still get trace-level completeness scoring, topic consolidation, and the rest of the platform's functionality, though document-specific analytics won't be available.
For Serverless Environments
If you're using Vercel, AWS Lambda, or similar serverless platforms, call flush() before your handler returns to ensure traces aren't lost when your function terminates:
// Initialize tracer (same as long-running servers)
const tracer = new TeckelTracer({
apiKey: process.env.TECKEL_API_KEY
});
// In your handler
export async function handler(request) {
const conversation = tracer.start({ sessionRef: 'session-123' });
// Fire-and-forget trace (zero blocking)
conversation.trace({
query: userQuestion,
response: answer,
documents: documents
});
// CRITICAL: Flush before returning (ensures traces are sent)
// Recommended: 3–5 seconds for serverless
try {
await conversation.flush(5000);
} catch (err) {
// Timeout means the trace may be dropped; monitor
logger?.warn?.('Teckel flush timeout', { err });
}
return { statusCode: 200, body: answer };
}
Why flush()? Serverless functions terminate immediately after returning, which can cancel background HTTP requests. flush() performs a bounded wait for the send queue to drain. It throws on timeout so you can log and monitor potential drops. We recommend passing 3–5 seconds in serverless to ensure delivery without noticeably delaying responses.
Ending a Conversation
When a chat session is complete, you can end it explicitly. end() flushes any pending sends and then marks the conversation as ended:
await conversation.end();
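For example, a session-close hook might look like this sketch (onSessionClose is a hypothetical hook; wire it to however your app detects the end of a chat):

// Sketch: end the conversation when the chat session closes
async function onSessionClose(conversation) {
  // end() flushes pending sends, then marks the conversation as ended
  await conversation.end();
}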
Step 5: Verify It's Working
- Run a test query through your application
- Open your Teckel AI dashboard
- Navigate to Audit History
- Your trace should appear within a few seconds
If you don't see your trace:
- Check that your API key is correct
- Look for errors in your console logs
- Enable debug mode to see detailed logging:
const tracer = new TeckelTracer({
apiKey: process.env.TECKEL_API_KEY,
debug: true // Logs all SDK operations to console
});
- Verify your application can reach https://app.teckel.ai/api
- See the Troubleshooting Guide for more help
What Happens Next
Once integrated, Teckel AI automatically:
- Evaluates quality: Uses our Teckel Judge model to score every response
- Groups similar queries: Identifies common topics and patterns
- Detects gaps: Finds areas where documentation is missing or outdated
- Provides feedback: Suggests specific improvements to documents or retrieval
Check the Dashboard Guide to explore your insights, or read Core Concepts to understand how Teckel analyzes your data.
Need More Details?
- Complete technical documentation: TypeScript SDK Reference and HTTP API Reference
- Troubleshooting: Solutions to common integration issues
- Email support@teckel.ai for integration assistance