Troubleshooting Guide

This guide helps you diagnose and resolve common issues when integrating Teckel AI. Problems are organized by root cause for quick resolution.

Setup Issues

Invalid or Missing API Key

Symptom: Traces don't appear, or you see 401 Unauthorized errors.

Check your API key:

  • Starts with tk_live_
  • Correctly stored in environment variables
  • Not revoked in dashboard (Admin Panel > API Keys)

// Verify key is loaded
console.log('API key exists:', !!process.env.TECKEL_API_KEY);
console.log('Key prefix:', process.env.TECKEL_API_KEY?.substring(0, 8));
// Should log: "tk_live_"

For direct HTTP calls, ensure Bearer token format:

# Correct
Authorization: Bearer tk_live_abc123

# Incorrect (missing "Bearer")
Authorization: tk_live_abc123
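
In code, the same format with fetch (a minimal sketch reusing the conversations endpoint from the connectivity test later in this guide):

// Direct HTTP call with the correct Bearer format
const res = await fetch('https://app.teckel.ai/api/conversations', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.TECKEL_API_KEY}`
  },
  body: JSON.stringify({ sessionRef: 'session-123' })
});
console.log(res.status); // 401 here means the key is missing, revoked, or malformed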

Missing Required Fields

Symptom: Debug logs show validation errors, or traces silently fail.

All traces require query and response:

// This fails validation
conversation.trace({
  query: userQuestion
  // Missing: response
});

// This works
conversation.trace({
  query: userQuestion,
  response: aiAnswer
});

Enable Debug Mode First

Before troubleshooting further, enable debug mode to see what's happening:

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  debug: true // Shows all operations in console
});

Debug output includes:

  • SDK initialization and version
  • Conversation creation
  • Trace submission attempts with field counts
  • Network errors and validation warnings
  • Success/failure status

Example output:

[Teckel] SDK initialized: { endpoint: 'https://app.teckel.ai/api', version: '0.3.1' }
[Teckel] Conversation started: { sessionRef: 'session-123' }
[Teckel] Queueing trace (fire-and-forget): { sessionRef: 'session-123', turnNumber: 1, documentCount: 3 }
[Teckel] Trace send failed (non-blocking): HTTP 401: Unauthorized

Important: Disable debug mode in production to reduce log volume.
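
A common pattern is to key debug off the environment (a sketch, assuming the standard NODE_ENV convention):

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  debug: process.env.NODE_ENV !== 'production' // Verbose logs only outside production
});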

Serverless-Specific Issues

Traces Lost When Function Terminates

Symptom: Traces appear locally but not in production serverless (Vercel, Lambda, Cloud Run, Cloudflare Workers).

Root cause: Serverless functions terminate immediately after returning. Fire-and-forget traces are killed before they can send.

Solution: Call flush() before your handler returns and handle timeouts:

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  timeoutMs: 5000 // Must accommodate cold starts
});

export async function handler(request) {
  const conversation = tracer.start({ sessionRef: 'session-123' });

  const answer = await generateResponse(request.query);

  // Fire-and-forget (returns immediately)
  conversation.trace({
    query: request.query,
    response: answer,
    documents: retrievedDocs
  });

  // Wait for trace to send (serverless-safe)
  try {
    await conversation.flush(5000);
  } catch (err) {
    // Timeout means the trace may be dropped
    logger?.warn?.('Teckel flush timeout', { err });
  }

  return { statusCode: 200, body: answer };
}

Alternative:

  • await conversation.end() automatically flushes before ending the conversation (it will also throw on flush timeout).

When to use each:

  • flush(): For serverless, or when you want to continue the conversation but ensure traces are sent
  • end(): When conversation is complete (automatically flushes first)
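
For example, ending a completed conversation (a sketch mirroring the flush() error handling above):

// When the conversation is finished, end() flushes pending traces first
try {
  await conversation.end();
} catch (err) {
  // As with flush(), a timeout here means traces may be dropped
  logger?.warn?.('Teckel end timeout', { err });
}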


Cold Start Timeouts

Symptom: Traces work when function is warm, but fail on first request after deployment.

Root cause: Cold starts can take 1-10 seconds to establish an HTTP connection. If timeoutMs is too short, requests abort before the cold start completes.

Cold start times by platform:

  • Vercel Functions: 1-3 seconds
  • AWS Lambda: 1-5 seconds (up to 10s with large containers)
  • Google Cloud Run: 2-4 seconds
  • Cloudflare Workers: less than 500ms

Solution: Set timeout to accommodate cold starts:

// Default (works for most platforms)
const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  timeoutMs: 5000 // 5 seconds (default)
});

// For AWS Lambda with containers
const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  timeoutMs: 10000 // 10 seconds
});

// For Cloudflare Workers (fast cold starts)
const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  timeoutMs: 3000 // 3 seconds
});

Why timeoutMs matters: It controls how long SDK HTTP requests wait before aborting. Too short, and requests time out during cold starts; too long, and hung connections can block your queue. With one retry, worst-case time per send is approximately 2 * timeoutMs + ~300ms.
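
For example, with the default timeoutMs: 5000, a send that times out once and then fails on its retry takes roughly 2 * 5000 + 300 = 10,300ms before giving up.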

How to detect:

  • Enable debug mode and look for timeout errors
  • Check if traces appear only when function is warm
  • Monitor if first request after deployment always fails

Network Issues

Connectivity Test

Symptom: Traces never appear, even with valid API key.

Test if your application can reach Teckel's API:

# Should return 401 (proves API is reachable)
curl -X POST https://app.teckel.ai/api/conversations \
  -H "Content-Type: application/json" \
  -d '{"sessionRef": "test"}'

# Expected response:
# {"error":"Missing or invalid Authorization header"}

Common network blocks:

  • Corporate firewalls blocking outbound HTTPS
  • VPN or proxy configuration issues
  • DNS resolution failures
  • Regional network restrictions

High Latency Timeouts

Symptom: Traces succeed locally but fail in production, especially in distant regions.

Common scenarios:

  • Application deployed in distant region (e.g., Asia → US East Coast)
  • Poor network connectivity or high packet loss
  • Proxy/firewall adding significant latency

Solution: Increase timeout and enable debug to monitor actual latency:

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  timeoutMs: 10000, // Increase timeout
  debug: true // Monitor actual network performance
});

How to detect:

  • Debug logs show "Network retry" or timeout errors
  • Consistent failure in specific geographic regions
  • Traces work locally but not in production environment
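
To see the actual round-trip time from your deployment region, a rough probe (a sketch; a 401 response is fine since only the timing matters here):

// One-off round-trip timing to Teckel's API; run from the affected region
const start = Date.now();
await fetch('https://app.teckel.ai/api/conversations', { method: 'POST' });
console.log(`Round trip: ${Date.now() - start}ms`); // Compare against timeoutMs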

Runtime Compatibility

Node.js Version Too Old

Symptom:

TypeError: fetch is not a function

Cause: Node.js < 18 lacks native fetch support.

Solution: Upgrade to Node.js 18 or later:

node --version  # Should be v18.0.0 or higher

# Upgrade via nvm:
nvm install 18
nvm use 18
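
If you can't upgrade immediately, you can at least fail fast at startup rather than at the first trace (a defensive sketch):

// Detect an unsupported runtime before initializing the SDK
if (typeof fetch !== 'function') {
  throw new Error(`Native fetch not found; Node.js 18+ required (current: ${process.version})`);
}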

Deno Import Issues

Symptom: Module not found errors in Deno.

Solution: Use npm: specifier:

// Correct for Deno
import { TeckelTracer } from "npm:teckel-ai@^0.3.3";

// Incorrect
import { TeckelTracer } from "teckel-ai";

Bun Installation

Solution: Use Bun's package manager (don't mix with npm):

bun add teckel-ai

Data Quality Issues

No Freshness Scores

Symptom: Dashboard shows no freshness metrics.

Cause: Missing documentLastUpdated field.

Solution: Always include document timestamps:

// Missing timestamp (no freshness scoring)
const documents = [{
  documentRef: 'doc-123',
  documentName: 'Guide',
  documentText: 'Content here'
}];

// Correct (enables freshness scoring)
const documents = [{
  documentRef: 'doc-123',
  documentName: 'Guide',
  documentText: 'Content here',
  documentLastUpdated: '2025-01-15T10:00:00Z' // ISO 8601 format
}];

Format requirements:

  • Must be ISO 8601 format: "2025-01-15T10:00:00Z" or "2025-01-15T10:00:00-05:00"
  • Cannot be in the future
  • Should reflect actual last modification date
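
In JavaScript, Date.prototype.toISOString() always produces a compliant UTC timestamp. For example, deriving it from a file's modification time (a sketch with a hypothetical path):

import { stat } from 'node:fs/promises';

// Derive documentLastUpdated from a file's modification time
const { mtime } = await stat('docs/guide.md');
const documentLastUpdated = mtime.toISOString(); // e.g. '2025-01-15T10:00:00.000Z'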

Low Accuracy Scores

Common causes:

1. Document text mismatch - Send the exact chunk your LLM received, not a summary:

// Wrong
documentText: 'Summary: This doc explains password reset'

// Correct
documentText: 'To reset your password, navigate to Settings > Security > Reset Password...'

2. Incomplete document list - Include ALL chunks sent to the LLM:

// Wrong (only first chunk)
const documents = [chunks[0]];

// Correct (all chunks)
const documents = chunks.map((chunk, i) => ({
  documentRef: chunk.id,
  documentName: chunk.title,
  documentText: chunk.content,
  rank: i
}));

3. Timing issues - Only trace AFTER the LLM completes:

// Wrong
conversation.trace({ query, response: 'pending...', documents });
const answer = await llm.generate(query, chunks);

// Correct
const answer = await llm.generate(query, chunks);
conversation.trace({ query, response: answer, documents });

Inconsistent Document References

Symptom: Documents referenced in traces, but no scores in dashboard.

Cause: Inconsistent casing or formatting breaks aggregation:

// Trace 1
documentRef: 'doc-123'

// Trace 2
documentRef: 'DOC-123' // Different casing = different document!

// Solution: Use consistent IDs
documentRef: 'doc-123' // Always same format/casing
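
One way to enforce this is to normalize refs in a single place before tracing (a hypothetical normalizeRef helper):

// Hypothetical helper: normalize refs once so aggregation keys always match
const normalizeRef = (ref) => ref.trim().toLowerCase();

const documents = chunks.map((chunk, i) => ({
  documentRef: normalizeRef(chunk.id), // 'DOC-123 ' -> 'doc-123'
  documentName: chunk.title,
  documentText: chunk.content,
  rank: i
}));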

Dashboard & Processing

Topics Not Appearing

Symptom: Traces appear immediately, but no topics generated.

Requirements:

  • Minimum 10-25 queries (more data = better clustering)
  • Similar queries must exist to form meaningful groups
  • Processing time: 10 minutes to 1 hour after sufficient data

Timeline:

  • Traces appear: Immediately (within seconds)
  • Topics appear: 10 minutes to 1 hour after enough data accumulates

Document Scores Missing

Symptom: Documents appear in traces, but no aggregate scores.

Causes:

  • Processing delay: Scores appear after aggregation (10-60 minutes)
  • Inconsistent document IDs (see above)
  • Not enough traces using those documents

API Error Responses

Rate Limiting (HTTP 429)

Response:

{
  "error": "Rate limit exceeded",
  "retry_after": 60
}

Default limits:

  • 1000 requests per hour per organization
  • Resets on the hour (e.g., 10:00:00, 11:00:00)

The SDK retries automatically; you don't need to implement your own retry logic.

Solutions:

  • Wait for rate limit to reset (check retry_after seconds)
  • Contact support@teckel.ai for limit increases
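
If you call the HTTP API directly instead of using the SDK, a minimal sketch that honors retry_after (hypothetical postWithRetry helper):

// Retry a direct HTTP call once after a 429, waiting the server-specified interval
async function postWithRetry(url, options) {
  const res = await fetch(url, options);
  if (res.status !== 429) return res;
  const { retry_after = 60 } = await res.json();
  await new Promise((resolve) => setTimeout(resolve, retry_after * 1000));
  return fetch(url, options);
}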

Validation Errors (HTTP 400)

Common failures:

// Missing required field
conversation.trace({ query: 'test' });
// Error: response is required

// Field too long
conversation.trace({ query: veryLongString, response: answer });
// Error: query too long (max 10,000 characters)

// Invalid document structure
documents: [{ documentRef: 'doc-123' }]
// Error: documentName is required, documentText is required

// Invalid data types
responseTimeMs: -100
// Error: responseTimeMs must be positive
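
A client-side pre-check can catch these before they reach the API (a hypothetical guard; the 10,000-character limit comes from the error above):

// Hypothetical guard: enforce required fields and the query length limit client-side
function safeTracePayload(query, response) {
  if (!query || !response) {
    throw new Error('query and response are both required');
  }
  return { query: query.slice(0, 10000), response };
}

conversation.trace(safeTracePayload(userQuestion, aiAnswer));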

Testing Checklist

Before deploying to production:

1. Test with debug mode

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY,
  debug: true
});

2. Verify trace appears in dashboard

  • Send test query
  • Check Audit History within 30 seconds
  • Confirm all fields populated correctly

3. Test serverless configuration

const conversation = tracer.start({ sessionRef: 'test-123' });
conversation.trace({ query, response, documents });
await conversation.flush(5000);
// Check debug logs for successful send

4. Test error handling

  • Use invalid API key (should log warning but not crash)
  • Send trace with missing fields (should log validation error)
  • Verify application continues running normally

5. Monitor in production

const result = conversation.trace(data);
if (result?.traceRef) {
  console.log(`Trace sent: ${result.traceRef}`);
}

await conversation.flush(5000); // For serverless

Getting Help

1. Check documentation: review the relevant section of this guide for your symptom.

2. Contact support: email support@teckel.ai with:

  • Organization name
  • Error messages from debug mode
  • Example trace payload that's failing
  • Environment details (Node.js version, platform)
  • Time when issue occurred (with timezone)

Response time: Within 24 hours on business days.