
Getting Started with Teckel AI

Welcome to Teckel AI! This guide will help you set up your account, get your API key, and send your first trace to start improving your AI system's quality.

What You'll Do

  1. Create your account
  2. Choose your plan
  3. Get your API key
  4. Choose your integration path
  5. Send your first trace
  6. View results in the dashboard
  7. Explore platform features

1. Create Your Account

  1. Visit app.teckel.ai/auth/get-started
  2. Sign up with your work email
  3. Verify your email address
  4. Complete your organization profile

Free Trial: All new accounts include a 14-day free trial with up to 10,000 traces. No credit card required.

2. Choose Your Plan

All accounts start with a 14-day free trial (10,000 traces, no credit card required).

Plan         Price      Traces    Key Features
Starter      $79/mo     25k       Evaluators, patterns, topics
Growth       $249/mo    100k      + Slack alerts, Google Drive
Enterprise   Custom     Custom    + SLA, unlimited seats

See Pricing and Plans for full details.

3. Get Your API Key

Once logged in:

  1. Navigate to Admin Panel > API Keys
  2. Click Generate New Key
  3. Copy your key immediately (starts with tk_live_)
  4. Store it securely in your environment variables
# .env file
TECKEL_API_KEY=tk_live_your_key_here
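
If you load a .env file during local development, a loader such as dotenv is one option (an assumption here; the SDK only requires that the variable be set in the environment). A minimal sketch:

// Optional: load .env in local development (dotenv is an assumption, not a Teckel requirement)
import 'dotenv/config';

// Fail fast if the key is missing so traces don't go out unauthenticated
const apiKey = process.env.TECKEL_API_KEY;
if (!apiKey) {
  throw new Error('TECKEL_API_KEY is not set');
}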

Security Note: Never commit API keys to version control or expose them in client-side code. API keys are account-level secrets.

4. Choose Your Integration Path

TypeScript SDK

Best for Node.js, Deno, Bun, and serverless environments. The SDK handles retries, errors, and timeouts automatically.

npm install teckel-ai

See TypeScript SDK Reference for complete documentation.

HTTP API (All Languages)

For Python, Go, Ruby, Java, or any language with HTTP support.

# Test connection
curl -X POST https://app.teckel.ai/api/sdk/traces \
  -H "Authorization: Bearer tk_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{"traces": [{"query": "test", "response": "test"}]}'

See HTTP API Reference for complete documentation.

5. Send Your First Trace

Using the SDK (Minimal Example)

import { TeckelTracer } from 'teckel-ai';

const tracer = new TeckelTracer({
  apiKey: process.env.TECKEL_API_KEY
});

// Minimal trace - just query and response
tracer.trace({
  query: "How do I reset my password?",
  response: "Go to Settings > Security > Reset Password and follow the prompts."
});

// For serverless (Lambda, Vercel, etc.) - wait for send to complete
await tracer.flush(5000);

Using HTTP API (Minimal Example)

curl -X POST https://app.teckel.ai/api/traces \
  -H "Authorization: Bearer tk_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "How do I reset my password?",
    "response": "Go to Settings > Security > Reset Password and follow the prompts."
  }'

Teckel will automatically evaluate the response quality and store the trace.

6. View Your First Trace

  1. Open app.teckel.ai
  2. Navigate to Traces page
  3. Your trace appears within seconds, with quality scores if evaluators are enabled

7. Explore the Platform

Dashboard

Get an overview of:

  • Total traces and quality trends
  • Active topics and emerging issues
  • Document performance
  • User feedback signals

See Platform Guide for details.

Topics

View queries automatically grouped by topic:

  • See what users ask about most
  • Identify poorly performing topics
  • Track topic-level quality trends

Documents

If you send document metadata with traces (RAG systems):

  • Track which documents support accurate answers
  • Get freshness alerts for outdated content
  • See document usage and impact

Next Steps: Enhanced Integration

Now that you've sent your first trace, enhance your integration to unlock more insights. The fields below add context to each trace; a combined example follows the lists.

Add These Fields for Better Analytics

Session grouping:

  • sessionId - Group related queries into conversations (any string up to 200 chars)
  • userId - Track per-user metrics

LLM context:

  • model - LLM model name
  • latencyMs - Response time
  • systemPrompt - LLM system instructions (helps understand AI behavior)

Document analytics (for RAG systems):

  • documents - Array of retrieved documents with metadata
  • Enables document performance and freshness analysis

Agent analytics (for multi-step workflows):

  • spans - Array of operation spans for agent workflows (see OpenTelemetry Integration)
  • agentName - Identifier for the agent/workflow that processed this trace
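
A combined sketch using these fields with the SDK (the field names come from the lists above; the shapes of the documents and spans entries are illustrative assumptions, so check the SDK reference for the exact schema):

// Enriched trace - field names from the lists above; object shapes are assumptions
tracer.trace({
  query: "How do I reset my password?",
  response: "Go to Settings > Security > Reset Password and follow the prompts.",

  // Session grouping
  sessionId: "session-1234",   // any string up to 200 chars
  userId: "user-5678",

  // LLM context
  model: "gpt-4o",             // example model name
  latencyMs: 840,
  systemPrompt: "You are a helpful support assistant.",

  // Document analytics (RAG) - entry shape is a hypothetical example
  documents: [
    { id: "kb-password-reset", title: "Password Reset Guide" }
  ],

  // Agent analytics (multi-step workflows)
  agentName: "support-agent",
  // spans: [...]              // see OpenTelemetry Integration for the span format
});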

Complete Technical Documentation

Recommended reading order:

  1. TypeScript SDK Reference - Complete API reference
  2. OpenTelemetry Integration - Span collection for AI SDK

Other Languages:

Platform Guides:

Need Help?