HTTP API Reference

REST API for Teckel AI. This is the canonical field reference for all integrations.

Using TypeScript? The SDK wraps this API with retries, batching, and error handling.

Authentication

All requests must include your API key as a bearer token in the Authorization header:

Authorization: Bearer tk_live_your_key_here

Get API key: Dashboard → Admin Panel → API Keys

Base URL

https://app.teckel.ai/api
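
A client can configure the key and base URL once on a shared session. The sketch below uses Python's requests library; the key is the placeholder shown above.

import requests

BASE_URL = "https://app.teckel.ai/api"

session = requests.Session()
session.headers.update({
    "Authorization": "Bearer tk_live_your_key_here",  # placeholder key
    "Content-Type": "application/json",
})

# Endpoint paths combine with the base URL,
# e.g. POST https://app.teckel.ai/api/sdk/traces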

Response Codes

| Code | Meaning | Action |
|---|---|---|
| 200 | Success | Request accepted |
| 400 | Bad Request | Check validation errors |
| 401 | Unauthorized | Verify API key |
| 429 | Rate Limited | Implement backoff |
| 500 | Server Error | Contact support |
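
A minimal sketch of how a client might branch on these codes when posting traces; the payload here is illustrative only:

import requests

resp = requests.post(
    "https://app.teckel.ai/api/sdk/traces",
    headers={"Authorization": "Bearer tk_live_your_key_here"},
    json={"traces": [{"query": "Hi", "response": "Hello!"}]},  # illustrative payload
)

if resp.status_code == 200:
    print(resp.json())               # request accepted
elif resp.status_code == 400:
    print(resp.json()["details"])    # check validation errors
elif resp.status_code == 401:
    print("Verify the API key")
elif resp.status_code == 429:
    print("Rate limited; implement backoff (see Rate Limits below)")
else:
    resp.raise_for_status()          # 500: contact support if it persists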

Endpoints

POST /api/sdk/traces

Submit traces (1-100 per request).

Request:

{
  "traces": [
    {
      "query": "How do I reset my password?",
      "response": "Go to Settings → Security → Reset Password...",
      "sessionId": "chat-session-42",
      "userId": "user@example.com",
      "model": "gpt-5",
      "latencyMs": 1250,
      "systemPrompt": "You are a helpful assistant...",
      "tokens": {
        "prompt": 324,
        "completion": 89,
        "total": 413
      },
      "costUsd": 0.0042,
      "documents": [{
        "id": "password-reset-guide.md",
        "name": "Password Reset Guide",
        "text": "To reset your password, navigate to Settings...",
        "url": "https://kb.example.com/security",
        "similarity": 0.92,
        "rank": 0
      }],
      "spans": [{
        "name": "llm.complete",
        "type": "llm_call",
        "startedAt": "2025-01-15T10:00:00.000Z",
        "endedAt": "2025-01-15T10:00:01.250Z",
        "model": "gpt-5",
        "promptTokens": 324,
        "completionTokens": 89,
        "costUsd": 0.0042,
        "status": "completed"
      }],
      "metadata": {
        "ticket_ref": "SUPPORT-12345"
      }
    }
  ]
}

Trace Fields:

| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
| query | string | Yes | 1-10,000 chars | User's question |
| response | string | Yes | 1-50,000 chars | AI's answer |
| traceId | string (UUID) | No | UUID v4 | Trace identifier (auto-generated if omitted) |
| sessionId | string | No | 1-200 chars | Session identifier for grouping traces |
| userId | string | No | 1-255 chars | End-user identifier |
| documents | array | Recommended | Max 15 | Retrieved chunks (RAG) |
| spans | array | No | Max 100 | Spans for detailed tracing |
| model | string | No | 1-100 chars | LLM model (e.g., "gpt-5") |
| latencyMs | number | No | Positive int | Response time |
| tokens | object | No | - | { prompt, completion, total }; auto-calculated from spans if omitted |
| costUsd | number | No | Non-negative | Cost in USD (auto-calculated from tokens if omitted) |
| systemPrompt | string | No | 1-50,000 chars | LLM system prompt |
| metadata | object | No | - | Custom correlation IDs |
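
Only query and response are required; everything else is optional enrichment. A minimal trace, with optional fields layered on top, might look like this sketch:

minimal_trace = {
    "query": "How do I reset my password?",
    "response": "Go to Settings → Security → Reset Password...",
}

enriched_trace = {
    **minimal_trace,
    "sessionId": "chat-session-42",               # groups related traces
    "userId": "user@example.com",
    "model": "gpt-5",
    "latencyMs": 1250,
    "metadata": {"ticket_ref": "SUPPORT-12345"},  # custom correlation ID
}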

Document Fields:

| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
| id | string | Yes | 1-500 chars | Your document identifier |
| name | string | Yes | 1-500 chars | Human-readable name |
| text | string | Yes | 1-50,000 chars | Chunk content sent to LLM |
| lastUpdated | string | Recommended | ISO 8601 | Last modified (for freshness analysis) |
| url | string | No | 1-2000 chars | Link to source document |
| source | string | No | 1-100 chars | Platform: 'confluence', 'slack', 'gdrive' |
| fileFormat | string | No | 1-100 chars | Format: 'pdf', 'md', 'docx' |
| similarity | number | No | 0.0-1.0 | Relevance score |
| rank | number | No | Non-negative | Position (0 = first) |
| ownerEmail | string | No | Valid email | Document owner |
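
A document entry that includes the recommended lastUpdated field alongside the optional provenance fields might look like the following sketch; the values are illustrative:

document = {
    "id": "password-reset-guide.md",
    "name": "Password Reset Guide",
    "text": "To reset your password, navigate to Settings...",
    "lastUpdated": "2025-01-10T08:30:00Z",  # ISO 8601, enables freshness analysis
    "url": "https://kb.example.com/security",
    "source": "confluence",
    "fileFormat": "md",
    "similarity": 0.92,
    "rank": 0,
    "ownerEmail": "docs-team@example.com",
}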

Span Fields:

| Field | Type | Required | Constraints | Description |
|---|---|---|---|---|
| name | string | Yes | 1-500 chars | Span name |
| startedAt | string | Yes | ISO 8601 | Start timestamp |
| type | string | No | See below | Span type (default: "custom") |
| spanId | string | No | UUID v4 | Span ID (auto-generated) |
| parentSpanId | string | No | UUID v4 | Parent span for nesting |
| endedAt | string | No | ISO 8601 | End timestamp |
| durationMs | number | No | Positive int | Duration in ms |
| status | string | No | running, completed, error | Span status |
| statusMessage | string | No | 1-2000 chars | Error message |
| model | string | No | 1-100 chars | Model name (llm_call) |
| promptTokens | number | No | Non-negative | Input tokens |
| completionTokens | number | No | Non-negative | Output tokens |
| costUsd | number | No | Non-negative | Cost in USD for this span |
| toolName | string | No | 1-200 chars | Tool name (tool_call) |
| toolArguments | object | No | Max 500 KB | Tool input |
| toolResult | object | No | Max 500 KB | Tool output |
| input | object | No | Max 500 KB | Generic input (non-tool spans) |
| output | object | No | Max 500 KB | Generic output (non-tool spans) |
| metadata | object | No | Max 10 KB | Custom metadata |
Span Types: llm_call, tool_call, retrieval, agent, guardrail, custom
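
Nesting is expressed through parentSpanId. As a sketch, a client can generate span IDs locally (UUID v4) and point child spans at the enclosing span; the timestamps and token counts below are illustrative:

import uuid

agent_span_id = str(uuid.uuid4())

spans = [
    {
        "spanId": agent_span_id,
        "name": "agent.answer_question",
        "type": "agent",
        "startedAt": "2025-01-15T10:00:00.000Z",
        "endedAt": "2025-01-15T10:00:01.250Z",
        "status": "completed",
    },
    {
        "spanId": str(uuid.uuid4()),
        "parentSpanId": agent_span_id,  # nests under the agent span
        "name": "kb.search",
        "type": "retrieval",
        "startedAt": "2025-01-15T10:00:00.100Z",
        "endedAt": "2025-01-15T10:00:00.400Z",
        "status": "completed",
    },
    {
        "spanId": str(uuid.uuid4()),
        "parentSpanId": agent_span_id,
        "name": "llm.complete",
        "type": "llm_call",
        "startedAt": "2025-01-15T10:00:00.400Z",
        "endedAt": "2025-01-15T10:00:01.250Z",
        "model": "gpt-5",
        "promptTokens": 324,
        "completionTokens": 89,
        "status": "completed",
    },
]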

Response:

{
  "traceIds": ["550e8400-e29b-41d4-a716-446655440000"],
  "successCount": 1
}

cURL:

curl -X POST https://app.teckel.ai/api/sdk/traces \
  -H "Authorization: Bearer tk_live_your_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "traces": [{
      "query": "How do I reset my password?",
      "response": "Go to Settings → Security...",
      "sessionId": "chat-session-42",
      "model": "gpt-5"
    }]
  }'

Python:

import requests

response = requests.post(
    'https://app.teckel.ai/api/sdk/traces',
    headers={
        'Authorization': 'Bearer tk_live_your_key_here',
        'Content-Type': 'application/json'
    },
    json={
        'traces': [{
            'query': 'How do I reset my password?',
            'response': 'Go to Settings → Security...',
            'sessionId': 'chat-session-42',
            'model': 'gpt-5',
            'documents': [{
                'id': 'password-reset-guide.md',
                'name': 'Password Reset Guide',
                'text': 'To reset your password...'
            }]
        }]
    }
)

result = response.json()
print(f"Traces: {result['traceIds']}")

POST /api/sdk/feedback

Submit user feedback.

Request:

{
  "traceId": "550e8400-e29b-41d4-a716-446655440000",
  "type": "thumbs_down",
  "comment": "Information was outdated"
}

Field Reference:

| Field | Type | Required | Description |
|---|---|---|---|
| traceId | string (UUID) | One required | Target trace |
| sessionId | string | One required | Target session (1-200 chars) |
| type | string | Yes | thumbs_up, thumbs_down, flag, rating |
| value | string | No | For ratings: "1" to "5" |
| comment | string | No | User's explanation (1-2,000 chars) |

Provide either traceId or sessionId to identify the target.

Response:

{
  "feedbackId": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "createdAt": "2025-01-15T10:00:10Z"
}
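
Python (the same request as above; the trace ID is the placeholder value from the trace example):

import requests

response = requests.post(
    'https://app.teckel.ai/api/sdk/feedback',
    headers={
        'Authorization': 'Bearer tk_live_your_key_here',
        'Content-Type': 'application/json'
    },
    json={
        'traceId': '550e8400-e29b-41d4-a716-446655440000',
        'type': 'thumbs_down',
        'comment': 'Information was outdated'
    }
)

print(response.json()['feedbackId'])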

Rate Limits

  • 1,000 requests/hour per organization (trial), 10,000/hour (paid)
  • Each trace in a batch counts toward the limit
  • Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset

429 Response:

{
  "error": "Rate limit exceeded",
  "retry_after": 60
}
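
A minimal backoff sketch, assuming retry_after is expressed in seconds and falling back to exponential delays when the body cannot be parsed:

import time
import requests

def post_with_backoff(url, payload, headers, max_attempts=5):
    """Retry on 429, honoring the server-provided retry_after when present."""
    for attempt in range(max_attempts):
        resp = requests.post(url, json=payload, headers=headers)
        if resp.status_code != 429:
            return resp
        try:
            delay = resp.json().get("retry_after", 2 ** attempt)
        except ValueError:
            delay = 2 ** attempt
        remaining = resp.headers.get("X-RateLimit-Remaining")
        print(f"Rate limited (remaining={remaining}); sleeping {delay}s")
        time.sleep(delay)
    raise RuntimeError("Rate limit retries exhausted")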

Error Handling

Validation (400):

{
  "error": "Validation failed",
  "details": ["query: Required", "documents[0].text: Required"]
}

Authentication (401):

{
  "error": "Invalid or missing API key"
}
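
One way to surface both error shapes client-side; this helper is a sketch, not part of any SDK:

import requests

def check_response(resp: requests.Response) -> dict:
    """Raise a descriptive error for 400/401, otherwise return the parsed body."""
    if resp.status_code == 400:
        details = resp.json().get("details", [])
        raise ValueError("Validation failed: " + "; ".join(details))
    if resp.status_code == 401:
        raise PermissionError(resp.json().get("error", "Invalid or missing API key"))
    resp.raise_for_status()  # other non-2xx codes
    return resp.json()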

Size Limits

| Field | Limit |
|---|---|
| query | 10,000 chars |
| response | 50,000 chars |
| systemPrompt | 50,000 chars |
| documents | 15 max |
| document.text | 50,000 chars |
| spans | 100 max |
| toolArguments | 500 KB |
| toolResult | 500 KB |
| metadata | 10 KB |
| Total trace | 3 MB |
| Request | 100 traces, 5 MB |
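
Because a single request accepts at most 100 traces (and 5 MB), clients buffering more than that need to split submissions. A minimal chunking sketch:

def chunk_traces(traces, batch_size=100):
    """Yield batches that respect the 100-traces-per-request limit."""
    for i in range(0, len(traces), batch_size):
        yield traces[i:i + batch_size]

# Usage (session and BASE_URL from the Authentication sketch above):
# for batch in chunk_traces(buffered_traces):
#     session.post(f"{BASE_URL}/sdk/traces", json={"traces": batch})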

See also: TypeScript SDK | OpenTelemetry Integration