Glossary of Terms
Here is a brief glossary of the key terms and concepts you'll encounter while using Teckel AI.
Accuracy
The accuracy metric measures how well the AI's response is grounded in the provided source documents. Calculated as the ratio of supported claims to total claims extracted from the response. A score of 1.0 means all factual claims are fully supported by the sources.
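As a sketch, the ratio works out like this (the Claim shape and scoring function are illustrative, not Teckel's internals):

```typescript
// Illustrative only: Teckel's actual claim extraction is more involved.
interface Claim {
  text: string;
  supported: boolean; // true if a source chunk backs this claim
}

// Accuracy = supported claims / total claims (1.0 when nothing is claimed).
function accuracyScore(claims: Claim[]): number {
  if (claims.length === 0) return 1.0;
  return claims.filter((c) => c.supported).length / claims.length;
}

// Three of four claims are backed by sources, so the score is 0.75.
const example: Claim[] = [
  { text: "Plan A costs $10/mo", supported: true },
  { text: "Plan A includes SSO", supported: true },
  { text: "Refunds take 5 days", supported: true },
  { text: "Support is 24/7", supported: false },
];
```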
Agent / AI Agent
An autonomous AI system that can use tools, make decisions, and execute multi-step workflows. Agents go beyond simple question-answering to perform complex tasks like searching databases, calling APIs, and coordinating multiple operations. Teckel captures agent workflows as spans within traces, providing visibility into each step of execution.
AI Audit / LLM-as-a-Judge
The process of automatically reviewing an AI's answer using another AI model, also referred to as an LLM-as-a-Judge system. The audit produces a result with scores and any quality issues identified in the original answer.

API Key
A secure credential used to authenticate your application with Teckel AI. API keys are prefixed with tk_live_ and should be kept secret. We store only hashed versions of keys for security.
Chunking
Breaking documents into smaller pieces for AI to process. This makes it easier for AI systems to find and use the most relevant information when answering questions. Commonly used in RAG systems.
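A minimal fixed-size chunker with overlap gives the flavor (real pipelines often split on sentences or headings instead; the sizes here are arbitrary):

```typescript
// Split text into fixed-size windows that overlap, so context isn't
// lost entirely at chunk boundaries. Sizes are illustrative.
function chunkText(text: string, size = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```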
Classifier
A small machine learning model trained to categorize traces into specific topics. Classifiers are trained by labeling positive and negative examples, and once deployed, they automatically classify incoming traces. Classifiers power the Topics feature and can be used as conditions in Patterns.
Condition
A rule within a Pattern that determines which traces trigger the pattern. Conditions can be based on evaluator scores (e.g., completeness < 0.7), topic classifications (e.g., Topic = "Billing"), or operational metrics (e.g., latency > 5000ms). Multiple conditions can be combined with AND/OR logic.
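One way to picture conditions (the shapes below are illustrative, not Teckel's actual schema) is a list of checks evaluated against each trace:

```typescript
// Illustrative condition shapes -- not Teckel's actual schema.
type Condition =
  | { kind: "score"; metric: string; op: "<" | ">"; value: number }
  | { kind: "topic"; equals: string }
  | { kind: "latencyMs"; op: "<" | ">"; value: number };

interface TraceData {
  scores: Record<string, number>;
  topic: string;
  latencyMs: number;
}

function matches(c: Condition, t: TraceData): boolean {
  switch (c.kind) {
    case "score": {
      const v = t.scores[c.metric] ?? 0;
      return c.op === "<" ? v < c.value : v > c.value;
    }
    case "topic":
      return t.topic === c.equals;
    case "latencyMs":
      return c.op === "<" ? t.latencyMs < c.value : t.latencyMs > c.value;
  }
}

// AND logic: every condition must hold (use .some() for OR).
function patternFires(conds: Condition[], t: TraceData): boolean {
  return conds.every((c) => matches(c, t));
}
```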
Connectors
Integrations that connect Teckel AI to external services. Currently supported connectors include Slack (for daily recap notifications) and Google Drive (for knowledge base source verification). Connectors are configured in the Admin settings.
Claims Analysis
The process of breaking down AI responses into individual factual statements and mapping them to supporting source chunks. This provides transparency into which claims are supported by evidence and which are potentially hallucinated.
Completeness
Our core metric. A measure (0-1.0) of how well the AI's response addresses the user's specific question. This evaluates whether the agent provided helpful information that directly answers what was asked. A score of 1.0 means the response directly and comprehensively addresses the question.
Daily Recaps
Automated Slack notifications summarizing your AI system's performance. Recaps highlight trending topics, low-scoring areas, and topics you're actively tracking. Configured via the Slack connector in Admin settings.
Document Freshness
A measure of the age of documents retrieved by your system. If five documents were cited for a given user question, this metric shows their average age. Used to find stale documentation in a given subject area.
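Sketched concretely (the date fields and helper are assumptions for illustration, not Teckel's internals):

```typescript
// Average age, in days, of the documents cited for one question.
function avgAgeDays(askedAt: Date, docUpdatedAt: Date[]): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  const totalDays = docUpdatedAt.reduce(
    (sum, d) => sum + (askedAt.getTime() - d.getTime()) / msPerDay,
    0,
  );
  return totalDays / docUpdatedAt.length;
}
```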
Embeddings / Vectorization
The process of converting text into numerical representations (vectors) that capture semantic meaning. Teckel AI uses embeddings to analyze query patterns and identify similar questions in the Topics functionality.
Evaluator
A custom LLM-based metric that measures AI response quality. Evaluators consist of a prompt template with variables (like {query} and {response}), an output type (numeric, boolean, or string), and configuration options like sample rate and model selection. Evaluators power the metrics displayed throughout the dashboard and can be used as conditions in Patterns.
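The {query}/{response} substitution can be pictured like this (renderTemplate is an illustrative helper, not part of the product):

```typescript
// Fill {name} placeholders in a prompt template from trace data.
// Unknown placeholders are left intact.
function renderTemplate(
  template: string,
  vars: Record<string, string>,
): string {
  return template.replace(/\{(\w+)\}/g, (m, name) => vars[name] ?? m);
}

const template =
  "Rate 0-1 how completely this answers the question.\n" +
  "Question: {query}\nAnswer: {response}";

const prompt = renderTemplate(template, {
  query: "How do I rotate an API key?",
  response: "Go to Admin settings and click Regenerate.",
});
```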
Feedback Loop
The cycle of improvement enabled by Teckel AI: user questions reveal gaps, Teckel identifies patterns, subject matter experts create better documentation, and AI responses improve. This closes the gap between what users need and what your knowledge base provides.
Hallucination
When AI makes up information that wasn't in its sources. This is one of the main problems Teckel AI helps you detect and prevent by analyzing whether claims in your AI responses are actually supported by your documents.
Human-in-the-Loop
Having people review and improve AI responses to ensure quality. Teckel AI helps identify which areas need human review by flagging low-scoring topics and common issues.
Hybrid Search
Using both keyword matching and meaning-based search together. This combines the precision of exact word matches with the flexibility of semantic understanding to find the most relevant information.
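A toy blend shows the idea — production systems typically combine BM25 with vector similarity, and the 50/50 weighting here is arbitrary:

```typescript
// Toy hybrid ranking: blend keyword overlap with a precomputed
// semantic-similarity score.
function keywordScore(query: string, doc: string): number {
  const qTerms = new Set(query.toLowerCase().split(/\s+/));
  const dTerms = doc.toLowerCase().split(/\s+/);
  const hits = dTerms.filter((t) => qTerms.has(t)).length;
  return dTerms.length === 0 ? 0 : hits / dTerms.length;
}

// alpha weights keyword precision against semantic flexibility.
function hybridScore(kw: number, semantic: number, alpha = 0.5): number {
  return alpha * kw + (1 - alpha) * semantic;
}
```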
Interventions
Actions taken to improve AI performance based on Teckel feedback. This includes linking new documents to topics, updating existing content, or adjusting retrieval settings. Interventions are tracked over time to measure their impact.
Knowledge Base
The collection of documents, FAQs, policies, and other content that your AI system uses to answer questions. Teckel AI helps you understand which parts of your knowledge base are working well and which need improvement.
Knowledge Verification
An AI agent that searches your connected knowledge sources (like Google Drive) to check if documentation already exists for identified gaps. This helps determine whether the issue is missing content or a retrieval problem where existing docs aren't being found.
LLM (Large Language Model)
The AI model (like GPT-5, Claude, etc.) that generates answers to user questions. Teckel AI is model-agnostic and observes the input and output of any LLM you currently use.
Organization
Your team or company account in Teckel AI. All data, API keys, and settings are scoped to your organization. Team members can be invited with different permission levels (owner or member).
Pattern
A Sentry-like issue detection mechanism for AI systems. Patterns consist of conditions (based on evaluator scores, topic classifications, or operational metrics) that identify problematic traces. When conditions are met, the pattern becomes "active" and groups related traces together. Patterns include AI-generated feedback explaining the issue and suggested action items. Patterns transition through states: active (issue ongoing), resolved (issue fixed), and archived (no longer tracking).
Precision
A measure (0-1.0) of how relevant the retrieved document chunks are to answering the user's question. Calculated as the ratio of relevant chunks to total chunks retrieved. A score of 1.0 means all retrieved chunks were directly useful for answering the question.
Prompt
The question or additional instructions given to an AI model. In Teckel AI's context, this is typically the user's query that your RAG system is trying to answer.
RAG (Retrieval-Augmented Generation)
A technique where an LLM is augmented with retrieved context from a knowledge base. Teckel AI provides first-class analytics for RAG systems, including document freshness, retrieval precision, and source verification. It also supports agent workflows with spans, tool calls, and multi-step trace analysis.
Retrieval
Finding and fetching relevant information from your documents. This is the first step in RAG systems where the most relevant chunks are selected to help answer a user's question.
SDK (Software Development Kit)
The teckel-ai npm package that provides a simple, non-blocking interface for integrating Teckel AI into your application. The SDK handles trace submission, error handling, and authentication.
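A hedged usage sketch — the method and field names below are assumptions for illustration; consult the teckel-ai package documentation for the real interface:

```typescript
// Hypothetical client shape; the real SDK's API may differ.
interface TraceClient {
  trace(payload: Record<string, unknown>): Promise<void>;
}

// Submit a trace after responding to the user, so logging never blocks
// the user-facing answer.
async function logInteraction(client: TraceClient): Promise<void> {
  await client.trace({
    query: "What is our refund policy?",
    response: "Refunds are available within 30 days.",
    sessionId: "session-123", // groups related traces (see Session)
  });
}
```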
Semantic Clustering
The process of grouping similar queries together based on their meaning rather than exact keywords. This helps identify patterns in what users are asking and where documentation might be lacking.
Semantic Search / Vector Search
Finding related content based on meaning and context rather than exact keyword matches. Most AI chat applications use semantic search for retrieval. Teckel AI uses this to analyze query patterns and identify similar questions across your trace history.
Session
A grouping key for traces that can correspond to a user's chat session or any other logical grouping of queries. This allows for filtering and aggregation of data by conversation.
Span
A segment within a trace representing a discrete operation such as a tool call, retrieval step, or LLM invocation. Spans enable detailed performance analysis of agent workflows by capturing timing, inputs, and outputs for each step. Multiple spans combine to form a complete trace of an agent's execution path.
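The trace/span relationship can be pictured with illustrative types (field names are assumptions, not Teckel's wire format):

```typescript
// Illustrative shapes -- not Teckel's actual wire format.
interface Span {
  name: string;    // e.g. "retrieval", "tool:search_orders", "llm"
  startMs: number; // offsets from trace start
  endMs: number;
}

interface AgentTrace {
  traceId: string;
  spans: Span[];
}

// Sum wall-clock time per step name to see where an agent spends time.
function spanDurations(trace: AgentTrace): Record<string, number> {
  const out: Record<string, number> = {};
  for (const s of trace.spans) {
    out[s.name] = (out[s.name] ?? 0) + (s.endMs - s.startMs);
  }
  return out;
}
```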
Slack Integration
Connect Teckel AI to Slack to receive daily recap notifications about your AI system's performance. The integration sends summaries of trending topics, low-scoring areas, and tracked topics to a channel of your choice.
Subject Matter Expert (SME)
The person or team with domain expertise who creates and maintains your knowledge base content. Teckel AI helps SMEs understand exactly what information users need, so they can create targeted documentation that improves AI responses.
Source / Document Chunk
In Teckel AI's context, a "source" is a piece of content used to derive an answer, such as a paragraph from a document.
Teckel Analyst
Teckel's proprietary feedback system for debugging AI agents. Available in the web platform and via CLI for terminal-based workflows. Teckel Analyst helps identify issues in agent behavior, suggests improvements, and explains why an agent made specific decisions. It provides actionable insights for improving agent performance across your system.
Teckel Evaluation Engine
The system that runs evaluators on incoming traces. When a trace is ingested, the evaluation engine executes configured evaluators by substituting trace data into prompt templates and parsing LLM responses. Results are attached to traces and power metrics, patterns, and reporting throughout the platform.
Token
A piece of text (like a word or part of a word) that AI processes. Understanding tokens helps explain AI pricing and context limits.
Tool Call
An invocation of an external function or API by an AI agent. Tool calls are captured as spans within traces, including the tool name, input parameters, output results, and latency. This data helps debug agent behavior by showing exactly which tools were called and whether they returned useful results.
Topic Clustering
The automatic categorization of user queries into meaningful topics based on semantic similarity. This helps you understand what subjects your users ask about most frequently.
Topic Gaps
Areas identified through query analysis where your documentation is missing, incomplete, or consistently produces low-quality responses. Topic gaps highlight the most impactful documentation improvements you can make.
Topic Relationships
Connections and dependencies between different query topics, showing how different subjects relate to each other in your users' questions. This helps identify documentation that should be linked or cross-referenced.
Topics
Teckel AI's semantic analysis feature that uses embeddings to identify patterns in user queries, revealing trending topics, common questions, and areas where your AI struggles to provide good answers.
Trace
A single interaction record in Teckel AI, corresponding to one user question and the AI's answer, along with associated metadata. Each trace has a unique identifier for auditing. For agent workflows, a trace may contain multiple spans representing tool calls, retrieval steps, and LLM invocations.
Vector Database / Embeddings
Many RAG systems use vector embeddings (numerical representations of text) and vector databases to retrieve relevant document chunks. Teckel AI works with any vector database or retrieval system. We additionally embed user queries in our Query Insights section to surface the themes your users ask about.