Glossary of Terms
Here is a brief glossary of the key terms and concepts you'll encounter while using Teckel AI.
Accuracy
The accuracy metric measures how well the AI's response is grounded in the provided source documents. Calculated as the ratio of supported claims to total claims extracted from the response. A score of 1.0 means all factual claims are fully supported by the sources.
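The ratio described above can be sketched as a small function. This is an illustrative formula only, with an assumed edge case for responses that contain no claims — not Teckel AI's actual implementation.

```typescript
// Accuracy = supported claims / total claims extracted from the response.
// The zero-claims edge case handling is an assumption for illustration.
function accuracy(supportedClaims: number, totalClaims: number): number {
  if (totalClaims === 0) return 1.0; // no factual claims to verify
  return supportedClaims / totalClaims;
}

// e.g. 3 of 4 extracted claims grounded in the sources:
accuracy(3, 4); // 0.75
```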
AI Audit/LLM-as-a-Judge
The process of automatically reviewing an AI's answer using another AI model. This is also referred to as an LLM-as-a-Judge system. This process produces an audit result with scores and identified issues with the quality of the original answer.
Chunking
Breaking documents into smaller pieces for AI to process. This makes it easier for AI systems to find and use the most relevant information when answering questions. Commonly used in RAG systems.
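A minimal fixed-size chunker with overlap illustrates the idea. This is a generic sketch; real chunking strategies (sentence-aware, token-based, etc.) vary by system, and the size and overlap values here are arbitrary.

```typescript
// Split text into overlapping fixed-size chunks (illustrative only).
function chunkText(text: string, chunkSize = 200, overlap = 50): string[] {
  const chunks: string[] = [];
  const step = chunkSize - overlap; // how far the window advances each time
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

Overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.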
Claims Analysis
The process of breaking down AI responses into individual factual statements and mapping them to supporting source chunks. This provides transparency into which claims are supported by evidence and which are potentially hallucinated.
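The mapping from claims to evidence can be pictured as one record per claim. The field names below are assumptions for illustration, not Teckel AI's actual schema.

```typescript
// Illustrative shape of a single claims-analysis entry (field names assumed).
interface ClaimMapping {
  claim: string;                // one factual statement extracted from the response
  supportingChunkIds: string[]; // source chunks that back the claim
  supported: boolean;           // false flags a potentially hallucinated claim
}

const example: ClaimMapping = {
  claim: "The refund window is 30 days.",
  supportingChunkIds: ["doc-12#chunk-3"],
  supported: true,
};
```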
Completeness
Our core metric. A measure (0-1.0) of how well the AI's response addresses the user's specific question. This evaluates whether the agent provided helpful information that directly answers what was asked. A score of 1.0 means the response directly and comprehensively addresses the question.
Document Freshness
A measure of the age of the documents retrieved by your system. If five documents were cited for a given user question, this metric reports the average age of those documents. Used to find stale documentation in a given subject area.
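The averaging step can be sketched as follows. The function name and the choice of "days since last modified" as the age measure are assumptions for illustration.

```typescript
// Average age in days across the documents cited for one question (sketch).
function averageAgeDays(lastModified: Date[], now: Date): number {
  const msPerDay = 24 * 60 * 60 * 1000;
  const totalDays = lastModified.reduce(
    (sum, d) => sum + (now.getTime() - d.getTime()) / msPerDay,
    0,
  );
  return totalDays / lastModified.length;
}
```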
Embeddings/Vectorization
The process of converting text into numerical representations (vectors) that capture semantic meaning. Teckel AI uses embeddings to analyze query patterns and identify similar questions in the Topics functionality.
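Once text is converted to vectors, semantic similarity is typically measured with cosine similarity. This is the standard generic formula, not a claim about Teckel AI's internals.

```typescript
// Cosine similarity between two embedding vectors: 1 = same direction
// (very similar meaning), 0 = orthogonal (unrelated).
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}
```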
Hallucination
When AI makes up information that wasn't in its sources. This is one of the main problems Teckel AI helps you detect and prevent by analyzing whether claims in your AI responses are actually supported by your documents.
Human-in-the-Loop
Having people review and improve AI responses to ensure quality. Teckel AI helps identify which areas need human review by flagging low-scoring topics and common issues.
Hybrid Search
Using both keyword matching and meaning-based search together. This combines the precision of exact word matches with the flexibility of semantic understanding to find the most relevant information.
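One common way to merge a keyword ranking with a semantic ranking is Reciprocal Rank Fusion (RRF). This is a widely used technique offered as an illustration; it is not necessarily how any particular retrieval system combines the two.

```typescript
// Reciprocal Rank Fusion: merge multiple ranked lists into one score per
// document. k = 60 is the conventional smoothing constant from the RRF paper.
function rrfScores(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, rank) => {
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return scores;
}
```

A document ranked highly by both the keyword search and the vector search ends up with a higher fused score than one that appears in only a single list.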
LLM (Large Language Model)
The AI model (like GPT-5, Claude, etc.) that generates answers to user questions. Teckel AI is model-agnostic and observes the input and output of any LLM you currently use.
Precision
A measure (0-1.0) of how relevant the retrieved document chunks are to answering the user's question. Calculated as the ratio of relevant chunks to total chunks retrieved. A score of 1.0 means all retrieved chunks were directly useful for answering the question.
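Given a relevance judgment for each retrieved chunk, the ratio looks like this. The function shape and the zero-chunks edge case are illustrative assumptions.

```typescript
// Precision = relevant chunks / total chunks retrieved (sketch).
function precision(chunkIsRelevant: boolean[]): number {
  if (chunkIsRelevant.length === 0) return 0; // nothing retrieved (assumption)
  return chunkIsRelevant.filter(Boolean).length / chunkIsRelevant.length;
}

// e.g. 2 of 4 retrieved chunks were useful for the answer:
precision([true, true, false, false]); // 0.5
```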
Prompt
The question or instructions given to an AI. In Teckel AI's context, this is typically the user's query that your RAG system is trying to answer.
RAG (Retrieval-Augmented Generation)
A technique where an LLM is augmented with retrieved context from a knowledge base. Teckel AI is specifically designed for RAG systems, as it monitors the answer that was produced using this retrieved information.
RAGAS
An open-source framework for evaluating RAG systems. Teckel AI takes inspiration from RAGAS but implements its own evaluation engine to calculate similar metrics like Accuracy, Precision, and Completeness.
Retrieval
Finding and fetching relevant information from your documents. This is the first step in RAG systems where the most relevant chunks are selected to help answer a user's question.
SDK (Software Development Kit)
The teckel-ai npm package that provides a simple, non‑blocking interface for integrating Teckel AI into your application. The SDK handles trace submission, error handling, and authentication.
Semantic Clustering
The process of grouping similar queries together based on their meaning rather than exact keywords. This helps identify patterns in what users are asking and where documentation might be lacking.
Semantic Search / Vector Search
Finding related content based on meaning and context rather than exact keyword matches. Most AI chat applications use semantic search for retrieval. Teckel AI uses this to analyze query patterns and identify similar questions across your trace history.
Session
A grouping key for traces that can correspond to a user's chat session or any other logical grouping of queries. This allows for filtering and aggregation of data by conversation.
Source / Document Chunk
In Teckel AI's context, a "source" is a piece of content used to derive an answer, such as a paragraph retrieved from a document.
Teckel Judge
Our proprietary evaluation engine that performs automated analysis of AI responses. It calculates four core metrics (Accuracy, Precision, Completeness, and Document Freshness) and provides detailed claims analysis to identify improvement opportunities.
Token
A piece of text (like a word or part of a word) that AI processes. Understanding tokens helps explain AI pricing and context limits.
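A common rule of thumb is roughly four characters per token for English text. The sketch below uses that heuristic; real tokenizers (such as BPE-based ones) are model-specific, so treat this as a rough estimate only.

```typescript
// Rough token estimate: ~4 characters per token for English text.
// Real counts depend on the model's tokenizer.
function approxTokenCount(text: string): number {
  return Math.ceil(text.length / 4);
}
```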
Topic Clustering
The automatic categorization of user queries into meaningful topics based on semantic similarity. This helps you understand what subjects your users ask about most frequently.
Topic Gaps
Areas identified through query analysis where your documentation is missing, incomplete, or consistently produces low-quality responses. Topic gaps highlight the most impactful documentation improvements you can make.
Topic Relationships
Connections and dependencies between different query topics, showing how different subjects relate to each other in your users' questions. This helps identify documentation that should be linked or cross-referenced.
Topics
Teckel AI's semantic analysis feature that uses embeddings to identify patterns in user queries, revealing trending topics, common questions, and areas where your AI struggles to provide good answers.
Trace
A single interaction record in Teckel AI, corresponding to one user question and the AI's answer, along with associated metadata. Each trace has a unique identifier for auditing.
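A trace can be pictured as a record like the one below. The field names are illustrative assumptions, not the actual Teckel AI schema; see the SDK documentation for the real shape.

```typescript
// Illustrative shape of a trace record (field names are assumptions).
interface Trace {
  traceId: string;    // unique identifier used for auditing
  sessionId?: string; // optional grouping key (see Session)
  query: string;      // the user's question
  answer: string;     // the AI's response
  sources: string[];  // retrieved chunks the answer was based on
  createdAt: string;  // timestamp of the interaction
}

const trace: Trace = {
  traceId: "tr_001",
  query: "How do I reset my password?",
  answer: "Go to Settings, then Security, and choose Reset Password.",
  sources: ["help-center#security-chunk-2"],
  createdAt: "2024-06-01T12:00:00Z",
};
```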
Vector Database / Embeddings
Many RAG systems use vector embeddings (numerical representations of text) and vector databases to retrieve relevant document chunks. Teckel AI works with any vector database or retrieval system. We additionally apply embeddings to user queries in our Query Insights section to show you themes from your AI's output.