Core Concepts

Teckel AI transforms scattered AI interactions into structured documentation insights. Here's how our system works and what makes it powerful for documentation teams.

The Teckel Judge

Our proprietary evaluation engine analyzes every AI response in a single, efficient pass. The Teckel Judge combines quantitative scoring inspired by RAGAS metrics with LLM-generated qualitative analysis to provide comprehensive insights that documentation teams can act on immediately. See the Teckel Judge section for more detail.

Topic Intelligence: Your Documentation Roadmap

The real power of Teckel AI comes from understanding patterns across thousands of queries. We automatically organize user needs into actionable insights.

Automatic Topic Discovery

We use clustering algorithms to group similar queries together, giving you at-a-glance business intelligence on what users are asking your AI systems. Our advanced analytical capabilities include:

  • Semantic Grouping - Queries with similar meaning cluster together, even when the wording differs
  • Dynamic Evolution - Topics fluidly adapt as user needs change over time
  • Statistical Significance - Only meaningful patterns surface, filtering out noise

Example: These queries would group into a "password reset" topic that you can track at scale (a code sketch follows the list):

  • "How do I reset my password?"
  • "Forgot password help"
  • "Can't log in, need new password"
  • "Password recovery process"

Documentation Gap Analysis

For each topic, we identify exactly what's missing (an illustrative scoring sketch follows the list):

  • Performance Metrics - Topics with low completeness scores lack proper documentation
  • Query Volume - See how many users are at risk of misinformation
  • Impact Scoring - Prioritize fixes by potential improvement
  • Trend Tracking - Monitor whether knowledge gaps improve after interventions such as adding to the knowledge base
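
One simple way to picture impact scoring is to weight each topic's quality gap by its query volume, so fixes are ranked by how much total improvement they can buy. The formula and sample figures below are illustrative assumptions, not Teckel's exact scoring:

```python
# Sketch: rank topics by (volume x completeness gap). Formula and
# figures are assumptions for illustration.
def impact_score(query_volume: int, completeness: float) -> float:
    """Higher when many users hit a topic that is poorly covered."""
    return query_volume * (1.0 - completeness)

topics = {
    "password reset":     (487, 0.42),
    "billing questions":  (310, 0.88),
    "api authentication": (205, 0.55),
}

ranked = sorted(topics.items(), key=lambda kv: impact_score(*kv[1]),
                reverse=True)
for name, (volume, completeness) in ranked:
    print(f"{name}: impact={impact_score(volume, completeness):.0f}")
# password reset: impact=282
# api authentication: impact=92
# billing questions: impact=37
```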

How Documentation Teams Use This

Here's what makes our topic intelligence invaluable for documentation teams:

Direct Feedback Mapping
Every topic shows:

  • Real user queries (what they're actually asking)
  • Current document linkage (which docs try to answer these questions)
  • Specific knowledge gaps (what specific information is missing)
  • Fix priority (based on volume and impact)

Topic Relationships
We map connections between topics, revealing:

  • Which topics users explore together
  • Documentation that is or should be linked
  • Content that should be consolidated

Actionable Tasks
Instead of vague "improve documentation" feedback, you get:

  • "Create a password reset guide - 487 users asked, poor accuracy score"
  • "Update API authentication docs - missing OAuth flow details"
  • "Include subscription management with billing FAQ- users are often interested in both"

Hypothetical Example: E-commerce Documentation

  1. Topic: "Shipping Rates" (892 queries/month)

    • Accuracy: 0.4 (very low)
    • Missing: International shipping info for Spain, Germany, Poland
    • Action: Create comprehensive shipping guide for Europe
  2. Topic: "Return Process" (634 queries/month)

    • Accuracy: 0.7 (moderate)
    • Missing: Return label generation steps
    • Action: Add step-by-step instructions for generating and printing a return label

Result: After fixing these major gaps in just two topics, overall AI accuracy across the system improves from 67% to 75%.
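
As a back-of-the-envelope check on that result, system-wide accuracy is just a volume-weighted average of per-topic accuracy. All figures below, including the assumed remainder of traffic and the assumed post-fix scores, are hypothetical:

```python
# Sketch: the weighted-average arithmetic behind a 67% -> 75% lift.
# Every volume and score here is an illustrative assumption.
topics_before = {
    "shipping rates":  (892,  0.40),   # queries/month, accuracy
    "return process":  (634,  0.70),
    "everything else": (4474, 0.72),   # assumed rest of the traffic
}

topics_after = dict(topics_before)
topics_after["shipping rates"] = (892, 0.85)  # after the new guide
topics_after["return process"] = (634, 0.85)  # after the new steps

def weighted_accuracy(topics: dict) -> float:
    total = sum(volume for volume, _ in topics.values())
    return sum(volume * acc for volume, acc in topics.values()) / total

print(f"before: {weighted_accuracy(topics_before):.0%}")  # before: 67%
print(f"after:  {weighted_accuracy(topics_after):.0%}")   # after:  75%
```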

Document-Topic Intelligence Network

Understanding how your documents support different user needs is crucial for maintaining high-quality AI responses. Teckel AI provides sophisticated analysis of the relationships between your documentation and the topics users ask about.

How Documents Support Multiple Topics

A single document rarely answers just one type of question. Our system tracks how each document performs across different topic clusters:

Multi-Topic Coverage
Your "User Authentication Guide" might support:

  • Login troubleshooting (85% accuracy)
  • Password policies (92% accuracy)
  • Session management (67% accuracy)
  • Security settings (71% accuracy)

This granular view reveals that while the document excels at password policy questions, it needs improvement for session management queries.
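
In code terms, you can picture each document's record as a topic-to-accuracy map and flag the weak spots. The structure and review threshold below are assumptions for illustration:

```python
# Sketch: per-topic accuracy for one document, mirroring the
# "User Authentication Guide" figures above. Threshold is assumed.
doc_topic_accuracy = {
    "login troubleshooting": 0.85,
    "password policies":     0.92,
    "session management":    0.67,
    "security settings":     0.71,
}

REVIEW_THRESHOLD = 0.75  # assumed cutoff for "needs improvement"
weak_topics = sorted(
    (t for t, acc in doc_topic_accuracy.items() if acc < REVIEW_THRESHOLD),
    key=doc_topic_accuracy.get,
)
print(weak_topics)  # ['session management', 'security settings']
```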

The Document Performance Matrix

We analyze every document through multiple lenses:

Utilization Rate
How often is this document retrieved and actually used to answer questions? We sketch the calculation after the list below; a low utilization rate might indicate:

  • Poor chunking or indexing
  • Irrelevant content mixed with valuable information
  • Need for document splitting or reorganization
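
The calculation itself is simple: of the times a document was retrieved into context, how often did the answer actually draw on it? The event structure below is an assumption for the sketch:

```python
# Sketch: utilization = times used in an answer / times retrieved.
# The event fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RetrievalEvent:
    doc_id: str
    used_in_answer: bool  # did any answer claim draw on this doc?

def utilization_rate(events: list[RetrievalEvent], doc_id: str) -> float:
    retrieved = [e for e in events if e.doc_id == doc_id]
    if not retrieved:
        return 0.0
    return sum(e.used_in_answer for e in retrieved) / len(retrieved)

events = [
    RetrievalEvent("auth-guide", True),
    RetrievalEvent("auth-guide", False),
    RetrievalEvent("auth-guide", False),
    RetrievalEvent("billing-faq", True),
]
print(f"{utilization_rate(events, 'auth-guide'):.0%}")  # 33%
```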

Topic-Specific Performance
The same document can perform brilliantly for one topic while failing another. We track:

  • Accuracy per topic cluster
  • Which sections support which topics
  • Performance trends over time
  • Impact of document updates on different topics

Cross-Document Dependencies
Modern documentation is interconnected. We identify:

  • Documents that work well together
  • Missing links between related content
  • Redundant information across multiple documents
  • Opportunities for consolidation

Feedback Aggregation & Pattern Recognition

When hundreds of queries reference the same document, patterns emerge:

Consolidated Feedback Analysis
Instead of reviewing individual query feedback, see aggregated insights:

  • "Users consistently ask about rate limits, but the API guide doesn't mention them"
  • "The setup guide references a config file that no longer exists in v3.0"
  • "Multiple queries show confusion between 'workspace' and 'project' terminology"

Pattern-Based Prioritization
We identify which document issues affect the most users (a rule-of-thumb classifier is sketched after the list):

  • Critical: Affecting 500+ queries per week with less than 60% accuracy
  • High: Impacting major topic clusters with declining performance
  • Medium: Isolated issues in low-traffic topics
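
Those tiers translate naturally into a rule-based classifier. The Critical thresholds (500+ weekly queries, under 60% accuracy) come from the list above; the High and Medium cutoffs below are illustrative assumptions:

```python
# Sketch: severity tiers for document issues. The Critical rule is
# from the text; the other cutoffs are assumptions.
def severity(weekly_queries: int, accuracy: float,
             accuracy_trend: float) -> str:
    """accuracy_trend < 0 means performance is declining."""
    if weekly_queries >= 500 and accuracy < 0.60:
        return "critical"
    if weekly_queries >= 100 and accuracy_trend < 0:
        return "high"
    return "medium"

print(severity(620, 0.55, -0.02))  # critical
print(severity(250, 0.72, -0.05))  # high
print(severity(40, 0.80, 0.01))    # medium
```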

Real-World Impact Example

Consider a SaaS platform's documentation:

Document: "Billing & Subscriptions FAQ"
Topics Supported:

  • Payment Methods (423 queries/week) - 89% completeness
  • Plan Upgrades (312 queries/week) - 45% completeness
  • Invoice Management (198 queries/week) - 78% completeness
  • Cancellation Process (156 queries/week) - 91% completeness

Analysis Reveals:

  • Plan upgrade information is severely lacking
  • The document excels at answering payment and cancellation topics
  • Invoice management could use minor improvements

Recommended Action: Split the FAQ into focused guides. Create a dedicated "Plan Management Guide" with detailed upgrade/downgrade procedures. This single change would improve accuracy for 312 weekly queries from 45% to an estimated 85%.

The Feedback Loop

Our system creates an automatic improvement cycle:

  1. Monitor - Track every query and document reference
  2. Analyze - Identify patterns and performance issues
  3. Prioritize - Focus on high-impact improvements

This in turn allows you to:

  4. Enhance - Update your knowledge base based on specific feedback (and even fill-in-the-blank document templates)
  5. Validate - Measure topic and agent improvement over time

This systematic approach transforms reactive documentation maintenance into proactive quality management, ensuring your AI has accurate, relevant information to provide to users.

Processing Times

Real-Time Trace Processing

  • Results arrive within 20-30 seconds, determining how completely a user query was answered
  • Premium pricing applies for advanced analytic capabilities including chunk-claim attribution

Feedback Processing

  • Topic- or document-level feedback can be generated on demand, including recommended document templates
  • Automated feedback on trending, low-scoring, and tracked topics is sent daily via Slack

Integration with Your Workflow

Teckel AI fits into existing documentation workflows without disruption:

For Documentation Teams

  • Daily reports on document/topic performance
  • Prioritized fix lists based on ground truth testing
  • Automatically generated rough-draft documents to save you time writing boilerplate
  • Before/after quality tracking at the document level

For Product Teams

  • Track specific topic areas on demand, semantically or via keyword
  • Feature gap identification from user queries
  • Usage patterns for roadmap planning
  • Quality metrics to track AI chatbot KPIs for your area of expertise

For Engineering Teams

  • Overall RAG system performance metrics
  • Vector search effectiveness data including retrieval problem attribution
  • AI system health monitoring including Slack notifications
  • Role-based access control (RBAC) for admin and user roles