The five-factor rubric
Every Pondral score is computed from a public, weighted rubric. We average across multiple runs, report 95% confidence intervals, and publish the math.
The factors
How queries are selected
Each audit begins with a query bank — a set of buyer-intent questions a real customer would ask before choosing a brand in your category. Queries are generated from four templates:
- Category-entry: "What is the best [category]?" — captures which brands own the category frame.
- Problem-first: "How do I [solve specific problem]?" — captures which brands appear as solutions.
- Comparison: "What are the top [category] options for [use case]?" — captures competitive share directly.
- Brand-adjacent: "[Competitor name] alternatives" — captures adjacency and substitution.
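The four templates above can be sketched as a simple expansion step. This is an illustrative assumption of how a bank might be generated; the template strings mirror the examples, but the function, inputs, and category values are hypothetical, not Pondral's actual generator:

```python
# Illustrative sketch: expanding the four query templates into a bank.
# Template wording mirrors the examples above; all inputs are hypothetical.
TEMPLATES = {
    "category_entry": "What is the best {category}?",
    "problem_first": "How do I {problem}?",
    "comparison": "What are the top {category} options for {use_case}?",
    "brand_adjacent": "{competitor} alternatives",
}

def build_query_bank(category, problems, use_cases, competitors):
    queries = [TEMPLATES["category_entry"].format(category=category)]
    queries += [TEMPLATES["problem_first"].format(problem=p) for p in problems]
    queries += [TEMPLATES["comparison"].format(category=category, use_case=u)
                for u in use_cases]
    queries += [TEMPLATES["brand_adjacent"].format(competitor=c)
                for c in competitors]
    return queries

bank = build_query_bank(
    category="CRM software",
    problems=["track sales leads"],
    use_cases=["small teams"],
    competitors=["Acme CRM"],
)
# One query per template instantiation: 4 queries here.
```

In practice each template would be instantiated several times, which is how a bank grows toward the 5–20 query range described below.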
Queries are reviewed for specificity (too broad = noise; too narrow = skewed sample). The final bank for each audit contains between 5 and 20 queries, chosen to cover your primary buyer journeys without duplication. You can review and edit the query bank before a run.
Run consensus
Each query is run three times per engine, with prompt variations, to mitigate single-run noise. We report the mean and a 95% confidence interval for each query.
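The per-query statistics above can be sketched with the standard t-based interval. This is a minimal sketch under stated assumptions: the default critical value is the two-sided 95% t value for two degrees of freedom (three runs), and Pondral's exact interval math may differ:

```python
import statistics

def run_consensus(scores, t_crit=4.303):
    """Mean and 95% CI for one query's scores across repeated runs.

    t_crit defaults to the two-sided 95% t value for df=2 (three runs).
    Assumption: the published methodology may use a different interval.
    """
    mean = statistics.fmean(scores)
    half_width = t_crit * statistics.stdev(scores) / len(scores) ** 0.5
    return mean, (mean - half_width, mean + half_width)

# Three runs of one query on one engine (scores are hypothetical).
mean, ci = run_consensus([0.62, 0.58, 0.66])
```

With only three runs the interval is wide by construction, which is exactly the signal a reader should take from a noisy query.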
Inter-rater reliability
Across engines, we calculate Cohen's kappa for pairwise citation agreement. When kappa drops below 0.6 for a query, we flag it as low-confidence and surface the flag in the report.
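A minimal sketch of the agreement check, assuming citation labels are encoded as 1 (brand cited) or 0 (not cited) per brand slot; the pairing and aggregation across engines here are assumptions, not the report's exact procedure:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa between two engines' citation labels for one query.

    a, b: parallel label lists, e.g. 1 if a brand was cited, 0 if not.
    Sketch only; assumes observed agreement vs. chance agreement from
    each rater's marginal label frequencies.
    """
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    marg_a, marg_b = Counter(a), Counter(b)
    expected = sum(marg_a[k] * marg_b[k] for k in set(a) | set(b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical labels for five brands from two engines.
kappa = cohens_kappa([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
low_confidence = kappa < 0.6  # flag threshold from the methodology
```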
Reproducibility
Every audit ships with a provenance ledger — the exact prompts, timestamps, model versions, and raw responses. Available for download on every paid plan.
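One plausible shape for a ledger entry, as an append-only line of JSON per run. The field names below are assumptions derived from the items listed above (prompts, timestamps, model versions, raw responses), not Pondral's actual schema:

```python
import json
from datetime import datetime, timezone

# Hypothetical provenance-ledger entry; field names are assumptions,
# not the real export format.
entry = {
    "query": "What is the best CRM software?",
    "prompt_variant": 2,
    "engine": "example-engine",
    "model_version": "2026-05-01",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "raw_response": "...",
}
ledger_line = json.dumps(entry)  # one JSON object per run, appendable
```

An append-only line-per-run format keeps every audit independently replayable: anyone with the ledger can re-ask the exact prompts against the recorded model versions.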
Methodology v2.0.0 (May 2026). We publish a changelog of every methodology change.