
The five-factor rubric, explained slowly.

Published Apr 24, 2026 · Pondral Team · 8 min read

When we built the first version of Pondral, we knew the hardest thing we'd have to defend was our scoring methodology. Every brand-monitoring tool that has ever existed has been accused (usually fairly) of running a black box.

So we made a rule: every score Pondral produces must be derivable from a public rubric, run on a reproducible prompt, against a known model version. No proprietary "secret sauce." Just five factors, weighted, averaged across runs.

This post walks through each factor, why it's weighted the way it is, and what edge cases we've learned to handle in the year we've been running it in production. For the full design rationale, see our deep-dive on how the scoring rubric was built.

The five factors are: Presence (20%), Prominence (25%), Context (20%), Citation Link (20%), and Competitive Share (15%). For definitions of each of these terms, see the AEO glossary. Presence is the simplest: does your brand appear at all? Prominence is harder. We measure position in the response, whether you're mentioned by name vs. inferred, and whether the model leads with you or buries you.
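The weighting described above can be sketched as a small scoring function. This is a minimal illustration of "five factors, weighted, averaged across runs" using the published weights; the dictionary keys, the 0–1 factor scale, and the function itself are our own illustrative assumptions, not Pondral's actual implementation.

```python
# Illustrative sketch of the five-factor rubric. The weights come from the
# post; everything else (names, 0-1 scale) is a hypothetical convention.
WEIGHTS = {
    "presence": 0.20,
    "prominence": 0.25,
    "context": 0.20,
    "citation_link": 0.20,
    "competitive_share": 0.15,
}

def visibility_score(runs):
    """Weight each run's factor scores (each 0-1), then average across runs."""
    per_run = [sum(WEIGHTS[f] * run[f] for f in WEIGHTS) for run in runs]
    return sum(per_run) / len(per_run)

# Two hypothetical runs of the same prompt against the same model version.
runs = [
    {"presence": 1.0, "prominence": 0.6, "context": 0.8,
     "citation_link": 0.0, "competitive_share": 0.5},
    {"presence": 1.0, "prominence": 0.4, "context": 0.8,
     "citation_link": 1.0, "competitive_share": 0.5},
]

print(visibility_score(runs))  # → 0.66
```

Averaging across runs matters because model answers vary between samples; a single run would make the score noisy.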

Context is where most teams discover their real problem. A model can mention you accurately or it can mention you in the wrong category, with the wrong description, alongside the wrong competitors. We treat all three of those failure modes as distinct, and we report them separately. To understand how AI visibility measurement handles answer variability, read our post on inter-rater reliability for AI answers.
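One way to picture "three failure modes, reported separately" is as distinct flags rather than a single lumped context score. The labels below are our own shorthand for the failure modes the paragraph describes, not Pondral's internal taxonomy.

```python
from enum import Enum, auto

# Hypothetical labels for the three context failure modes described above.
class ContextFailure(Enum):
    WRONG_CATEGORY = auto()      # mentioned, but in the wrong product category
    WRONG_DESCRIPTION = auto()   # mentioned with an inaccurate description
    WRONG_COMPETITORS = auto()   # mentioned alongside the wrong competitor set

def context_report(failures):
    """Report each failure mode as its own flag instead of one merged signal."""
    return {mode.name: (mode in failures) for mode in ContextFailure}

print(context_report({ContextFailure.WRONG_CATEGORY}))
```

Keeping the flags separate means a brand can see, for example, that its descriptions are accurate while its category placement is not, which each call for different fixes.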

Last updated April 2026