Week 1 — Baseline and triage
Run a full audit across all five engines before changing anything. Export the prompt set, the verbatim responses, and the citation list. The goal is a frozen snapshot you can compare against in week four.
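A frozen snapshot only works if every run records the same fields. A minimal sketch of what that record might look like, in Python — the field names (`engine`, `prompt`, `response`, `citations`) are illustrative assumptions, not a prescribed format:

```python
import json
from datetime import date

def snapshot(results):
    """Bundle one audit run into a frozen, comparable record.

    `results` is a list of dicts, one per engine/prompt pair, e.g.:
    {"engine": "...", "prompt": "...", "response": "...", "citations": [...]}
    (field names are illustrative, not a standard)
    """
    return {"date": date.today().isoformat(), "results": results}

def save_snapshot(results, path=None):
    """Write the snapshot to disk so the week-four re-audit has a fixed baseline."""
    path = path or f"audit-snapshot-{date.today().isoformat()}.json"
    with open(path, "w", encoding="utf-8") as f:
        json.dump(snapshot(results), f, indent=2)
    return path
```

The point of the wrapper is discipline: if week one and week four are produced by the same function, the comparison is apples to apples.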
Triage your gap list by traffic-weighted query volume, not raw count. A single high-intent query like "best [category] for [use case]" is worth more than ten long-tail informational gaps.
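Traffic-weighted triage can be expressed as a one-line scoring rule. A sketch, where the intent weights are assumptions you should replace with your own conversion data:

```python
def rank_gaps(gaps):
    """Sort gap queries by traffic-weighted value, not raw count.

    Each gap: {"query": str, "monthly_volume": int, "intent": str}
    The 3/2/1 intent weights below are illustrative assumptions.
    """
    weights = {"commercial": 3.0, "navigational": 2.0, "informational": 1.0}
    return sorted(
        gaps,
        key=lambda g: g["monthly_volume"] * weights.get(g["intent"], 1.0),
        reverse=True,
    )
```

Under this scoring, one commercial query at 400 searches/month outranks an informational query at 1,000 — which is exactly the point of weighting by intent.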
Identify your weakest engine. For most B-grade brands, this is Perplexity (citation-driven) or Gemini (entity-driven). The fix patterns differ — start with the one that surfaces highest-value gaps.
Week 2 — Source pages
Pick the five gap queries with the highest commercial intent. For each, identify the single page on your site that *should* be cited. If no such page exists, that's your first piece of work — write it.
Audit each source page for three things: a clear definitional first paragraph, structured comparisons (tables or lists), and explicit attribution markup. Models cite pages that look like sources, not pages that look like marketing.
Add FAQPage and Article schema to every source page. Validate with Google's Rich Results Test and Schema.org's validator. Both must pass.
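The FAQPage structure is small enough to generate rather than hand-write. A sketch that emits the schema.org JSON-LD from (question, answer) pairs — the output goes inside a `<script type="application/ld+json">` tag, and should still be run through the validators above before shipping:

```python
import json

def faq_jsonld(pairs):
    """Build schema.org FAQPage JSON-LD from (question, answer) pairs."""
    return json.dumps(
        {
            "@context": "https://schema.org",
            "@type": "FAQPage",
            "mainEntity": [
                {
                    "@type": "Question",
                    "name": q,
                    "acceptedAnswer": {"@type": "Answer", "text": a},
                }
                for q, a in pairs
            ],
        },
        indent=2,
    )
```

Generating the markup from the same source of truth as the visible FAQ copy also avoids the validation failure mode where the schema and the page text drift apart.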
Week 3 — Off-site reinforcement
Models triangulate. A claim only on your own site is weaker than the same claim on three independent third-party sources. Identify the three highest-authority sites in your category and pitch a contributed piece, an interview, or a data-backed comment.
Update your Wikidata entity if you have one, or seed a Crunchbase / G2 / industry-directory profile if you don't. Entity disambiguation is upstream of citation quality.
Audit your existing third-party mentions. Where models are citing outdated or incorrect facts about you, contact the source and request a correction. This is high-leverage and almost no one does it.

Week 4 — Re-audit and iterate
Re-run the same audit you ran in week one. Compare gap-by-gap. Expect 40–60% of your top-priority gaps to close in a single cycle if you executed the source-page work cleanly.
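If both audits captured gaps in the same format, the gap-by-gap comparison reduces to set arithmetic. A minimal sketch, assuming each gap is identified by its query string:

```python
def compare_audits(before, after):
    """Gap-by-gap comparison of two audit runs.

    `before` and `after` are sets of open gap queries (query strings).
    Returns which gaps closed, which persist, which are new, and the close rate.
    """
    closed = before - after
    still_open = before & after
    new = after - before
    close_rate = len(closed) / len(before) if before else 0.0
    return {
        "closed": closed,
        "still_open": still_open,
        "new": new,
        "close_rate": close_rate,
    }
```

The `still_open` set is your reading list for the failure-bucket analysis below; the `new` set catches gaps that opened mid-cycle because a competitor published.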
For gaps that did not close, read the verbatim model response. Most failures fall into three buckets: the source page exists but is too marketing-heavy to be cited; the entity is being confused with a similarly-named competitor; or the prompt is asking for something genuinely outside your offering. Each requires a different fix.
Schedule monthly re-audits. AEO is not a one-time project — engines update weekly, competitors publish weekly, and your scores will drift if you stop measuring.
Takeaways
- Triage by query value, not gap count.
- Every gap needs a single owned source page that *looks* like a source.
- Off-site reinforcement closes the gaps on-site work cannot.
- Monthly re-audits are non-negotiable.