We ran the same query, "best CRM for a 50-person remote team," through ChatGPT, Claude, Gemini, Perplexity, and Grok, three times each, every day for ninety days: 270 responses per engine, 1,350 in total. For context on how AI engines differ in how they cite brands, see our ChatGPT vs Perplexity vs Gemini comparison.
The headline finding: there is no consensus. Across the runs, no single brand appeared in every top-three list. A handful of incumbents dominated, but the order shifted constantly. We saw a similar pattern in our broader research on which brands dominate AI search and why.
But the more interesting finding is that the criteria the models cited kept shifting. The dominant ranking criterion changed week to week: "ease of use," then "AI features," then specific integrations, even though the underlying products barely moved. Our scoring rubric explainer breaks down how we measure these shifts systematically.
This is what makes AI search so different from traditional SEO: the answer changes with the zeitgeist, not just with your content. You can do everything right and still drop because the model decided the question is now about something else. The metric that matters is AI Voice Share, not branded search volume. Pondral's complete guide to AI visibility covers how to measure and improve it.
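The article doesn't give a formal definition of AI Voice Share, but a common working version is the fraction of sampled AI answers that mention a brand. As a rough sketch under that assumption (all brand names and the data shape below are hypothetical):

```python
from collections import Counter

def voice_share(runs):
    """Fraction of sampled AI answers mentioning each brand.

    `runs` is a list of answer records, each a set of brand names
    extracted from one model response (hypothetical data shape).
    """
    mentions = Counter()
    for brands in runs:
        mentions.update(brands)
    total = len(runs)
    return {brand: count / total for brand, count in mentions.items()}

# Toy example: three sampled answers across any mix of engines.
runs = [
    {"BrandA", "BrandB"},
    {"BrandA"},
    {"BrandB", "BrandC"},
]
shares = voice_share(runs)
print(shares)  # BrandA in 2 of 3 answers, BrandC in 1 of 3
```

Tracked daily across engines, a number like this surfaces the drift described above even when your branded search volume is flat.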