New Study Reveals Differences Between Google Rankings and LLM Citations

Rambabu Thapa

A new report highlights notable differences between how large language models (LLMs) cite sources and how websites rank on Google. Search Atlas, a company specializing in SEO software, analyzed over 18,000 queries to compare citations from OpenAI’s ChatGPT, Google’s Gemini, and Perplexity against traditional Google search rankings.

Perplexity Shows Strongest Alignment with Google Search

Because Perplexity performs live web retrieval, its citation patterns resemble Google search results more closely than those of other models. The study found Perplexity had a median domain overlap of about 25 to 30 percent with Google’s results, and URL-level overlap near 20 percent. In total, Perplexity shared roughly 18,500 domains with Google, representing approximately 43 percent of all domains it cited.
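The study reports overlap at two levels: shared domains and exact shared URLs. As a rough illustration of the difference between the two measures, here is a minimal Python sketch; the metric definitions and the normalization step (stripping "www.") are assumptions for illustration, not Search Atlas's actual methodology.

```python
from urllib.parse import urlparse

def citation_overlap(llm_urls, google_urls):
    """Compare an LLM's cited URLs with Google's ranked URLs at two levels.

    Domain-level overlap: share of LLM-cited domains also present in Google's
    results. URL-level overlap: share of exact URL matches. These definitions
    are assumptions; the study does not publish its formulas.
    """
    def domains(urls):
        return {urlparse(u).netloc.removeprefix("www.") for u in urls}

    llm_domains, google_domains = domains(llm_urls), domains(google_urls)
    domain_overlap = len(llm_domains & google_domains) / len(llm_domains)
    url_overlap = len(set(llm_urls) & set(google_urls)) / len(set(llm_urls))
    return domain_overlap, url_overlap

# Toy example: two of three cited domains match Google's, one exact URL matches.
llm = ["https://example.com/a", "https://example.org/b", "https://blog.example.net/c"]
google = ["https://example.com/a", "https://www.example.org/x", "https://other.example.io/y"]
print(citation_overlap(llm, google))  # (0.67, 0.33)
```

The gap between the two numbers is the point: an LLM can cite the same sites Google ranks (high domain overlap) while linking to entirely different pages on those sites (low URL overlap).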

Selective Citation Behavior in ChatGPT and Gemini

By contrast, ChatGPT showed substantially lower overlap with Google’s rankings. Its median domain overlap was between 10 and 15 percent, and it shared about 1,500 domains with Google, making up 21 percent of its cited sources. URL-level matches typically stayed below 10 percent. Gemini’s behavior was inconsistent: some responses barely overlapped with search results, while others aligned more closely. Overall, Gemini shared just 160 domains with Google, around 4 percent of its citations, even though those same domains accounted for 28 percent of Google’s results.

Implications for Website Visibility

The data indicates that ranking well on Google does not guarantee citations from LLMs. Each AI platform uses different mechanisms to select sources. Perplexity’s architecture, which emphasizes live searching, tends to mirror Google’s domain strength and visibility. Thus, websites ranking highly on Google are more likely to appear in Perplexity’s citations.

In contrast, ChatGPT and Gemini rely more on their pre-trained knowledge and selective retrieval processes. They reference a narrower range of sources, making their citation patterns less reflective of current search rankings. This is evident in the low URL-level matches observed for both models relative to Google.

Study Limitations and Methodology

The dataset primarily represented Perplexity queries, which accounted for 89 percent of the sample, with OpenAI’s ChatGPT comprising 8 percent and Gemini just 3 percent. Queries were paired based on semantic similarity using OpenAI’s embedding model with an 82 percent similarity threshold. The analysis was conducted over a two-month period, providing a recent snapshot that may not fully represent long-term patterns.
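For readers curious how that pairing step might work in practice, the sketch below matches queries from two platforms using OpenAI's embeddings API and a 0.82 cosine-similarity cutoff, mirroring the 82 percent threshold the study describes. The specific embedding model and the matching logic are assumptions, not the study's actual code.

```python
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts, model="text-embedding-3-small"):  # model choice is an assumption
    """Return an array of embedding vectors, one row per input text."""
    response = client.embeddings.create(model=model, input=texts)
    return np.array([item.embedding for item in response.data])

def pair_queries(llm_queries, google_queries, threshold=0.82):
    """Pair each LLM query with its most similar Google query above the threshold."""
    a, b = embed(llm_queries), embed(google_queries)
    # Cosine similarity matrix between the two query sets
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    sims = a_norm @ b_norm.T
    pairs = []
    for i, row in enumerate(sims):
        j = int(row.argmax())
        if row[j] >= threshold:
            pairs.append((llm_queries[i], google_queries[j], float(row[j])))
    return pairs
```

Only query pairs clearing the similarity threshold would be compared for citation overlap, which is why a higher threshold trades sample size for cleaner matches.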

Future Outlook

For AI models like Perplexity that depend heavily on real-time web data, traditional SEO signals and domain authority will likely continue to influence visibility in AI citations. However, for models like ChatGPT and Gemini that emphasize reasoning over retrieval, traditional SEO rankings may have less direct effect on cited sources.

This study sheds light on the evolving relationship between AI-powered search platforms and conventional search engines, highlighting the need for marketers and content creators to understand and adapt to multiple visibility dynamics in the age of generative AI.
