The Death of the 10x Pillar Page: Why Super-Long Form Content Wins SEO but Loses GEO

Tom Fry

Back in 2015, Moz introduced the concept of 10x content – the idea that if you create content ten times better than anything else out there, it will rise to the top of even the most competitive SERPs (Moz, "Why Good Unique Content Needs to Die").

Over time this concept evolved into the 10x pillar page: super-optimised long-form content that was, in some cases, thousands of words long. Even when not building a 10x pillar page, the underlying SEO strategy has been consistent: cover a subject thoroughly, demonstrate expertise, and search engines will reward you with rankings.

However, our research reveals this approach fundamentally breaks down when optimising for LLM visibility.

The KnowBe4 Paradox: When Comprehensive Content Fails

Consider KnowBe4's content: their /phishing page is a substantial educational resource explaining what phishing is, how it works, and how to prevent it (KnowBe4, "What is Phishing?"). It even includes a (brief) history of phishing – and the source code reveals 455 uses of the word "phishing".

KnowBe4's phishing page is exactly the kind of comprehensive content that performs well in organic search. Yet it was cited only 8 times across ~1,000 cybersecurity prompts.

Meanwhile, a short press release announcing that "security training reduces phishing click rates by 86%" was cited 37 times – nearly 5x more despite being a fraction of the word count (600 versus 10,000+ words) (KnowBe4, "Security Training Reduces Global Phishing Click Rates by 86%").

Why Brief Content Outperforms Comprehensive Guides

Why does a brief press release outperform a one-page all-encompassing guide when it comes to citations?

The reason is structural: LLMs don't need to cite definitions or explanations, because that knowledge is already embedded in their training data. When a buyer asks "what is phishing?", the LLM answers from that embedded knowledge – no citation required.

But when someone asks "What impact does security awareness training have on phishing click rates?" or "What is the evidence that training reduces phishing click rates?", the LLM needs external sources to back up its claims. The press release gives it exactly that – a specific, quantified, attributable data point.

Research Confirms the Pattern at Scale

That's one example, but our analysis confirms this pattern at scale:

  • Standards & Regulatory Bodies have the highest citation reuse rate (2.63 avg citations per URL) – they provide authoritative, citable specifications
  • Analyst firms and research publications consistently outperform brand content when they publish original data and benchmarks
  • Press releases with quantified claims punch well above their weight – a single statistic can drive citations across dozens of query types
  • 72% of all URLs are cited only once – but the top 0.01% (those with 100+ citations) are almost exclusively data-rich research, benchmarks, and standards documents

The Implication: Rethink How You Measure Content Value

The implication for content strategy is clear: stop measuring content value by word count or topic coverage, and start measuring it by citable claims per page.

A 500-word press release with one strong statistic will generate more LLM visibility than a 5,000-word guide that explains concepts without unique data.
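As a minimal sketch of what "citable claims per page" could mean in practice – the regex patterns and the per-1,000-words normalisation are our illustrative assumptions, not a validated metric from the research above – you can approximate it by counting quantified, attributable statements in a page's text:

```python
import re

# Heuristic patterns for quantified claims: percentages, dollar figures,
# and "Nx" multipliers. These are illustrative assumptions, not a
# definitive measure of what an LLM will treat as citable evidence.
CLAIM_PATTERNS = [
    r"\b\d+(?:\.\d+)?%",                  # "86%", "4.6%"
    r"\$\d[\d,]*(?:\.\d+)?[kKmMbB]?\b",   # "$500k", "$1.2M"
    r"\b\d+(?:\.\d+)?x\b",                # "5x", "2.63x"
]

def citable_claim_density(text: str) -> float:
    """Return quantified claims per 1,000 words of page text."""
    words = len(text.split())
    claims = sum(len(re.findall(p, text)) for p in CLAIM_PATTERNS)
    return claims / max(words, 1) * 1000

press_release = "Security training reduces phishing click rates by 86%."
guide = "Phishing is a social engineering attack that tricks users. " * 100

print(citable_claim_density(press_release))  # dense: one claim in 8 words
print(citable_claim_density(guide))          # zero quantified claims
```

A short press release scores far higher than a long explainer here, which is the point: the metric rewards evidence density, not word count.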

The Winning Formula for GEO

The winning formula for GEO isn't comprehensiveness – it's density of proprietary, quantified proof points that LLMs need to reference as evidence:

  • Benchmark data ("86% reduction", "50% faster time-to-value")
  • ROI and TCO studies with specific dollar figures
  • Survey findings with sample sizes and percentages
  • Research reports with rankings and comparisons

Conclusion: Optimise for Citability, Not Crawlability

The 10x pillar page is optimised for crawlers that reward thoroughness – but GEO requires optimising for models that reward citability.

In the age of AI answer engines, the question isn't "How comprehensively can we cover this topic?" It's "What unique, quantified claims can we make that LLMs will need to cite?"