Search kind of sucks now
Google's AI Overview recently told users that astronauts met cats on the moon[1]. It suggested adding glue to pizza to stop cheese sliding off[2]. For a brief period in early 2025, it claimed we were still in 2024[3]. These aren't edge cases—they're symptoms of a much larger problem that's quietly dismantling the internet's economic foundation.
While tech companies race to put LLM-powered summaries everywhere, publishers are haemorrhaging traffic and revenue. The math is brutal, the hallucinations are worrying, and the ad revenue that's supposed to keep publishers afloat is declining.
$2bn lost and counting
The numbers are staggering. Between May 2024 and May 2025, the world's 500 most visited publishers lost over 600 million visits—dropping from 2.3 billion to 1.7 billion[4]. That's 64 million lost visits per month on average, translating to an estimated $2 billion in ad revenue loss[5].
The traffic declines are consistent and severe. CNN saw a 30% drop. Business Insider and HuffPost each lost 40%[4]. But the most dramatic case is The Planet D, a travel blog that shut down after losing 90% of its traffic following the introduction of AI Overviews—though traffic loss was likely not the only factor in its closure[6].
This isn't a temporary adjustment period—it's a structural shift. Zero-click searches have increased from 56% to 69% in just one year[7]. When Google shows an AI Overview, users click through to publisher sites 8% of the time instead of 15%—a 46.7% relative reduction in click-through rate[8].
The IAB Tech Lab research puts it plainly: publishers are experiencing "20-60% traffic reduction (up to 90% for niche sites)"[5]. For most publishers, that's the difference between sustainable and insolvent.
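The headline arithmetic is easy to check. A quick sketch using only the figures cited above (the per-month average and the dollar estimate come from the cited reports, not from this calculation):

```python
# Sanity-check of the cited traffic figures.

monthly_visits_before = 2.3e9   # top-500 publishers, May 2024 [4]
monthly_visits_after = 1.7e9    # May 2025 [4]
lost_visits = monthly_visits_before - monthly_visits_after
print(f"Visits lost: {lost_visits / 1e6:.0f} million")  # 600 million

# Click-through rate with vs without an AI Overview [8]
ctr_without, ctr_with = 0.15, 0.08
relative_drop = (ctr_without - ctr_with) / ctr_without
print(f"Relative CTR reduction: {relative_drop:.1%}")  # 46.7%
```

The cited figures are internally consistent: the 600 million drop and the 46.7% relative click-through reduction both fall straight out of the reported numbers.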
Ads can't fix what LLMs are taking
Here's where the economics become truly perverse. Let's say ChatGPT follows through on its advertising plans. According to Incrmntal's analysis, "ChatGPT could increase revenues by 16.5x to 24.8x, potentially reaching $26 billion annually" by adopting an ad-supported model comparable to Meta or Google[9].
That sounds brilliant until you realise: the original content creators get precisely nothing.
The crawler economics tell the story. Cloudflare's analysis reveals that Anthropic's crawl-to-refer ratio is 50,000:1: for every visit Claude refers to a publisher, ClaudeBot has crawled their pages 50,000 times. OpenAI is better at 887:1, and Perplexity manages 118:1, but these are still extractive relationships masquerading as symbiotic ones[10].
It gets worse. Training traffic accounts for 80% of AI bot crawling[10]. These bots aren't even pretending to send traffic back—they're hoovering up content to build models that will further reduce the need for users to visit the original sources.
Even if ChatGPT reaches that $26 billion revenue milestone, publishers still face the same problem: their content is being used to generate revenue elsewhere while their own traffic—and therefore their own ad revenue—continues to collapse. ChatGPT's current ARPU sits at $0.67, versus Google's $5.12 and Meta's $3.38, and Sam Altman has openly stated that OpenAI loses money even on its $200-per-month subscription tier[9].
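Incrmntal's multipliers also let you back out ChatGPT's implied advertising baseline. The division below is my arithmetic on their cited figures, not a number from the report:

```python
# Implied current revenue behind Incrmntal's 16.5x-24.8x projection [9]
target_revenue = 26e9
low_mult, high_mult = 16.5, 24.8
implied_low = target_revenue / high_mult    # ~$1.05B
implied_high = target_revenue / low_mult    # ~$1.58B
print(f"Implied baseline: ${implied_low / 1e9:.2f}B to ${implied_high / 1e9:.2f}B")

# The ARPU gap between ChatGPT and Google [9]
chatgpt_arpu, google_arpu = 0.67, 5.12
print(f"Google monetises each user {google_arpu / chatgpt_arpu:.1f}x better")  # 7.6x
```

In other words, even a flattering read puts ChatGPT's current monetisation at roughly an eighth of Google's per user—which is the gap an ad model would have to close, with none of that closing accruing to publishers.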
The fundamental business model is extraction, not partnership.
You're not even getting accurate information
The irony is that while LLMs are destroying publisher revenue, they're not even delivering accurate information in return.
The best-performing models in 2025—Google's Gemini-2.0-Flash, GPT-4o, Claude 3.5—achieve hallucination rates of 0.7% to 2% in controlled benchmarks[11]. That sounds acceptable until you deploy them at scale across billions of searches.
And those are the good models. Falcon-7B-Instruct posts a 29.9% hallucination rate—nearly one in three responses is confidently wrong. Domain-specific performance varies wildly: legal information carries a 6.4% hallucination rate, while general knowledge manages 0.8%[11].
The SEI's research on LLM summarisation makes the underlying point: accuracy measures how closely an LLM's output aligns with the source material—and the gap is often significant[12].
Google's AI Overviews have become a case study in confident wrongness. Beyond the moon cats and pizza glue, with zero-click searches now at 69%, these summaries are frequently the only information users ever see. When you're wrong 1-2% of the time across billions of queries, you're systematically polluting the information ecosystem.
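To put that error rate in concrete terms, here is a rough sketch. The daily search volume and Overview share below are illustrative assumptions (roughly in line with commonly cited public estimates), not figures from the sources above:

```python
# Rough scale of a "small" hallucination rate at search volume.
# Both inputs below are illustrative assumptions, not cited figures.
daily_searches = 8.5e9    # assumed global Google searches per day
overview_share = 0.15     # assumed share of searches showing an AI Overview
error_rate = 0.015        # midpoint of the 1-2% hallucination range [11]

wrong_per_day = daily_searches * overview_share * error_rate
print(f"~{wrong_per_day / 1e6:.0f} million confidently wrong answers per day")  # ~19 million
```

Vary the assumptions however you like; any plausible combination still lands in the tens of millions of wrong answers per day.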
The BBC already learned this the hard way when Apple's AI-generated summaries of its news notifications started producing nonsense. And 77% of businesses report being concerned about AI hallucinations, according to recent surveys—they're right to be worried[11].
We're in a bizarre situation where facts and hallucinations are being mixed together in a format that looks authoritative, served to users who have no way to distinguish between them, while the original fact-checkers and journalists lose the revenue they need to produce accurate content in the first place.
Content creation is becoming economically unviable
This creates a vicious economic cycle. Publishers losing 25-40% of their traffic[4] can't maintain the same level of investment in quality journalism. Less revenue means less rigorous fact-checking, fewer specialist reporters, more reliance on cheap content.
That cheaper content becomes the training data for the next generation of LLMs, which produce even more hallucinations, which further reduce the need for users to visit publisher sites, which further reduces publisher revenue. It's a death spiral for quality information.
The math is unforgiving. 64 million lost visits per month multiplied by typical ad revenue per visit means publishers are losing millions in revenue they can't replace. When Condé Nast, The New York Times, Gannett, and Reddit all back Cloudflare's new crawler control initiative, it's because they can see where this ends[13].
Small publishers are already shutting down. Medium-sized ones are cutting staff. Large ones are desperately trying to negotiate AI training deals just to recoup a fraction of what they're losing in traffic. None of this is sustainable.
The IAB Tech Lab has observed a 117% surge in bot traffic, overwhelming publisher infrastructure while delivering no corresponding value[5]. Publishers are paying for bandwidth and servers to feed bots that directly undermine their business model.
The proposed way forward
Cloudflare's intervention is a start. Their Pay Per Crawl system, which blocked AI crawlers by default from July 1, 2025, has already attracted over one million customers. CEO Matthew Prince put it bluntly: "If the Internet is going to survive the age of AI, we need to give publishers the control they deserve"[13].
Roger Lynch from Condé Nast added that a permission-based approach opens the way to new business models[13]. That's what's needed: actual permission, actual compensation, actual sustainability.
The IAB Tech Lab framework proposes several solutions: cost-per-crawl pricing structures, query API-based monetisation, and Model Context Protocol integration. These are sensible starting points, but they require AI companies to actually participate in good faith.
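For comparison with those proposals, the bluntest control publishers already have is robots.txt. The user-agent tokens below are ones the respective companies publicly document (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Gemini training, CCBot for Common Crawl); compliance is entirely voluntary, which is exactly the gap the paid schemes aim to close:

```
# robots.txt: opt out of documented AI training crawlers
# while leaving ordinary search indexing untouched
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

# Regular search indexing stays allowed
User-agent: Googlebot
Allow: /
```

This only signals intent; nothing in the protocol stops a crawler from ignoring it, which is why enforced, compensated mechanisms like cost-per-crawl matter.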
Right now, the incentives are all wrong. AI companies can scrape freely, train models on publisher content, serve summaries that keep users from clicking through, and then maybe—maybe—share a tiny fraction of advertising revenue that doesn't come close to replacing what publishers lost.
Scraping at this scale is no different from lifting the contents of literary works or the transcript of a film wholesale. That said, Cloudflare's stance here could also be read as an effort to protect its own business interests.
More discomfort
The internet's business model was never perfect. Banner ads and SEO gaming created their own problems. But at least there was a functioning value exchange: publishers created content, users visited sites, advertisers paid for attention, and the cycle continued.
LLMs have broken that cycle. They extract content at massive scale (remember that 50,000:1 ratio), synthesise it into summaries of varying accuracy, and serve those summaries in place of the original sources. The original creators get a 46.7% cut in their click-through rates and a polite suggestion that they should be grateful for the remaining scraps.
This isn't sustainable. You can't destroy the revenue model that funds quality journalism and expect quality journalism to continue existing. You can't train models on publisher content while simultaneously eliminating the business model that pays for that content creation.
Just look at The Sun or The Daily Mail—once giants of UK journalism, now shadows of their former selves... but probably not because of AI.
Either AI companies need to actually compensate publishers for training data and attribution, or we'll end up with an internet where the only content being created is whatever can survive on zero traffic and no revenue. That's a race to the bottom that ends with LLMs trained on increasingly low-quality content, producing increasingly unreliable outputs, to serve users who can no longer find accurate information anywhere.
The 600 million lost visits aren't coming back. The question is whether we're going to build a system that actually compensates the people creating the content, or whether we're going to watch the entire ecosystem collapse while AI companies celebrate their ad revenue growth.
I know which outcome I'd bet on, and it's not the optimistic one.
References
1. WION News, "Google's AI search under fire for falsely claiming Obama is Muslim, astronauts played with cats on Moon", May 2024
2. NBC News, "Glue on pizza? Google's AI faces social media mockery", May 2024
3. TechCrunch, "Google fixes bug that led AI Overviews to say it's now 2024", May 2025
4. Infactory, "The Numbers Don't Lie: Publishers Are Losing 25% of Their Traffic to AI Platforms", 2025
5. IAB Tech Lab, "LLM Framework", 2025
6. IAB Tech Lab, "Dude, AI ate my traffic", 2025
7. StanVentures, "Similarweb: Zero-Click Searches Surge to 69% Since Google AI Overviews Launched", 2025
8. Pew Research Center, "Google users are less likely to click on links when an AI summary appears in the results", July 2025
9. Incrmntal, "LLM Advertising", 2025
10. Cloudflare, "AI crawler traffic by purpose and industry", 2025
11. Multiple sources on published LLM hallucination benchmarks, 2025
12. Carnegie Mellon University Software Engineering Institute, "Evaluating LLMs for Text Summarisation: Introduction", 2025
13. Cloudflare, "Cloudflare just changed how AI crawlers scrape the internet at large", 2025