
The Dead Internet Problem Is Real — And X Articles Are Ground Zero

XDigestly

In September 2025, Sam Altman posted something that got 5.7 million views:

“I never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now”

The dead internet theory holds that most online content, and most engagement with it, is no longer generated by humans. Altman, whose company makes the tools most responsible for that content, was admitting the theory might have a point.

The irony landed hard: the CEO of OpenAI, surveying a platform full of AI-generated content, expressing surprise. But the data underneath the post is worth taking seriously. And on X Articles specifically, the picture is worse than most people realize.

What the numbers actually show

An Ahrefs study analyzed 900,000 new web pages in April 2025 and found 74.2% contained AI-generated content. That’s across the web broadly.

Merriam-Webster named “slop” Word of the Year in 2025, defining it officially as “digital content of low quality produced usually in quantity by means of artificial intelligence.”

On X Articles specifically, the format has features that make it unusually attractive for AI content operations:

  • No editorial process: anyone can publish anything instantly
  • Articles get algorithmic amplification that standard posts don’t
  • Engagement drives revenue sharing for Premium account holders
  • The barrier to production is effectively zero: ChatGPT generates a 2,000-word article in under a minute

The incentive math is simple. Produce 20 AI articles a day across 10 accounts and some will perform; the ones that do generate revenue, and the ones that don't cost nothing to make. Volume wins.
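The incentive math above can be sketched as a back-of-envelope expected-value calculation. Every number here (hit rate, payout, cost) is an illustrative assumption, not a platform figure; the point is only that with near-zero marginal cost, any nonzero hit rate makes volume pay.

```python
# Back-of-envelope version of the incentive math: expected daily
# revenue from a content farm. All inputs are illustrative assumptions.
def daily_expected_revenue(articles: int, hit_rate: float,
                           revenue_per_hit: float, cost_each: float) -> float:
    """Expected revenue minus production cost, per day."""
    return articles * (hit_rate * revenue_per_hit - cost_each)

# With cost_each near zero, revenue scales linearly with volume:
daily_expected_revenue(20, 0.05, 10.0, 0.0)   # 20 articles/day
daily_expected_revenue(200, 0.05, 10.0, 0.0)  # 10x the volume, 10x the return
```

Because the cost term is effectively zero, there is no volume at which the strategy stops paying, which is exactly why it floods the platform.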

What our data shows from 700+ articles

Our rating system checks every article across four AI agents: credibility, originality, depth, and reader value. After 700+ articles rated, the distribution looks like this:

  • ~26% score “Worth It” or above — genuinely good content worth your time
  • ~56% score “decent” — competent, forgettable, not particularly useful
  • ~16% score “Filler” or “Skip” — real quality problems, often AI-generated or heavily derivative
  • ~1.8% score below 3 overall — clearly AI slop with multiple tell-patterns

The 1.8% hard floor is smaller than the media narrative would suggest. The bigger problem is the 56% in the middle: articles that may be human-written but read like they could have been generated. Competent, generic, and not worth 10 minutes of your time even if a human spent 2 hours writing them.

Why X’s algorithm makes this worse

Clay Shirky put it well in 2008, before the AI flood existed: “It’s not information overload. It’s filter failure.”

X’s algorithm optimizes for engagement, not quality. A mediocre article with a good hook can outperform a genuinely researched piece, because the hook drives clicks, and clicks drive the metrics X cares about. The algorithm has no mechanism for detecting whether an article actually delivered on its opening promise.

X’s own “Top Articles” discovery feature was quietly removed in January 2026, around the same time Musk announced the $1M prize for best article. The platform is actively incentivizing article creation while removing the main native discovery tool for finding quality ones. More content, less signal.

Herbert Simon described the underlying dynamic in 1971: “A wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.” He was writing about early computer systems. The same mechanics have reproduced at a scale he couldn’t have imagined.

How to filter it yourself

The platform isn’t going to solve this for you, at least not anytime soon. What actually helps:

Check the author’s publishing pattern. Accounts publishing 15 articles a week across unrelated topics are almost certainly using AI as the primary writer. Writers who publish one or two articles a month on a specific subject they clearly work in are far more likely to be genuine.
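The publishing-pattern check above reduces to two signals: unusually high weekly volume and unrelated topics. A minimal sketch, where the thresholds (15 articles/week, more than 5 distinct topics) follow the rule of thumb in the text and the input data structure is an assumption for illustration:

```python
# Sketch of the publishing-pattern heuristic: flag accounts that
# combine high weekly volume with scattered, unrelated topics.
# Thresholds are the article's rules of thumb, not tuned values.
from collections import Counter

def publishing_red_flag(articles: list[dict]) -> bool:
    """articles: [{'week': int, 'topic': str}, ...] for one account."""
    per_week = Counter(a["week"] for a in articles)
    topics = {a["topic"] for a in articles}
    high_volume = any(count >= 15 for count in per_week.values())
    scattered = len(topics) > 5  # many unrelated subjects
    return high_volume and scattered
```

Either signal alone is weak (a prolific specialist, or a generalist who writes rarely); it is the combination that points at an AI-driven operation.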

Look for first-hand specificity. AI can’t describe something it experienced, because it experienced nothing. Any sentence that describes a real conversation, a specific mistake, or a personal outcome with concrete numbers is a positive signal. Generic claims with no named sources or verifiable numbers are a red flag.

Check the originality score. Our rating system has an Originality agent that checks specifically for AI patterns: uniform section structure, hedging language, generic openers, zero first-hand detail, corporate vocabulary that nobody uses in conversation. Articles scoring 4 or below on originality are almost always exhibiting multiple tells. Articles scoring 8+ are consistently genuine writing with real perspective.
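One of the originality signals named above, uniform section structure, can be approximated mechanically: sections whose lengths barely vary are a weak AI tell. A toy version, where the variance threshold is an illustrative assumption and not any rating system's actual parameter:

```python
# Toy check for one originality signal: suspiciously uniform section
# lengths. The coefficient-of-variation threshold is an illustrative
# assumption, not a calibrated value.
import statistics

def sections_look_uniform(section_word_counts: list[int],
                          max_cv: float = 0.15) -> bool:
    """Flag when section lengths vary very little (low CV)."""
    if len(section_word_counts) < 3:
        return False  # too few sections to judge
    mean = statistics.mean(section_word_counts)
    cv = statistics.pstdev(section_word_counts) / mean
    return cv <= max_cv
```

Human writing tends to sprawl: one section runs long because the author cared, another is two sentences. Near-identical section lengths suggest a template filled in by a model.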

Watch for the vocabulary tells. Max Planck Institute researchers found that words like “delve,” “robust,” and “pivotal” spiked over 50% in published text after ChatGPT’s release. “Delves” specifically increased over 6,000% in scientific abstracts from 2020 to 2024. One or two of these in an article is normal. Five or more is a pattern.
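The vocabulary-tell rule above (one or two is normal, five or more is a pattern) is simple enough to sketch. The word list here contains only the tells the research named plus their inflections; a real checker would use a longer, maintained list:

```python
# Minimal sketch of the vocabulary-tell check. The word set is limited
# to the tells cited above ("delve", "robust", "pivotal") and their
# inflections; the threshold of five is the article's rule of thumb.
import re

TELL_WORDS = {"delve", "delves", "delving", "robust", "pivotal"}

def count_tells(text: str) -> int:
    """Count occurrences of AI-associated tell words."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(1 for w in words if w in TELL_WORDS)

def looks_like_a_pattern(text: str, threshold: int = 5) -> bool:
    """One or two tells is normal usage; five or more is a pattern."""
    return count_tells(text) >= threshold
```

None of these words is damning on its own; the signal is frequency across a single piece, which is why the check counts rather than matches.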


The dead internet problem isn’t going away. The tools are too easy, the incentives are too well-aligned for content farms, and the platform infrastructure doesn’t currently select against it. But the filter failure is a solvable problem at the individual level, even if the content flood isn’t solvable at the platform level.

The top 26% of X articles, the “Worth It” and above tier in our system, is genuinely excellent. Specific, original, well-reasoned writing you won’t find anywhere else. The gap between that and everything else is getting wider, not narrower. Finding those articles is the whole problem.

Check any article yourself: xdigestly.app/rate — or browse the Discover feed where every article has already been scored for originality before it appears.

Get the best X articles delivered weekly

Every Friday, the top-rated articles from X, scored by AI. No slop.