12 Signs an X (Twitter) Article Was Written by AI
“Slop” was Merriam-Webster’s 2025 Word of the Year. Defined as “digital content of low quality produced usually in quantity by means of artificial intelligence,” the term captured a growing frustration that AI-generated filler was drowning out real writing across the internet.
X Articles are one of the worst-hit formats. An Ahrefs study of 900,000 new web pages found 74.2% contained AI-generated content. More than 150,000 long-form posts are published on X daily, and anyone can generate a 2,000-word article with ChatGPT in 30 seconds and publish it with zero editorial oversight.
We built an AI agent (internally called Marcus Webb) specifically to detect this. After scoring 700+ X articles, it has learned to spot the patterns that separate genuine writing from AI-generated filler. Here are the 12 most reliable tells.
1. The generic hook opener
“In today’s fast-paced world of technology…”
“Have you ever wondered what the future of AI looks like?”
“The crypto landscape is evolving at an unprecedented pace.”
Real writers start with a specific claim, a personal story, or a provocative statement. AI starts with throat-clearing. If the first sentence could apply to literally any article on the topic, that’s a red flag.
2. Suspiciously comprehensive coverage
An article that covers every aspect of a topic in roughly equal depth, without going genuinely deep on any of them, is almost certainly generated. Real experts focus on what they know best and skip the rest. AI tries to cover everything because it has no sense of what’s important.
A 2,000-word article about “the state of DeFi” that gives equal weight to lending, DEXes, derivatives, insurance, and real-world assets isn’t thorough. It’s shallow across five topics instead of deep on one.
3. Zero first-hand experience
This is the biggest tell. Read the article and ask: did the author do, see, or experience anything they’re writing about? Or are they summarizing and commenting on things from the outside?
AI-generated articles almost never contain:
- “I tried this and here’s what happened”
- Specific tool names with real opinions about them
- Concrete numbers from personal projects
- References to conversations, meetings, or events the author attended
- Mistakes the author made and what they learned
If every sentence could have been written by someone who has never touched the subject, it probably was.
4. The “balanced” fence-sitting
“While there are certainly pros and cons to this approach…”
“On one hand… but on the other hand…”
“There are valid arguments on both sides.”
AI loves to hedge because hedging is safe. Real writers take positions. An article about a controversial topic that never actually picks a side is a strong AI signal: it's the writing equivalent of saying nothing in 2,000 words.
5. Lists of obvious points
“10 Things You Need to Know About AI Agents”
And then every point is something anyone who’s read two articles on the topic already knows. The structure isn’t the problem. It’s that each bullet adds zero insight beyond the headline.
Genuine listicles work when each point reveals something surprising or specific. AI listicles are the ones where you skim the headings and realize you already know everything they're about to say.
6. Perfectly even section lengths
Open an article and squint at the structure. If every section is almost exactly the same length (3 paragraphs each, or 150 words each), that’s a template. Real writing is messy. Some points need more explanation than others. A section that’s twice as long as the rest usually means the author cared more about that part.
Uniform structure is a sign of “fill this template” writing, whether by AI or by a human following an AI’s outline.
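The squint test can be approximated numerically. A minimal sketch (a toy heuristic, not a production check): compute the coefficient of variation of section word counts. Values near zero mean suspiciously uniform sections.

```python
import statistics

def section_uniformity(sections):
    """Coefficient of variation of section word counts.
    Near 0 = template-even sections; higher = messier, more human."""
    counts = [len(s.split()) for s in sections]
    mean = statistics.mean(counts)
    return statistics.stdev(counts) / mean if mean and len(counts) > 1 else 0.0

# Template-even structure: every section ~150 words
uniform = [" ".join(["word"] * n) for n in (150, 151, 149, 150)]
# Messy human structure: one section twice as long as the rest
messy = [" ".join(["word"] * n) for n in (90, 310, 140, 60)]

print(section_uniformity(uniform))  # close to 0
print(section_uniformity(messy))   # much larger
```

Splitting an article into sections (by headings or blank lines) is left out here; the point is that uniformity is cheap to measure once you have the sections.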
7. No specific numbers, dates, or names
“Many companies are adopting this technology.”
Which companies? How many? When?
“Studies show that AI improves productivity.”
Which studies? By how much? Measured how?
AI writes in generalities because it can’t cite what it doesn’t know. When you read an entire article and can’t find a single specific number, date, or named source, something is off.
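These questions can be asked mechanically, too. A rough sketch using regex heuristics only (the sample sentences and counts are illustrative; a real check would use proper named-entity recognition): count numbers, four-digit years, and capitalized mid-sentence words as a crude proxy for named sources.

```python
import re

def specificity_signals(text):
    """Crude counts of concrete details: numbers, four-digit years,
    and capitalized words mid-sentence (a rough proxy for names)."""
    numbers = re.findall(r"\b\d[\d,.]*\b", text)
    years = [n for n in numbers if re.fullmatch(r"(19|20)\d\d", n)]
    names = re.findall(r"(?<=[a-z,] )[A-Z][a-z]+", text)
    return {"numbers": len(numbers), "years": len(years), "names": len(names)}

# Toy examples (the second sentence is invented for illustration)
vague = "Many companies are adopting this technology. Studies show that AI improves productivity."
specific = "In 2023, Stripe cut review time 41% after adopting the tool."

print(specificity_signals(vague))    # all zeros
print(specificity_signals(specific))
```

An article-length text where all three counts stay at zero is exactly the "no specifics" pattern described above.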
8. The recycled conclusion
Read the conclusion and then re-read the introduction. If they say the same thing in slightly different words, and neither says anything the other doesn’t, the article has no actual arc. It started with a thesis, restated it five ways, and then restated it again as the conclusion.
This is AI’s default structure: introduction (state thesis), body (restate thesis with examples), conclusion (restate thesis again). No building, no evolution, no “here’s what I realized while writing this.”
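One crude way to mechanize this check: measure word overlap between the opening and closing paragraphs. A toy sketch using Jaccard similarity on word sets (a real comparison would use sentence embeddings; the sample strings are invented):

```python
def jaccard(a, b):
    """Word-overlap similarity between two passages, from 0 to 1."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

intro = "ai agents will transform how teams ship software"
conclusion = "in conclusion ai agents will transform how teams ship software"

print(jaccard(intro, conclusion))  # high overlap: recycled conclusion
```

A high score between intro and conclusion means the article ended where it began, with no arc in between.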
9. Corporate vocabulary that nobody uses in conversation
A Max Planck Institute study found that words like “delve,” “robust,” and “pivotal” spiked over 50% in published text after ChatGPT’s release. The word “delves” specifically increased over 6,000% in scientific abstracts from 2020 to 2024.
The top AI tell-words:
- Verbs: “delve,” “leverage,” “utilize,” “harness,” “streamline,” “foster”
- Adjectives: “pivotal,” “robust,” “innovative,” “seamless,” “cutting-edge,” “nuanced”
- Nouns: “landscape,” “tapestry,” “synergy,” “testament”
- Transitions: “furthermore,” “moreover,” “consequently,” “notably”
- “It’s worth noting that…” (filler)
- “In conclusion” (tells you the AI is wrapping up its template)
Why does ChatGPT “delve” so much? FSU researchers found that OpenAI outsourced RLHF annotation to Kenya and Uganda, where “delve” is far more common in formal English. The annotators naturally preferred phrasing that matched their register, and the model learned accordingly.
One or two of these words is normal. Five or more in a single article is a pattern.
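The word list above makes for a quick screening script. A minimal sketch (the five-word threshold is the rule of thumb from this section, not a calibrated cutoff):

```python
import re
from collections import Counter

TELL_WORDS = {
    "delve", "delves", "leverage", "utilize", "harness", "streamline",
    "foster", "pivotal", "robust", "innovative", "seamless",
    "cutting-edge", "nuanced", "landscape", "tapestry", "synergy",
    "testament", "furthermore", "moreover", "consequently", "notably",
}

def tell_word_hits(text):
    """Return AI tell-words found in text, with counts."""
    tokens = re.findall(r"[a-z]+(?:-[a-z]+)?", text.lower())
    return Counter(t for t in tokens if t in TELL_WORDS)

sample = ("Furthermore, this pivotal shift in the landscape is a testament "
          "to how robust, seamless tools can foster synergy.")
hits = tell_word_hits(sample)
print(sum(hits.values()))  # 8 hits in one sentence: well past the threshold
```

Any of these words is fine in isolation; it is the density that gives the pattern away.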
10. No rough edges
Real writing has personality. The author gets excited about something and spends too long on it. They use a weird metaphor that only half-works. They start a sentence one way and end it another. They have a tic, a repeated phrase, a distinctive rhythm.
AI-generated text is smooth. Every sentence flows into the next. Every transition is clean. It reads like a textbook, not like a person. The absence of rough edges IS the tell.
11. The “helpful” tone that helps nothing
“By following these steps, you can significantly improve your workflow.”
“Understanding these concepts is crucial for anyone working in this space.”
“This approach can help you achieve better results.”
Notice how none of these sentences contain actual information. They’re cheerful, encouraging, and completely empty. AI loves to sound helpful without actually helping. If you can delete a sentence and the article loses zero information, that sentence was padding.
12. It’s about a trending topic and adds nothing to it
The most common AI slop pattern on X: someone sees a trending topic (a new AI model, a market crash, a regulatory announcement), generates a 1,500-word article summarizing what happened, adds generic analysis, and publishes it within hours.
The tell: the article contains no information you couldn’t get from reading the top 3 tweets about the same topic. It exists purely because the topic is trending, not because the author had something to add.
How we check for this automatically
Our rating system has an Originality agent that checks for all 12 of these patterns on every article. Articles that score below 4 on originality are almost always exhibiting multiple AI tells. Articles that score 8+ are consistently genuine human writing with real perspective.
The data from 700+ ratings shows that about 1.8% of X articles are clearly AI slop (scoring below 3 overall). But the bigger problem is the ~56% that's "decent" but derivative: articles that might be human-written but read like they could have been generated. Competent, forgettable, and not worth 10 minutes of your time.
Check any article yourself: xdigestly.app/rate
Get the best X articles delivered weekly
Every Friday, the top-rated articles from X, scored by AI. No slop.