5 Signs an X Article Is Actually Worth Reading (Before You Click)
X ran a $1M competition for the best article in January 2026. Thousands were submitted. Millions of others are published every week without any competition at all, and most of them aren’t worth 10 minutes of your time.
The problem isn’t that good articles don’t exist on X. They do, and we’ve rated over 700 of them to prove it. The problem is that you can’t tell if an article is worth reading until you’ve already spent 8 minutes reading it. By then, the time is gone.
Here are 5 signals you can check in under 30 seconds, before you commit.
1. Does the first paragraph make a specific claim?
Read just the opening lines.
Good openers make a specific, contestable claim. “The model I used for the past 6 months just got quietly discontinued, and nobody’s talking about the fallout.” That’s specific. You either agree it matters or you don’t.
Bad openers are generic and could apply to any article on the same topic. “AI is changing the way we think about information.” That sentence could open a thousand different articles. It’s a placeholder, not an argument.
The first paragraph is where writers signal whether they have something to say or they’re going through the motions. If the opening is vague, the rest usually is too.
2. Is there at least one number, date, or named source?
Specificity is the fastest quality signal in writing. Numbers, dates, and named sources can be verified. They require the author to know something real, not just summarize in generalities.
“Many companies are adopting this technology.”
versus
“Stripe processed $1.4 trillion in payment volume in 2024, up from $817 billion in 2022.”
Both sentences are claims. Only one of them means anything.
AI-generated text and low-effort human writing share the same failure: they operate at the level of generalities. If you scan an entire article and find no specific numbers, named people, or dated events, that’s a red flag, regardless of who or what wrote it.
3. What’s the bookmark-to-like ratio?
An article with 500,000 impressions and 40 bookmarks is probably clickbait. An article with 4,000 impressions and 200 bookmarks is probably genuinely useful.
Bookmarks mean someone thought “I need to come back to this.” That’s a deliberate act. Likes are cheap: most people tap them mid-scroll, not after a careful read.
A high bookmark count relative to likes and reposts is the closest thing X gives you to a quality signal. It’s imperfect, but it correlates strongly with articles that score well in our rating system.
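The ratio logic above can be sketched as a small heuristic. This is a minimal illustration, not our actual rating system: the function name, the like counts in the examples, and the threshold values (1% bookmark rate, 0.5 bookmarks-per-like) are all assumptions chosen to match the two scenarios in the text.

```python
def bookmark_signal(impressions: int, likes: int, bookmarks: int) -> str:
    """Rough quality guess from bookmark ratios.

    Thresholds are illustrative assumptions, not calibrated values.
    """
    if impressions == 0:
        return "no data"
    bookmark_rate = bookmarks / impressions   # bookmarks per impression
    per_like = bookmarks / max(likes, 1)      # bookmarks relative to likes
    if bookmark_rate >= 0.01 or per_like >= 0.5:
        return "likely useful"
    if bookmark_rate < 0.001 and impressions > 100_000:
        return "likely clickbait"
    return "unclear"

# The two examples from the text (like counts are invented for illustration):
bookmark_signal(500_000, 2_000, 40)   # huge reach, almost no saves
bookmark_signal(4_000, 300, 200)      # small reach, heavily saved
```

The exact cutoffs matter less than the shape of the comparison: bookmarks are measured against reach and against likes, never in isolation.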
4. Does the author have any track record on this topic?
Check the profile. Do they actually work in the area they’re writing about? Do they have other articles on the same subject? Is there any reason to believe they have real experience here, or did they just decide to write about a trending topic this week?
Writers with genuine expertise on a topic have a track record. They’ve been building in this space, or writing about it, or arguing about it publicly for a while. The article isn’t the first thing they’ve ever said on the subject.
Accounts publishing high-volume content across many unrelated topics are a reliable low-quality signal. Not always, but often enough to use as a filter.
5. Does the title make a specific promise it has to keep?
Good article titles are commitments. “How We Increased Retention by 34% Without Changing the Product” commits to explaining a specific outcome. “The Future of AI in Business” commits to nothing.
Listicle titles work when the number is part of the promise. “7 Signs Your API Is About to Break” is a promise: here are 7 specific, checkable things. “10 Things You Should Know About DeFi” is vague: it could be anything, and the author can fill it with anything.
The quickest version of this filter: would you know if the article failed to deliver on its title? If the answer is no, the title didn’t promise anything real.
The 30-second check
In practice: read the first paragraph, scan for one specific number or named source, check the bookmark count. That’s usually enough to make the call.
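The five signals reduce to a simple checklist you answer yes or no. A minimal sketch, with thresholds that are my own illustrative assumptions rather than anything calibrated:

```python
# The five signals from this article, phrased as yes/no questions.
CHECKS = [
    "Does the first paragraph make a specific, contestable claim?",
    "Is there at least one number, date, or named source?",
    "Is the bookmark count high relative to likes?",
    "Does the author have a track record on this topic?",
    "Does the title make a promise you could verify was kept?",
]

def quick_verdict(answers: list[bool]) -> str:
    """Turn five yes/no answers into a read/skip call.

    Cutoffs (4+ read, 2-3 maybe) are illustrative, not calibrated.
    """
    passed = sum(answers)
    if passed >= 4:
        return "read it"
    if passed >= 2:
        return "maybe"
    return "skip"

# Example: strong opener and specifics, weak everything else.
quick_verdict([True, True, False, False, False])
```

Any scoring scheme like this is a rough filter, which is the point: it is meant to be run in 30 seconds, not to be precise.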
We built xdigestly.app/rate as the automated version of this check. Paste any X article URL and four AI agents run a full credibility, originality, depth, and reader value analysis in under a minute. The human version above is faster. The AI version is more consistent and catches patterns you’d miss skimming.
Both are better than spending 10 minutes on something that wasn’t worth it.
Try it: Rate any X article or browse the Discover feed for articles that already passed the check.
Get the best X articles delivered weekly
Every Friday, the top-rated articles from X, scored by AI. No slop.