How Misinformation Spreads Online — And What Stops It
Misinformation doesn't spread randomly. It follows predictable patterns, exploits specific psychological vulnerabilities, and can be countered by tested interventions. Here's what the research shows.
The study of misinformation has matured from anecdote into rigorous social science. We now have large-scale datasets, natural experiments from platform policy changes, and randomised controlled trials of interventions. What emerges is a nuanced picture that defies simple explanations — and that has real implications for how individuals and platforms can respond.
Why Misinformation Spreads
The most influential finding in misinformation research is one that surprises most people when they first hear it: false information spreads faster and further than true information on social media, even after controlling for the age of accounts, follower counts, and network structure.
This finding comes from a landmark 2018 MIT study by Vosoughi, Roy, and Aral, published in Science. Analysing the diffusion of verified true and false stories on Twitter from 2006 to 2017, they found that false stories were 70% more likely to be retweeted than true ones, and that true stories took roughly six times as long as false ones to reach the same number of people. Bots spread true and false news at similar rates; the difference was driven by human sharing behaviour.
Why do humans share false information more? Three explanations have strong support; the best supported is novelty.

Novelty. False information tends to be more novel than true information (the world is stranger in false claims than in real ones), and novelty increases engagement. Social media algorithms that reward engagement amplify this effect.
Emotional arousal. Content that induces high-arousal emotions (fear, outrage, disgust, excitement) is shared more than content that induces low-arousal emotions, and misinformation disproportionately triggers high-arousal emotions. That is not because false claims are inherently more emotional; it is partly because purveyors of misinformation deliberately craft content to maximise sharing, and emotional arousal is an effective tool for doing so.
Prior belief congruence. We are all more likely to share information that confirms what we already believe, and less likely to scrutinise it when it arrives. This is motivated reasoning: our preference for congenial conclusions shapes how we evaluate evidence. Misinformation that fits a prior narrative gets a pass that contradicting evidence doesn't.
Structural Amplifiers
Individual psychology doesn't fully explain misinformation propagation. Structural features of information systems do significant work.
Algorithmic recommendation. Platform algorithms that optimise for engagement time systematically surface high-engagement content. Since misinformation often has higher engagement than accurate information (due to emotional arousal and novelty), algorithmic recommendation amplifies it.
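To make that feedback loop concrete, here is a minimal sketch in Python of a feed ranked purely by predicted engagement. The posts, weights, and scoring function are invented for illustration; no real platform's ranking model is being described.

```python
# Toy model: rank a feed purely by predicted engagement.
# All posts, scores, and weights are invented for illustration.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    accurate: bool
    novelty: float   # 0..1, how surprising the claim is
    arousal: float   # 0..1, how emotionally charged it is

def predicted_engagement(post: Post) -> float:
    # Hypothetical engagement model: novelty and arousal drive
    # clicks and shares; accuracy does not enter the score at all.
    return 0.6 * post.arousal + 0.4 * post.novelty

feed = [
    Post("Routine budget report published", accurate=True, novelty=0.2, arousal=0.1),
    Post("Study replicates earlier finding", accurate=True, novelty=0.3, arousal=0.2),
    Post("Shocking secret THEY don't want you to see", accurate=False, novelty=0.9, arousal=0.95),
]

# Engagement-optimised ranking: the false, high-arousal post floats
# to the top, earning more impressions on the next cycle.
for post in sorted(feed, key=predicted_engagement, reverse=True):
    print(f"{predicted_engagement(post):.2f}  accurate={post.accurate}  {post.text}")
```

Because accuracy never enters the score, the false, high-arousal post takes the top slot, earns more impressions, and generates more shares to feed the next ranking cycle.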
Network homophily. People tend to connect with others who share similar views. This creates "echo chambers" — networks where information circulates within a community of the like-minded. Within these networks, misinformation is rarely challenged and frequently reinforced.
Low-friction sharing. Retweet and share buttons make it trivially easy to propagate content without reading it, without pausing, and without reflection. Studies find that a significant proportion of shares come from users who have not read the linked content.
What Interventions Work
This is where the research becomes genuinely hopeful — interventions exist that demonstrably reduce misinformation sharing.
Accuracy prompts. Research by Gordon Pennycook and David Rand found that simply asking users to consider whether content is accurate before sharing (a single question posed at the start of a session or before sharing) significantly reduced sharing of misinformation without reducing sharing of accurate content. The mechanism: people already know accuracy matters, but the sharing environment doesn't prime them to think about it. A nudge is enough.
Friction. Adding a small delay or confirmation step before sharing — making users click twice instead of once — modestly reduces sharing of all content, with disproportionate effects on misinformation because it creates a moment of reflection.
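As a sketch of how these two interventions might slot into a sharing flow, here is hypothetical Python pseudo-product code. The function names, prompt wording, and three-second delay are assumptions for illustration, not any platform's actual implementation.

```python
# Sketch of a share flow combining an accuracy prompt with friction.
# Everything here (names, prompt wording, the 3-second pause) is a
# hypothetical illustration, not a real platform's implementation.

import time

def accuracy_prompt() -> bool:
    # The nudge from the accuracy-prompt research: one question that
    # shifts attention to accuracy before the share goes through.
    answer = input("Do you think this content is accurate? (yes/no/unsure) ")
    return answer.strip().lower() == "yes"

def share_with_friction(post_url: str) -> None:
    print(f"You are about to share: {post_url}")

    # Friction: a short mandatory pause creates a moment of reflection.
    time.sleep(3)

    if not accuracy_prompt():
        print("Share cancelled. Consider checking the claim first.")
        return

    # Second click: confirming is a separate action from initiating.
    if input("Share now? (yes/no) ").strip().lower() == "yes":
        print("Shared.")
    else:
        print("Share cancelled.")

share_with_friction("https://example.com/article")
```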
Prebunking (inoculation theory). Exposing people to weakened versions of misinformation techniques — not the specific false claims, but the rhetorical devices used to deceive — makes them more resistant to those techniques when they encounter them in the wild. Google's Jigsaw unit has deployed prebunking campaigns that showed measurable effects in randomised trials.
Social norms. Misinformation sharing drops when people believe their peers would disapprove of sharing inaccurate content. Interventions that make accurate sharing socially normative, such as labels that pair "your connection X shared this" with a note on the content's accuracy, modestly reduce misinformation spread.
Source expertise labels. Displaying verified credentials for sources (official health authority, peer-reviewed journal) reduces engagement with non-expert counter-claims.
What Doesn't Work as Expected
The most common intuitive response to misinformation — simply correcting it — is less effective than most people assume.
Corrections work, but imperfectly. A correction does reduce belief in a false claim, on average. But the effect is modest and decays over time. People who encounter a correction may still carry residual belief in the original claim (researchers call this the continued influence effect), particularly if the false version was emotionally resonant or if they encountered it multiple times before the correction.
Repeating the misinformation in corrections can backfire. Counter-intuitively, corrections that prominently restate the false claim before refuting it may actually reinforce it through repeated exposure: the claim's familiarity gets refreshed even while it is being denied.
No single platform-level policy solves the problem. Content moderation, labels, and demotion each reduce specific forms of misinformation spread, but the effect sizes are modest and enforcement is inconsistent. The problem is structural enough that it can't be solved by any single intervention.
The Individual Response
What this research collectively suggests for individual news consumers:
- Pause before sharing. The accuracy-prompt research suggests that even a brief pause for reflection reduces misinformation sharing without reducing accurate sharing.
- Read before sharing. Studies show a large proportion of misinformation shares come from people who didn't read the content. Make reading a prerequisite for sharing.
- Check surprising claims against primary sources. Misinformation is disproportionately novel. When a claim surprises you, that's the moment to seek corroboration before spreading it.
- Recognise your own motivated reasoning. We all scrutinise claims least when they confirm our priors. That's precisely when scrutiny is most important.
Sources: Vosoughi, Roy & Aral, "The spread of true and false news online," Science (2018); Pennycook & Rand, "Real Solution for Fake News" (2019); Google Jigsaw prebunking research (2022-24).