“It’s sort of disheartening at first to realize how much we humans are responsible,” said Sinan Aral, a professor at the M.I.T. Sloan School of Management and an author of the study. “It’s not really the robots that are to blame.”
Here are other findings from the research.
Covering the history of Twitter
The research, published on Thursday in the journal Science, examined true and false news stories posted on Twitter from the social network’s founding in 2006 through 2017. The study’s authors tracked 126,000 stories tweeted by roughly three million people more than 4.5 million times. “News” and “stories” were defined broadly — as claims of fact — regardless of the source. And the study explicitly avoided the term “fake news,” which, the authors write, has become “irredeemably polarized in our current political and media climate.”
The stories were classified as true or false, using information from six independent fact-checking organizations including Snopes, PolitiFact and FactCheck.org. To ensure that their analysis held up in general — not just on claims that drew the attention of fact-checking groups — the researchers enlisted students to annotate as true or false more than 13,000 other stories that circulated on Twitter. Again, a tilt toward falsehood was clear.
The way information flows online — and, occasionally, spreads rapidly like a virus — has been studied for decades. There have also been smaller studies examining how true and false news and rumors propagate across social networks. But experts in network analysis said the M.I.T. study was larger in scale and well designed.
“The comprehensiveness is important here, spanning the entire history of Twitter,” said Jon Kleinberg, a computer scientist at Cornell University. “And this study shines a spotlight on the open question of the success of false information online.”
Novelty wins retweets
The M.I.T. researchers pointed to factors that contribute to the appeal of false news. Applying standard text-analysis tools, they found that false claims were significantly more novel than true ones — maybe not a surprise, since falsehoods are made up.
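One crude way to picture what a novelty measure does: a claim whose words rarely overlap with the tweets a user has already seen scores as more novel. The study itself used more rigorous information-theoretic measures; the sketch below, with hypothetical example tweets, only illustrates the idea.

```python
# Illustrative novelty score: 1 minus the fraction of a claim's words
# that already appeared in a user's prior tweets. This is a toy stand-in
# for the study's actual text-analysis measures.

def novelty(claim, prior_tweets):
    """Return 1 - (fraction of the claim's words seen before)."""
    claim_words = set(claim.lower().split())
    seen_words = set()
    for tweet in prior_tweets:
        seen_words.update(tweet.lower().split())
    if not claim_words:
        return 0.0
    overlap = len(claim_words & seen_words) / len(claim_words)
    return 1.0 - overlap

history = ["markets rose today", "rain expected this weekend"]
print(novelty("markets rose today", history))             # 0.0 (nothing new)
print(novelty("aliens endorse pajama startup", history))  # 1.0 (entirely new)
```

A fully repeated claim scores 0.0 and an entirely unseen one scores 1.0, matching the intuition that fabricated stories, being made up, tend to look newer than true ones.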
The study’s authors also explored the emotions evoked by false and true stories. The goal, said Soroush Vosoughi, a postdoctoral researcher at the M.I.T. Media Lab and the lead author, was to find clues about what is “in the nature of humans that makes them like to share false news.”
The study analyzed the sentiment expressed by users in replies to claims posted on Twitter. As a measurement tool, the researchers used a system created by Canada’s National Research Council that associates English words with eight emotions. False claims elicited replies expressing greater surprise and disgust. True news inspired more anticipation, sadness and joy, depending on the nature of the stories.
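Lexicon-based emotion scoring of this kind is straightforward to sketch: count how often emotion-bearing words appear in the replies to a claim. The tiny word-to-emotion map below is a hypothetical stand-in, not the actual National Research Council lexicon, and the example replies are invented.

```python
# A minimal sketch of lexicon-based emotion scoring, in the spirit of the
# eight-emotion word association system the study used. The mini-lexicon
# here is illustrative only.
from collections import Counter

# Hypothetical excerpt: each word maps to the emotions it signals.
EMOTION_LEXICON = {
    "shocking": {"surprise"},
    "unbelievable": {"surprise"},
    "gross": {"disgust"},
    "vile": {"disgust"},
    "hope": {"anticipation", "joy"},
    "soon": {"anticipation"},
    "tragic": {"sadness"},
    "wonderful": {"joy"},
}

def emotion_profile(replies):
    """Count emotion-bearing words across a set of reply texts."""
    counts = Counter()
    for reply in replies:
        for word in reply.lower().split():
            word = word.strip(".,!?")
            for emotion in EMOTION_LEXICON.get(word, ()):
                counts[emotion] += 1
    return counts

replies_to_false_claim = ["Shocking! Unbelievable.", "That is vile and gross."]
print(emotion_profile(replies_to_false_claim))
# Counter({'surprise': 2, 'disgust': 2})
```

Aggregating such profiles over the replies to many claims is what lets the researchers compare the emotional fingerprints of false stories (more surprise and disgust) against true ones (more anticipation, sadness and joy).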
Two stories: one true, one false
The researchers provided an example of two business stories, and how much more time it took the true one to reach 200 retweets. The example also shows the judgment calls made by fact-checking organizations.
• In 2014, the fashion chain Zara introduced children’s pajamas with horizontal stripes and a gold star. The company said the design was inspired by what a cowboy sheriff would wear. But Twitter users posted messages saying the pajamas resembled Nazi concentration camp uniforms. Snopes: True. Time to reach 200 retweets: 7.3 hours.
• In 2016, a website republished a portion of a satirical article about how the Chick-fil-A restaurant chain had decided to begin a “We don’t like blacks either” marketing campaign to stir up controversy and boost sales. It came after the company’s president did say he opposed gay marriage. Snopes: False. Time to 200 retweets: 4.2 hours.
What can be done?
The M.I.T. researchers said that understanding how false news spreads is a first step toward curbing it. They concluded that human behavior plays a large role in explaining the phenomenon, and mentioned possible interventions, like better labeling, to alter behavior.
For all the concern about false news, there is little certainty about its influence on people’s beliefs and actions. A recent study of the browsing histories of thousands of American adults in the months before the 2016 election found that false news accounted for only a small portion of the total news people consumed. “We have to be very careful about making the inference that fake news has a big impact,” said Duncan Watts, a principal researcher at Microsoft Research.
Another author of the M.I.T. study, Deb Roy, former chief media scientist at Twitter, is engaged in a project to improve the health of the information ecosystem. In fall 2016, Mr. Roy, an associate professor at the M.I.T. Media Lab, became a founder and the chairman of Cortico, a nonprofit that is developing tools to measure public conversations online to gauge attributes like shared attention, variety of opinion and receptivity. The idea is that improving the ability to measure such attributes would lead to better decision-making that would counteract misinformation.
Mr. Roy acknowledged the challenge of trying not only to alter individual behavior but also to enlist the support of big internet platforms like Facebook, Google, YouTube and Twitter, and of media companies.
“Polarization,” he said, “has turned out to be a great business model.”