Falsehoods almost always beat out the truth on Twitter, penetrating further, faster, and deeper into the social network than accurate information.
By ROBINSON MEYER, The Atlantic, 8 March 2018
“It seems to be pretty clear [from our study] that false information outperforms true information,” said Soroush Vosoughi, a data scientist at MIT who has studied fake news since 2013 and who led this study. “And that is not just because of bots. It might have something to do with human nature.”
Though the study is written in the clinical language of statistics, it offers a methodical indictment of the accuracy of information that spreads on these platforms. A false story is much more likely to go viral than a real story, the authors find. A false story reaches 1,500 people six times quicker, on average, than a true story does. And while false stories outperform the truth on every subject—including business, terrorism and war, science and technology, and entertainment—fake news about politics regularly does best.
Twitter users seem almost to prefer sharing falsehoods. Even when the researchers controlled for every difference between the accounts originating rumors—like whether that person had more followers or was verified—falsehoods were still 70 percent more likely to get retweeted than accurate news.
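That 70 percent figure comes from a comparison in which account characteristics are held fixed. The sketch below shows one standard way to make such a comparison: a logistic regression of retweet likelihood on a falsehood flag plus account covariates. It is a generic illustration of what "controlling for" differences means, not a reconstruction of the study's own model; the data, variable names, and numbers are all placeholders.

```python
# Generic illustration (synthetic data) of controlling for account differences:
# a logistic regression of whether a tweet gets retweeted on a falsehood flag
# plus account covariates. Not the study's model; all values are placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 5_000
is_false    = rng.integers(0, 2, n)       # 1 if the tweet carries a falsehood
followers   = rng.lognormal(6, 1.5, n)    # account covariates
verified    = rng.integers(0, 2, n)
account_age = rng.uniform(0.1, 10, n)     # years on the platform

# Simulate retweets with a built-in advantage for falsehoods (coefficient 0.5).
logit = -1.0 + 0.5 * is_false + 0.2 * verified + 0.1 * np.log(followers)
retweeted = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([is_false, np.log(followers), verified, account_age]))
fit = sm.Logit(retweeted, X).fit(disp=0)
print("odds ratio for falsehood:", np.exp(fit.params[1]))  # recovers roughly exp(0.5) ≈ 1.6
```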
“I think it’s very careful, important work,” Brendan Nyhan, a professor of government at Dartmouth College, told me. “It’s excellent research of the sort that we need more of.”
“In short, I don’t think there’s any reason to doubt the study’s results,” said Rebekah Tromble, a professor of political science at Leiden University in the Netherlands, in an email.
What makes this study different? In the past, researchers have looked into the problem of falsehoods spreading online. They’ve often focused on rumors around singular events, like the speculation that preceded the discovery of the Higgs boson in 2012 or the rumors that followed the Haiti earthquake in 2010.
On April 15, 2013, two bombs exploded near the route of the Boston Marathon, killing three people and injuring hundreds more. Almost immediately, wild conspiracy theories about the bombings took over Twitter and other social-media platforms. The mess of information only grew more intense on April 19, when the governor of Massachusetts asked millions of people to remain in their homes as police conducted a huge manhunt.
“I was on lockdown with my wife and kids in our house in Belmont for two days, and Soroush was on lockdown in Cambridge,” Deb Roy, a media scientist at MIT, told me. Stuck inside, the two men turned to Twitter as their lifeline to the outside world. “We heard a lot of things that were not true, and we heard a lot of things that did turn out to be true” using the service, he said.
The ordeal soon ended. But when the two men reunited on campus, they agreed it seemed silly for Vosoughi—then a Ph.D. student focused on social media—to research anything but what they had just lived through. Roy, his adviser, blessed the project.
They opted to turn to the ultimate arbiters of fact online: the third-party fact-checking sites. By scraping and analyzing six different fact-checking sites—including Snopes, PolitiFact, and FactCheck.org—they generated a list of tens of thousands of online rumors that had spread between 2006 and 2016 on Twitter. Then they searched Twitter for these rumors using Gnip, a proprietary search tool owned by the social network.
Ultimately, they found about 126,000 tweets, which, together, had been retweeted more than 4.5 million times. Some linked to “fake” stories hosted on other websites. Some started rumors themselves, either in the text of a tweet or in an attached image. (The team used a special program that could search for words contained within static tweet images.) And some contained true information or linked to it elsewhere.
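The labeling step at the heart of that process is easy to picture, even though the team's actual pipeline relied on Gnip data and far more careful matching. Below is a minimal, hypothetical sketch of how scraped fact-check verdicts might be attached to tweets; the class names, the keyword-overlap heuristic, and the threshold are all illustrative assumptions, not the study's code.

```python
# Hypothetical sketch: attach a fact-check verdict ("true"/"false") to each
# tweet by crude keyword overlap. Illustrative only; not the study's pipeline.
from dataclasses import dataclass

@dataclass
class FactCheck:
    claim_keywords: set[str]   # salient words from the fact-checked claim
    verdict: str               # "true" or "false"

@dataclass
class Tweet:
    tweet_id: str
    text: str
    retweet_count: int

def label_tweets(tweets: list[Tweet], fact_checks: list[FactCheck],
                 min_overlap: int = 3) -> dict[str, str]:
    """Assign each tweet the verdict of the fact-check it best matches."""
    labels: dict[str, str] = {}
    for tweet in tweets:
        words = set(tweet.text.lower().split())
        best = max(fact_checks,
                   key=lambda fc: len(fc.claim_keywords & words),
                   default=None)
        if best and len(best.claim_keywords & words) >= min_overlap:
            labels[tweet.tweet_id] = best.verdict
    return labels
```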
The researchers measured each rumor’s spread in two ways: the size of the audience it reached and the depth of its retweet chains. Imagine that Tweet A and Tweet B both have the same size audience, but Tweet B has more “depth,” to use Vosoughi’s term. It chained together retweets, going viral in a way that Tweet A never did. “It could reach 1,000 retweets, but it has a very different shape,” he said.
Here’s the thing: Fake news dominates according to both metrics. It consistently reaches a larger audience, and it tunnels much deeper into social networks than real news does. The authors found that accurate news wasn’t able to chain together more than 10 retweets. Fake news could put together a retweet chain 19 links long—and do it 10 times as fast as accurate news put together its measly 10 retweets.
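To make the two measures concrete, here is a small sketch of how audience size and chain depth could be computed from a cascade of retweet edges. The flat edge-list layout is an assumption made for illustration; the study's cascade reconstruction was considerably more involved.

```python
# Rough illustration of "size" (users reached) and "depth" (longest retweet
# chain) for one rumor cascade, stored as (retweeter, source) edges.
from collections import defaultdict

def cascade_metrics(edges: list[tuple[str, str]], root: str) -> tuple[int, int]:
    """Return (size, depth): users reached, and length of the longest chain."""
    children = defaultdict(list)
    for retweeter, source in edges:
        children[source].append(retweeter)

    size, depth = 0, 0
    stack = [(root, 0)]            # (user, distance from the original tweet)
    while stack:
        user, dist = stack.pop()
        size += 1
        depth = max(depth, dist)
        for child in children[user]:
            stack.append((child, dist + 1))
    return size, depth

# Two cascades can reach the same audience yet differ sharply in depth,
# like Tweet A and Tweet B above:
broad = [("u1", "root"), ("u2", "root"), ("u3", "root")]   # everyone retweets the original
deep  = [("u1", "root"), ("u2", "u1"), ("u3", "u2")]       # retweets of retweets
print(cascade_metrics(broad, "root"))   # (4, 1)
print(cascade_metrics(deep, "root"))    # (4, 3)
```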
One false story, for instance, was shared by roughly 38,000 Twitter users, and it put together a retweet chain three times as long as the sick-child story managed.
A false story alleging the boxer Floyd Mayweather had worn a Muslim head scarf to a Trump rally also reached an audience more than 10 times the size of the sick-child story.
The team wanted to answer one more question: Were Twitter bots helping to spread misinformation?
After using two different bot-detection algorithms on their sample of 3 million Twitter users, they found that the automated bots were spreading false news—but they were retweeting it at the same rate that they retweeted accurate information.
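The comparison behind that finding can be pictured as a simple rate calculation once every account carries a bot score: split retweets into bot and human groups, then compare the share of each group's retweets carrying false rumors. The snippet below is illustrative only, with made-up column names, a made-up threshold, and toy data.

```python
# Illustrative only: compare how often bot and human accounts retweet false
# vs. true rumors, assuming a bot classifier has already scored each account.
import pandas as pd

retweets = pd.DataFrame({
    "user_id":   ["a", "b", "c", "d", "e", "f"],
    "bot_score": [0.9, 0.1, 0.8, 0.2, 0.05, 0.95],  # e.g. from a bot-detection model
    "veracity":  ["false", "false", "true", "true", "false", "true"],
})

retweets["is_bot"] = retweets["bot_score"] >= 0.5   # assumed cutoff
# Share of each group's retweets that carried false vs. true rumors:
rates = (retweets.groupby("is_bot")["veracity"]
                 .value_counts(normalize=True)
                 .unstack(fill_value=0))
print(rates)
```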
“The massive differences in how true and false news spreads on Twitter cannot be explained by the presence of bots,” Sinan Aral, a professor at MIT and a co-author of the study, told me.
But some political scientists cautioned that this finding should not be taken to disprove the role that Russian bots have recently played in seeding disinformation. An “army” of Russian-associated bots helped amplify divisive rhetoric after the school shooting in Parkland, Florida, The New York Times has reported.
Some political scientists also questioned the study’s definition of “news.” By turning to the fact-checking sites, the study blurs together a wide range of false information: outright lies, urban legends, hoaxes, spoofs, falsehoods, and “fake news.” It does not just look at fake news by itself—that is, articles or videos that look like news content, and which appear to have gone through a journalistic process, but which are actually made up.
“The key takeaway is really that content that arouses strong emotions spreads further, faster, more deeply, and more broadly on Twitter,” said Tromble, the political scientist, in an email. “This particular finding is consistent with research in a number of different areas, including psychology and communication studies. It’s also relatively intuitive.”
“False information online is often really novel and frequently negative,” said Nyhan, the Dartmouth professor. “We know those are two features of information generally that grab our attention as human beings and that cause us to want to share that information with others—we’re attentive to novel threats and especially attentive to negative threats.”
In a statement, Twitter said that it was hoping to expand its work with outside experts. In a series of tweets last week, Jack Dorsey, the company’s CEO, said the company hoped to “increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable toward progress.”
Facebook did not respond to a request for comment.
But Tromble, the political-science professor, said that the findings would likely apply to Facebook, too. “Earlier this year, Facebook announced that it would restructure its News Feed to favor ‘meaningful interaction,’” she told me.
One might suspect that the users who spread falsehoods enjoy some structural advantage on the platform. In fact, the team found that the opposite is true. Users who share accurate information have more followers, and send more tweets, than fake-news sharers. These fact-guided users have also been on Twitter for longer, and they are more likely to be verified. In short, the most trustworthy users can boast every obvious structural advantage that Twitter, either as a company or a community, can bestow on its best users.
It is unclear which interventions, if any, could reverse this tendency toward falsehood. “We don’t know enough to say what works and what doesn’t,” Aral told me. There is little evidence that people change their opinion because they see a fact-checking site reject one of their beliefs, for instance. Labeling fake news as such, on a social network or search engine, may do little to deter it as well.
In short, social media seems to systematically amplify falsehood at the expense of the truth, and no one—neither experts nor politicians nor tech companies—knows how to reverse that trend. It is a dangerous moment for any system of government premised on a common public reality.
What price ethics for software designers in the poisonous era of Cambridge Analytica? By John Naughton, The Guardian
The programmers behind data analytics have unleashed forces they could never have imagined

On 12 September 1933, Leo Szilard, an unemployed Jewish physicist who had fled Nazi Germany, was walking down a street in Bloomsbury, London. He was brooding on a report in the Times that morning of a lecture given by Ernest Rutherford the previous day, at the annual meeting of the British Association for the Advancement of Science. In that lecture, the great physicist had expressed scepticism about the practical feasibility of atomic energy. Szilard stopped at a traffic light at the junction of Russell Square and Southampton Row. “As the light changed and I crossed the street,” he later recalled, “it suddenly occurred to me that if we could find an element which was split by neutrons and which would emit two neutrons when it absorbs one neutron, such an element, if it were assembled in sufficiently large mass, could sustain a nuclear chain reaction”.
This epiphany, which is recounted in Richard Rhodes’s monumental book The Making of the Atomic Bomb, was the key insight that led eventually to Hiroshima and Nagasaki. And after those atrocities, Szilard “felt a full measure of guilt for the development of such terrible weapons of war; the shape of things to come that he had first glimpsed as he crossed Southampton Row in 1933 had found ominous residence in the world partly at his invitation”.
Thus do ideas change the world. As the furore about Cambridge Analytica raged last week, I thought about Szilard and then about three young Cambridge scientists who brought another powerful idea into the world. Their names are Michal Kosinski, David Stillwell and Thore Graepel, and in 2013 they published an astonishing paper, which showed that Facebook “likes” could be used to accurately predict a range of highly sensitive personal attributes, including sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age and gender.
The work reported in their paper was a paradigm of curiosity-driven research. The trio were interested in social media as a phenomenon and had a hunch about how unintentionally revealing its users’ online activities could be. They found a way of confirming their hunch. Since they are smart, they doubtless understood how valuable it could be to, say, advertisers.
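The paper's broad recipe was, in outline, to compress the huge, sparse user-by-Like matrix with singular value decomposition and then fit simple regression models on the resulting components. The sketch below illustrates that idea on synthetic placeholder data; the component count, the stand-in attribute, and every number here are assumptions for illustration, not details of the original study.

```python
# Sketch of the general recipe: SVD-compress a sparse user-by-Like matrix,
# then fit a simple classifier on the components to predict a binary trait.
# Synthetic placeholder data throughout.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_likes = 2000, 5000
likes = sparse_random(n_users, n_likes, density=0.01, random_state=0, format="csr")
likes.data[:] = 1.0                      # binary "liked this page" matrix
attribute = rng.integers(0, 2, n_users)  # stand-in binary trait label

components = TruncatedSVD(n_components=100, random_state=0).fit_transform(likes)
X_train, X_test, y_train, y_test = train_test_split(
    components, attribute, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# ~0.5 here, since the placeholder labels are unrelated to the placeholder likes.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```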
What they might not have appreciated, though, was the power this conferred on Facebook. But one of their colleagues in the lab obviously did get that message. His name was Aleksandr Kogan and we are now beginning to understand the implications of what he did.
In a modest way, Kosinski, Stillwell and Graepel are the contemporary equivalents of Szilard and the theoretical physicists of the 1930s who were trying to understand subatomic behaviour. But whereas the physicists’ ideas revealed a way to blow up the planet, the Cambridge researchers had inadvertently discovered a way to blow up democracy.
Which makes one wonder about the programmers – or software engineers, to give them their posh title – who write the manipulative algorithms that determine what Facebook users see in their news feeds, or the “autocomplete” suggestions that Google searchers see as they begin to type, not to mention the extremist videos that are “recommended” after you’ve watched something on YouTube. At least the engineers who built the first atomic bombs were racing against the terrible possibility that Hitler would get there before them. But for what are the software wizards at Facebook or Google working 70-hour weeks? Do they genuinely believe they are making the world a better place? And does the hypocrisy of the business model of their employers bother them at all?
These thoughts were sparked by reading a remarkable essay by Yonatan Zunger in the Boston Globe, arguing that the Cambridge Analytica scandal suggests that computer science now faces an ethical reckoning analogous to those that other academic fields have had to confront.
“Chemistry had its first reckoning with dynamite; horror at its consequences led its inventor, Alfred Nobel, to give his fortune to the prize that bears his name. Only a few years later, [in May 1915] its second reckoning began when chemist Clara Immerwahr committed suicide the night before her husband and fellow chemist, Fritz Haber, went to stage the first poison gas attack on the Eastern Front.
“Physics had its reckoning when nuclear bombs destroyed Hiroshima and Nagasaki and so many physicists became political activists – some for arms control, some for weapons development. Human biology had eugenics. Medicine had Tuskegee and thalidomide, civil engineering a series of building, bridge and dam collapses.”
Up to now, my guess is that most computer science graduates have had only a minimal exposure to ethical issues such as these. Indeed, it’s possible they regard ethics as a kind of hobbyhorse for people who don’t have enough to do. Hackers, in contrast, have more important stuff on their plates, such as refining that algorithm for increasing user “engagement” by exploiting human foibles or finding a neat way of combining GPS co-ordinates with Facebook “likes” to suggest a nearby gluten-free bakery. And all the while enabling their employers to laugh all the way to the bank.