“We can’t shy away from phrases because they’ve been somehow weaponized.”
But just as notable as their admission was the language used to make it. I was surprised to find this group of scholars using the term fake news at all—even though they were calling for research into fake news.
That may sound odd. How can you study something and not call it by its name? Yet over the past year, academics and tech companies have increasingly shied away from the phrase. Facebook has pushed an alternative term, false news. And some scholars have worried that by using the term, they amplify President Trump’s penchant for calling all negative media coverage of himself “fake.”
“We think it’s a phrase that should sometimes be used,” he told me. “We define it in a very particular way. It’s content that is being put out there that has all the dressings of something that looks legitimate. It’s not just something that is false—it’s something that is manufactured to hide the fact that it is false.”
For instance, the infamous hoax report that Pope Francis had endorsed Donald Trump’s presidential candidacy was hosted on a website that had the appearance of being a local TV station, “WTOE 5 News.” There is no station called WTOE 5 in the United States, but the plausibility of the name allowed the falsehood to spread. (That one fake story had roughly three times more Facebook engagement—that is, likes, shares, and comments—than any New York Times story published in 2016.)
“I’m sure The Atlantic has sometimes gotten things wrong and published incorrect reporting,” he told me. “Those reports may be false, but I wouldn’t call them fake. For fake news, the incorrect nature of it is a feature, not a bug. Whereas when The Atlantic publishes something that’s incorrect, it’s a bug.”
“The term fake news, describing this problem, has been around for a long time,” he added. “There’s a wonderful Harper’s article about the role of fake news and how information technology is rapidly spreading fake news around the world. It used that term, and it was published in 1925.”
None of the political scientists endorsed President Trump’s tack of calling almost any news coverage he dislikes fake news. “We see that usage getting picked up by authoritarian types around the world,” Lazer said. But he does hope that by using the eye-grabbing term, scholars can reinforce the idea that there is something wrong with the information ecosystem, even though “it may not be the pathology that Donald Trump wants you to believe in.”
Yet no research has pointed to effective ways of reducing the spread of falsehoods online. Some still-unpublished studies have suggested that labeling fake news as such on Facebook could cause more people to share it. The same goes for relying on fact-checking sites like Snopes and PolitiFact. “Despite the apparent elegance of fact checking, the science supporting its efficacy is, at best, mixed,” the authors write.
At times, seeing a fact-checked rumor may cause people to remember the rumor itself as true. “People tend to remember information, or how they feel about it, while forgetting the context within which they encountered it,” they write. “There is thus a risk that repeating false information, even in a fact-checking context, may increase an individual’s likelihood of accepting it as true.”
“Research has found that people who are important nodes in the network play an important role in dissemination,” especially on Twitter, Nyhan told me. “Stories are being refracted through these big hubs. And I’m not a big hub, but I think it’s important to practice what I preach.”
Nyhan, who has about 65,000 Twitter followers, tries to correct erroneous information he’s tweeted as quickly as possible, and he also tries to courteously notify other users when they’ve been tricked by unreliable information.
“We will all inadvertently share false or misleading information—that’s part of being online in 2018,” said Nyhan. “But I think we’ve seen people in public life be wildly irresponsible.” Users who repeatedly share bad information or fake news should suffer “reputational consequences,” he said.
He specifically criticized Laurence Tribe, a widely respected Harvard Law professor who has argued dozens of cases before the Supreme Court. Tribe also has more than 300,000 Twitter followers. “He’s one of the most important constitutional-law scholars in the country, but he has repeatedly retweeted the most dubious anti-Trump information,” said Nyhan. “He’s gotten better, but I think what he did was irresponsible.”
“There are lots of people in these companies trying to do their best, but they can’t solve the problem of our public debate for us, and we shouldn’t expect them to,” he told me.
“We need more research about what works and what doesn’t on the platforms so we can be sure they are intervening in an effective way—but also so we can make sure they’re not intervening in a destructive manner,” he said. “I don’t think people take seriously enough the risks of major public intervention by the platforms. I don’t think we want Twitter, Facebook, and Google deciding what kinds of news and information are shown to people.”
“This,” he said—meaning fake news, falsehood, and the entire debacle of unreliable information online—“is not strictly the fault of the platforms. Part of what it’s revealing are the limitations of human psychology. But human psychology is not going to change.”
So the institutions that buttress that psychology—the journalists and editors, the politicians and judges, the readers and consumers of news, and the programmers and executives who design the platforms themselves—must change to accommodate it. Abraham Lincoln once said that one of the great tasks of the United States was “to show to the world that freemen could be prosperous.” Now, Americans and people all over the world must show that they can use every technological blessing of that prosperity—and remain well informed, enlightened, and liberated from falsehood themselves.