When we hear the term “fake news,” more often than not it refers to false narratives written by people to distort the truth and poison the public discourse. But new developments in natural language generation have raised the prospect of a new threat: neural fake news. Generated by artificial intelligence and capable of adopting the particular language and tone of popular publications, this brand of fake news could pose an even greater problem for society due to its ability to emulate legitimate news sources at massive scale. To fight the emerging threat of fake news authored by AI, a team of researchers at the Allen School and the Allen Institute for Artificial Intelligence (AI2) developed Grover, a new model that detects neural fake news more reliably than existing technologies can.
Until now, the best discriminators could correctly distinguish between real, human-written news content and AI-generated fake news 73% of the time; with Grover, that accuracy rises to 92%. What makes Grover so effective at spotting fake content is that it learned to be very good at producing that content itself. Given a sample headline, Grover can generate an entire news article written in the style of a legitimate news outlet. In one experiment, the researchers found that the system could also generate versions of propaganda stories that readers rated as more trustworthy than the human-written originals.
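To make the headline-conditioned generation concrete, here is a minimal sketch of the idea using the openly available GPT-2 model from the Hugging Face transformers library as a stand-in generator. Grover itself is a separate, larger model with its own released code; the headline, prompt format, and sampling settings below are illustrative assumptions rather than the team’s actual configuration.

```python
# Minimal sketch: generate an article body conditioned on a headline.
# GPT-2 is used here only as a stand-in for a large neural text generator.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

headline = "Link Found Between Vaccines and Autism"  # illustrative headline
prompt = f"Headline: {headline}\nArticle:"           # assumed prompt format
inputs = tokenizer(prompt, return_tensors="pt")

# Nucleus (top-p) sampling keeps only the most plausible next words,
# which is part of what makes the output read so fluently.
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    top_p=0.96,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```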
“Our work on Grover demonstrates that the best models for detecting disinformation are the best models at generating it,” explained Yejin Choi, a professor in the Allen School’s Natural Language Processing group and a researcher at AI2. “The fact that participants in our study found Grover’s fake news stories to be more trustworthy than the ones written by their fellow humans illustrates how far natural language generation has evolved — and why we need to try and get ahead of this threat.”
Choi and her collaborators — Allen School Ph.D. students Rowan Zellers, Ari Holtzman, and Hannah Rashkin; postdoctoral researcher Yonatan Bisk; professor and AI2 researcher Ali Farhadi; and professor Franziska Roesner — describe their results in detail in a paper recently published on the preprint site arXiv.org. Although they show that Grover is capable of emulating the style of a particular outlet and even a particular writer — for example, one of the Grover-generated fake news pieces included in the paper is modeled on the writing of columnist Paul Krugman of The New York Times — the researchers point out that even the best examples of neural fake news are still based on learned style and tone rather than a true understanding of language and the world. As a result, that Krugman piece and others like it contain telltale evidence of their true origin.
“Despite how fluid the writing may appear, articles written by Grover and other neural language generators contain unique artifacts or quirks of language that give away their machine origin,” explained Zellers, lead author of the paper. “It’s akin to a signature or watermark left behind by neural text generators. Grover knows to look for these artifacts, which is what makes it so effective at picking out the stories that were created by AI.”
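One simple way to see the intuition behind those statistical artifacts is to score a passage by how predictable it looks to a neural language model: text sampled from a similar model tends to sit in an unusually narrow, high-likelihood band, while human prose is statistically messier. The sketch below uses GPT-2 perplexity as a rough stand-in signal; the actual Grover discriminator is the Grover model itself fine-tuned to classify human versus machine text, and the cutoff used here is an arbitrary illustration.

```python
# Rough stand-in for artifact-based detection: per-token perplexity under a
# language model. Unusually low perplexity is one weak signal that a passage
# may have been sampled from a similar model. This is not Grover's classifier.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under the language model."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return float(torch.exp(loss))

article = "..."  # passage to check
# 20.0 is an arbitrary illustrative cutoff, not a calibrated threshold.
print("suspiciously fluent" if perplexity(article) < 20.0 else "looks human-written")
```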
Although Grover will naturally recognize its own quirks, which explains the high success rate in the team’s study, its ability to detect AI-generated fake news is not limited to its own output. Grover is better at detecting fake news, whether written by a human or a machine, than any previous system, in large part because it is built on a more advanced neural language model than its predecessors. The researchers believe that their work on Grover is only the first step in developing effective defenses against the machine-learning equivalent of a supermarket tabloid. They plan to release two of their models, Grover-Base and Grover-Large, to the public, and to make the Grover-Mega model and accompanying dataset available to researchers upon request. By sharing the results of this work, the team aims to encourage further discussion and technical innovation around how to counteract neural fake news.
According to Roesner, who co-directs the Allen School’s Security and Privacy Research Laboratory, the team’s approach is a common one in the computer security field: try to determine what adversaries might do and the capabilities they may have, and then develop and test effective defenses. “With recent advances in AI, we should assume that adversaries will develop and use these new capabilities — if they aren’t already,” she explained. “Neural fake news will only get easier and cheaper and better regardless of whether we study it, so Grover is an important step forward in enabling the broader research community to fully understand the threat and to defend the integrity of our public discourse.”
Roesner, Choi and their colleagues believe that models like Grover should be put to practical use in the fight against fake news. Just as sites like YouTube rely on deep neural networks to scan videos and flag those containing illicit content, a platform could employ an ensemble of deep generative models like Grover to analyze text and flag articles that appear to be AI-generated disinformation.
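A minimal sketch of that flagging idea follows, assuming a set of detector functions (for example, a fine-tuned Grover-style discriminator alongside other classifiers) that each return the probability a passage is machine-generated; the averaging rule and threshold are illustrative, not a description of any platform’s actual pipeline.

```python
# Sketch of ensemble flagging: average several detectors' scores and surface
# the article for human review when the ensemble thinks it is machine-written.
from typing import Callable, List

Detector = Callable[[str], float]  # each returns P(text is machine-generated)

def flag_article(text: str, detectors: List[Detector], threshold: float = 0.5) -> bool:
    """Return True if the ensemble's average score exceeds the threshold."""
    scores = [detect(text) for detect in detectors]
    return sum(scores) / len(scores) > threshold
```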
“People want to be able to trust their own eyes when it comes to determining who and what to believe, but it is getting more and more difficult to separate real from fake when it comes to the content we consume online,” Choi said. “As AI becomes more sophisticated, a tool like Grover could be the best defense we have against a proliferation of AI-generated fake news.”
Read the arXiv paper here, and see coverage by TechCrunch, GeekWire, New Scientist, The New York Times, ZDNet, and Futurism. Also check out a previous project by members of the Grover team analyzing the language of fake news and political fact checking here.