
“Hey, check out this 450-pound dog!” Allen School researchers explore how users interact with bogus social media posts

[Image: dark, swirling clouds over an aerial shot of Sydney's harbor and downtown] Is that a superstorm over Sydney, or fake news?

We’ve all seen the images scrolling through our social media feeds — the improbably large pet that dwarfs the human sitting beside it; the monstrous storm cloud ominously bearing down on a city full of people; the elected official who says or does something outrageous (and outrageously out of character). We might stop mid-scroll and do a double-take, occasionally hit “like” or “share,” or dismiss the content as fake news. But how do we as consumers of information determine what is real and what is fake?

Freakishly large Fido may be fake news — sorry! — but this isn’t: A team of researchers led by professor Franziska Roesner, co-director of the Allen School’s Security and Privacy Research Laboratory, conducted a study examining how and why users investigate and act on fake content shared on their social media feeds. The project, which involved semi-structured interviews with more than two dozen users ranging in age from 18 to 74, aimed to better understand what tools would be most useful to people trying to determine which posts are trustworthy and which are bogus.

In a “think aloud” study in the lab, the researchers asked users to provide a running commentary on their reactions to various posts as they scrolled through their social feeds. Their observations gave the team insight into the thought process behind a user’s decision to dismiss, share, or otherwise engage with fake content they encounter online. Unbeknownst to the participants, the researchers had deployed a browser extension they built that randomly layered misinformation posts previously debunked by Snopes.com over legitimate posts shared by participants’ Facebook friends and accounts they followed on Twitter.

The artificial posts that populated users’ feeds ranged from the sublime (the aforementioned giant dog), to the ridiculous (“A photograph shows Bernie Sanders being arrested for throwing eggs at civil rights protesters”), to the downright hilarious (“A church sign reads ‘Adultery is a sin. You can’t have your Kate and Edith too’”). As the participants scrolled through the mixture of legitimate and fake posts, Allen School Ph.D. student Christine Geeng and her colleagues would ask them why they chose to engage with or ignore various content. At the end of the experiment, the researchers pointed out the fake posts and informed participants that their friends and contacts had not really shared them. Geeng and her colleagues also noted that participants could not actually like or share the fake content on their real feeds.

“Our goal was not to trick participants or to make them feel exposed,” explained Geeng, lead author of the paper describing the study. “We wanted to normalize the difficulty of determining what’s fake and what’s not.”

Participants employed a variety of strategies in dealing with the misinformation posts as they scrolled through. Many posts were simply ignored at first sight, whether because they were political in nature, because they required too much time and effort to investigate, or because the viewer was simply uninterested in the topic. If a post caught their attention, some users investigated further by looking at the name on the account that appeared to have posted it or by reading through comments from others before making up their minds. Others clicked through to the full article to check whether the claim was bogus, as in the case of the Bernie Sanders photo, which had been intentionally miscaptioned in the fake post. Participants also self-reported that, outside of a laboratory setting, they might consult a fact-checking website like Snopes.com, see if trusted news sources were reporting on the same topic, or seek out the opinions of family members or others in their social circle.

The researchers found that users were more likely to rely on such ad hoc strategies than on purpose-built tools provided by the platforms themselves. For example, none of the study participants used Facebook’s “i” button to investigate fake content; in fact, most said they were unaware of the button’s existence. Whether a matter of functionality or design (or both), the team’s findings suggest there is room for improvement when it comes to offering truly useful tools for people who are trying to separate fact from fiction.

“There are a lot of people who are trying to be good consumers of information and they’re struggling,” said Roesner. “If we can understand what these people are doing, we might be able to design tools that can help them.”

In addition to Roesner and Geeng, Savanna Yee, a fifth-year master’s student in the Allen School, contributed to the project. The team will present its findings at the Association for Computing Machinery’s Conference on Human Factors in Computing Systems (CHI 2020) next month.

Learn more in the UW News release here, and read the research paper here.