Catch CME at ICA’s Virtual Conference

Are you attending ICA’s virtual conference? CME researchers will be presenting virtually on a variety of topics. Check out our presentations!


The Effectiveness of Fact Check Headlines on Social Media: Field Experiments Across Four Continents

Natalie Jomini Stroud; Jay Jennings; Jessica Collier; Adriana Crespo Tenorio; Joanna Sterling; Mila Fang Xia

Fact-checkers face the difficult task of disseminating their content within the condensed structure of social media. Research has shown that fact checks must be presented carefully to ensure that individuals receive and remember accurate information. This study uses Facebook Sponsored Posts and follow-up surveys for five international fact-checking organizations, across 15 tests, to analyze the effectiveness of three types of fact-check headlines: refutation, assertion, and question. Findings indicate that refutations of misinformation are more effective at increasing knowledge than assertion or question formats, although the question format promotes greater recall of having seen a post in one’s feed. These formats do not appear to amplify or diminish gaps in knowledge based on one’s attitudes about the topic. The differences, however, are small in magnitude. Practical implications for fact-checking organizations are discussed.


Gatekeeping Versus Gatewatching: Do the Topics of Online Comments That Journalists Reject Differ From Those That Readers Flag?

Ashley Muddiman; Ori Tenenboim; Gina M. Masullo; Magdalena Saldaña

Online comments posted on news stories offer democratic potential for political discussion and civic empowerment, as well as business and journalistic value, because they give the audience a place to engage. Yet comment sections can also attract incivility and trolling. Newsrooms perform a gatekeeping role by having journalists review comments in advance and reject for publication those that appear abusive or uncivil. Similarly, news readers perform a gatewatching role by flagging comments they deem abusive or uncivil in hopes that the newsroom will remove them. We explore whether journalists and news readers focus their gatekeeping and gatewatching attention on different topics in news comment sections. To investigate this, we run a series of topic models on news comment threads that do or do not include comments flagged as abusive by news readers, and on news comment threads that do or do not include comments rejected by journalists. These analyses will (a) uncover the topics that occur most frequently in the different types of comment threads and (b) test whether the comment threads that include comments rejected by journalists produce similar topics to those produced in threads with at least one comment flagged as abusive by news readers.
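The abstract does not specify which topic-modeling implementation the authors use, so the sketch below is only an illustration of the comparison it describes: fitting a topic model separately to threads that do and do not contain flagged comments and inspecting the dominant topics in each. The choice of LDA, the scikit-learn API, and the toy thread texts are all assumptions for illustration, not details from the study.

```python
# Hypothetical sketch: compare dominant topics in comment threads that do vs. do not
# contain a flagged (or journalist-rejected) comment. Thread texts are stand-ins,
# and LDA via scikit-learn is an assumed implementation, not the study's method.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

threads_with_flags = [
    "politicians lie about taxes and the mayor should resign",
    "this columnist is an idiot and the election was rigged",
]
threads_without_flags = [
    "the new bus route will help commuters on the east side",
    "great reporting on the school budget vote last night",
]

def top_topic_terms(docs, n_topics=2, n_terms=5):
    """Fit a small LDA model and return the top terms for each topic."""
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(dtm)
    terms = vec.get_feature_names_out()
    return [
        [terms[i] for i in topic.argsort()[::-1][:n_terms]]
        for topic in lda.components_
    ]

# Compare which topics surface in each set of threads.
print("Flagged threads:  ", top_topic_terms(threads_with_flags))
print("Unflagged threads:", top_topic_terms(threads_without_flags))
```

In practice, the same comparison would be run over the full corpora of journalist-rejected and reader-flagged threads, with far more documents and topics per model.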


Regulating Comments: Attitudes of Germans and Americans Toward Regulation by Citizens, Platforms, or Law Enforcement

Martin Riedl; Teresa K. Naab; Gina M. Masullo; Pablo Jost; Marc Ziegele

This study examines whom people in the United States (n = 1,261) and Germany (n = 1,231) perceive to be responsible for regulating online user comments on news articles: law enforcement (the state), platforms (Facebook), news organizations, or users themselves. Using two surveys, we found that Germans had greater support for free speech than Americans did. Germans also attributed greater responsibility for online comment regulation to Facebook, news organizations, and law enforcement than Americans did.


Social Scaffolding or Computational Propaganda?: A Comparative Analysis of Automated Journalism in China and the United States

Chenyan Jia; Samuel Woolley

In recent years, bots, automated online tools for parsing and communicating information, have become associated with the spread of political manipulation over social media. Research from Communication and other disciplines has uncovered the use of these software-driven automatons in efforts to amplify and suppress content on platforms from Twitter to VK. Scholars have termed this now-pervasive phenomenon “computational propaganda.” Other research, however, has detailed the ways in which bots, including social bots built to interact and converse directly with other users online, are used as civic prostheses for people who otherwise might not have the time or resources to achieve a task. In such cases, bots allow the humans behind them to scale and automate the most rote tasks associated with their jobs so that they can focus on tasks that require human oversight and nuance. News outlets around the world now use bots to automate the writing of increasingly complex news articles, ideally allowing human reporters to do deeper investigative work. But are these “news bots” also harnessed for political control? Can they, too, be tools of computational propaganda? Using comparative analysis, including in-depth interviews with news bot builders and computational analysis of news bot content, we explore the ways in which these tools can be leveraged for purposes both democratic and controlling in China and the United States. Drawing on actor-network theory, activity theory, and theory specific to social bots, we argue that the politics of a bot is inherently tied to the politics of its creator, but also that unforeseen occurrences online can affect how, when, and where a bot communicates.


“Ingroup Filtering” of Outgroup Messages Online: Its Prevalence and Effects

Magdalena Wojcieszak; Xudong Yu; Andreu Casas; Samuel Woolley; Joshua Tucker; Jonathan Nagler

Prior work on online echo chambers typically focuses on whether social media users connect with like-minded others and/or share information from across the political aisle. This work largely ignores the fact that users can share outgroup content and, when they do, accompany it with negative commentary. This project is the first to account for the prevalence and the effects of exposure to outgroup political messages filtered through one’s partisan ingroup with negative commentary (what we refer to as ingroup filtering). We rely on behavioral data from a random sample of Twitter users to establish that ingroup filtering indeed takes place on Twitter, finding that liberals are especially likely to share outparty information with negative comments. We then experimentally test the effects of ingroup filtering on affective polarization and political participation, as well as the role of ambivalence in these effects. Although ingroup filtering has no direct effects on the two outcomes, it decreases ambivalence relative to positive ingroup commentary and to the outgroup tweet itself, and ambivalence in turn predicts both affective polarization and political participation. These findings have important practical implications and substantially extend how researchers study the ways people share political information online.