Is Coordinated Political Activity on Social Media a Threat to Democracy?

CASE STUDY: Political “Troll Farms” on Facebook and Twitter



In September 2020, tech giants Facebook and Twitter moved to suspend several accounts on their platforms after a Washington Post article revealed that the accounts were participating in coordinated activity whose goal was to “cast doubt on the electoral process and downplay the threat of COVID-19” by leaving comments under news stories and posts (McGregor, 2020). The scandal was associated with a conservative youth organization called Turning Point USA, and the accounts were run by influencers who were paid for their efforts, some of whom were minors (McGregor, 2020). Although smaller in scale, this coordinated activity resembled that of troll farms in countries like Russia and North Macedonia. According to Graham Brookie, director of the Atlantic Council’s Digital Forensic Research Lab, such domestic troll farms may do far greater damage to democracy than foreign ones (Stanley-Becker, 2020). Given the seriousness of that claim, it is understandable why some would want to ban such coordinated political activity on social media. But where should social media platforms draw the line on when to suspend suspected coordinated activity? The vagueness of platform policies, and the inconsistency with which they are enforced, have left many users unsure that the tech giants are acting ethically.

In response to the Washington Post article and the subsequent action by Facebook and Twitter, Turning Point Action (TPA) field director Austin Smith argued that “any comparison to a troll farm was a ‘gross mischaracterization’ [as] this is sincere political activism conducted by real people who passionately hold the beliefs they describe online, not an anonymous troll farm in Russia” (Stanley-Becker, 2020). Smith explained that, like any political campaign that hires canvassers, “many positions TPA had planned for in field work were [disrupted by COVID, so] TPA reimagine[d] these roles [based on] virtual and online activist model[s]” (Stanley-Becker, 2020). Moreover, Robert Jason Noonan, a parent of two influencers affiliated with Turning Point USA whose accounts were suspended, argued that Facebook and Twitter should enforce their policies fairly. Indeed, this is not the first time either platform has encountered coordinated activity; in fact, it happens frequently. For example, K-pop fans on Twitter regularly coordinate their activity to get a certain hashtag trending or, more recently, to take over other hashtags (Chan, 2020). Coordinated political activity, such as hashtag activism, is clearly evident on both social media sites and operates without much interference from the tech companies. For example, the Bernie Sanders campaign used supporters on Reddit to get its hashtags trending (McGregor, 2020). Thus, Noonan “cast[s] the activity as a response to similar exaggerations by Democrats,” implying that the platforms unfairly punish conservative coordinated activity while turning a blind eye when liberal (or apolitical) coordinated activity occurs (Stanley-Becker, 2020).

While it is true that some liberal coordinated activity has been allowed to spread online in the past, as with the Sanders campaign, this does not mean that all conservative activity has been shut down. Indeed, the Trump campaign engaged in similar tactics using a crew of influencers termed “The Big-League Trump Team,” whom campaign personnel would contact about what content to push out to their followers (McGregor, 2020). The difference in the Turning Point USA case, according to the tech companies, is that the accounts violated platform policies. Facebook spokesperson Andy Stone said the accounts were banned because they were connected to users operating multiple accounts, while a Twitter spokesperson said the suspensions were due to breaches of the platform’s manipulation and spam rules (Stanley-Becker, 2020). In other words, the Turning Point USA accounts were flagged because several fake accounts had been created and then used to spread spam and misinformation. Of course, neither Facebook nor Twitter moved to suspend the accounts until after the Washington Post began investigating, which leads us to question whether the platforms would ever have banned the accounts of their own accord (McGregor, 2020).

Is such coordinated political activity a threat to democracy? Democracy seems to allow for, or even require, individuals and groups advocating passionately for issues that concern them. But the anonymous or pseudonymous nature of many of the accounts pushing these messages is worrisome. While anonymity might protect individuals airing unpopular opinions, it also shields speakers from being held accountable for their messages and their methods. What happens when these campaigns push content that many deem misinformation or factually untrue? Perhaps the question social media users should be asking is whether they are being manipulated by coordinated activities online and what impact, if any, that manipulation will have on the democratic process. Social media activity, no matter how minor, can accumulate into real effects on larger political processes. In his book “This Is Not Propaganda: Adventures in the War Against Reality,” Peter Pomerantsev documents how Rodrigo Duterte, the former president of the Philippines, used troll farms to engage in coordinated activity on social media and manipulate public opinion in his favor. Once elected, Duterte waged so merciless a campaign against drug crime that the killings claimed so many lives in so short a time that Manila’s alleys were filled with corpses (Pomerantsev, 2020). This is an extreme example from a country with a much different sociopolitical history than the United States, but it nonetheless demonstrates the impact coordinated activity online can have, especially when someone is purposefully trying to skew the democratic process. Could such a thing ever happen in more stable democracies like the U.S.? Perhaps social media platforms and their policies should be held to a higher standard now, so we never have to find out.

Discussion Questions: 

  1. What ethical values conflict when evaluating coordinated political activity online?
  2. Is foreign countries’ use of “troll farms” an ethical way of influencing other countries? What makes such “troll farms” different from coordinated political action campaigns in the U.S.?
  3. Does it make an important difference if groups of U.S. citizens create and use fake accounts to push messages they believe in?
  4. Must advocacy groups believe in every message or persuasive appeal they use? What latitude do individuals and groups have in their political persuasive activity?
  5. How should groups ideally communicate in a democracy? Do digital environments change this calculation?

Further Information:

Chan, T. (2020, June 3). “K-Pop Power: Fandoms Unite to Take Over #WhiteLivesMatter Hashtag on Twitter.” Rolling Stone. Available at: https://www.rollingstone.com/music/music-news/white-lives-matter-k-pop-1009581/

McGregor, S. (2020, September 17). “What Even Is ‘Coordinated Inauthentic Behavior’ on Platforms?” WIRED. Available at: https://www.wired.com/story/what-even-is-coordinated-inauthentic-behavior-on-platforms/

Pomerantsev, P. (2020). This is not propaganda: Adventures in the war against reality. New York: PublicAffairs, Hachette Book Group.

Stanley-Becker, I. (2020, September 16). “Pro-Trump youth group enlists teens in secretive campaign likened to a ‘troll farm,’ prompting rebuke by Facebook and Twitter.” The Washington Post. Available at: https://www.washingtonpost.com/politics/turning-point-teens-disinformation-trump/2020/09/15/c84091ae-f20a-11ea-b796-2dd09962649c_story.html

Timberg, C., & Stanley-Becker, I. (2020, September 17). “Violent memes and messages surging on far-left social media, a new report finds.” The Washington Post. Available at: https://www.washingtonpost.com/technology/2020/09/14/violent-antipolice-memes-surge/

Authors:

Claire Coburn, Kat Williams, & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
October 13, 2022

Image by Daria Nepriakhina on Unsplash

This case was supported by funding from the John S. and James L. Knight Foundation. It can be used in unmodified PDF form in classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement. 

Ethics Case Study © 2022 by Center for Media Engagement is licensed under CC BY-NC-SA 4.0