CASE STUDY: Facebook’s Policy to Allow Misleading Political Ads
On September 24, 2019, Facebook’s Vice President of Global Affairs and Communications, Nick Clegg, announced during a speech at the Atlantic Festival in Washington, D.C. that Facebook would not fact-check or censor political advertising on the social media platform. Speaking on behalf of the tech company, he noted: “We don’t believe that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience and being subject to public debate and scrutiny” (Clegg, 2019).
With the 2020 presidential election in the United States approaching, Facebook immediately faced criticism for this decision, especially since it closely followed other controversial decisions involving the tech company’s refusal to remove misleading content – namely, a doctored video that made House Speaker Nancy Pelosi appear drunk and a Donald Trump ad that accused candidate Joe Biden of bribing the Ukrainian government to fire a prosecutor investigating the former Vice President’s son. Despite the Pelosi video’s misleading editing and the Biden-focused ad’s lack of evidence to back up its serious claims, Facebook stood firm in its decision (Stewart, 2019). More misinformation campaigns will likely be run by a range of parties in the coming election, raising worries about the prospects of a free and informed electorate. For example, Roose recounts a Facebook ad “run by [the group] North Dakota Democrats [which] warned North Dakotans that they could lose their out-of-state hunting licenses if they voted in the midterm elections” – an assertion that was utterly false (Roose, 2018).
On October 17, 2019, Facebook founder and CEO Mark Zuckerberg spoke publicly at Georgetown University to explain his reasoning behind the policy of not fact-checking political advertisements, using the speech to appeal to the First Amendment. Zuckerberg emphasized that he is concerned about misinformation, but ultimately believes it is dangerous to give a private entity the power to determine which forms of non-truthful speech deserve censorship. Instead, he stressed the importance of the credibility of the individual behind a post, rather than the post itself. Zuckerberg hopes to accomplish this through another policy, one in which Facebook requires users to provide a government ID and prove their location in order to purchase and run political ads on the site (Zuckerberg, 2019).
Zuckerberg maintains that through transparency of identity, accountability will be achieved and “people [can] decide what’s credible, not tech companies” (Zuckerberg, 2019). Appealing to John Stuart Mill’s ideas on free speech, Zuckerberg believes that the truth is bound to come out. The unfiltered speech of politicians provides an opportunity for claims to be publicly evaluated and contested. If deception is revealed, the refutation of false speech provides an opportunity for correction. If an unpopular source turns out to be right about a contested point, others have the opportunity to learn from that truth. In either case, the hope is that the political community can use the identity of candidates or speakers to judge who is credible and which arguments are worthy of belief. Censoring political information, the argument goes, only deprives people of the ability to see who their representatives really are.
Many find Zuckerberg’s free-speech defense of Facebook’s stance too idealized to be realistic. Of particular importance is the evolving role that social media plays in society. Social media platforms were once used mainly for catching up with friends; now many Americans get their news from social media sites rather than from traditional print or televised outlets (Suciu, 2019). Additionally, as Facebook’s algorithm “gets to know our interests better, it also gets better at serving up the content that reinforces those interests, while also filtering out those things we generally don’t like” (Pariser, 2016). Based on user data such as what we “like” or what our friends share, Facebook may facilitate the creation of an “echo chamber” by filling our feeds with news that is increasingly one-sided or identifiably partisan. Such arrangements, in which people engage only with those who share like-minded views, contribute heavily to confirmation bias – the logical error that occurs when people “stop gathering information when the evidence gathered so far confirms the views or prejudices one would like to be true” (Heshmat, 2015). If politicians are able to mislead in their purchased advertising, they can exploit such a platform to encourage confirmation bias, feeding individuals information tailored to their data that they are unlikely to scrutinize – or to check against opposing information (Heshmat, 2015).
Furthermore, those hoping that Facebook will crack down on paid political advertising are disheartened by the conflict of interest surrounding this issue. Political advertisers pay top dollar to advertise on social media sites. In fact, the Trump campaign alone has already spent more than $27 million on Facebook’s platform, and the Wall Street Journal predicts that “in 2020, digital political ad spending [will] increase to about $2.8 billion” (Isaac & Kang, 2020; Bruell, 2019). The economics of political advertising revenue make Facebook’s decision about curtailing it even harder to swallow.
The larger question of whether platforms like Facebook should become the arbiters of truthful and informative political speech on their sites presents one of the most pressing ethical dilemmas of the information age. On one hand, it is a dangerous and possibly slippery slope to place private tech companies in the position of deciding what counts as untruthful speech deserving of censorship. Some might worry that the distinction between truthful and untruthful political speech is not one that can be reliably enforced – political ads often draw questionable inferences from cherry-picked evidence, or purposefully pull specific phrases, images, or statements out of context to render opponents especially undesirable to audience members. How could anyone – including Zuckerberg – be tasked with evaluating anything but blatant falsehoods among the sea of claims that seem questionable or badly reasoned only to some on the political spectrum? Given the difficulty of determining what counts as a lie (as opposed to strategic presentation, a lie of omission, or a simple mistake), the task of eliminating purposefully untruthful speech becomes that much more challenging. Many believe that political actors, just like everyday voters, should be able to express opinions and arguments that do not seem particularly well-reasoned to all. On the other hand, the classic conception of free expression and the marketplace of ideas that grounds this reluctance to eliminate untruthful speech on social media may not be so realistic in our age of technology and self-selecting groups and political communities.
Between information overload and confirmation bias, it may be unreasonable to assume that everyone can and will look into every news story they see on Facebook. And, as some critics point out, many of the most vulnerable in our society, such as women and minorities, bear the brunt of online harassment when absolute expression is valued. With so much at stake on both sides, it is worth considering what has the most potential to enhance or inhibit the democratic process: reducing interference in personal expression, or reducing misinformation in political advertising?
Discussion Questions:
- What are the central values in conflict in Facebook’s decision to not fact-check political advertisements?
- Has the evolution of technology and the overload of information in our era undermined John Stuart Mill’s arguments for unrestrained free speech?
- Do social media companies like Facebook owe the public fact-checking services? Why or why not?
- Who is responsible for accurate political information: producers, consumers, or disseminators of advertisements?
Further Information:
Bruell, A. (2019, June 4). “Political Ad Spending Will Approach $10 Billion in 2020, New Forecast Predicts.” Available at: https://www.wsj.com/articles/political-ad-spending-will-approach-10-billion-in-2020-new-forecast-predicts-11559642400
Clegg, N. (2019, September 24). “Facebook: Elections and Political Speech.” Available at: https://about.fb.com/news/2019/09/elections-and-political-speech/
Heshmat, S. (2015, April 23). “What Is Confirmation Bias?” Available at: https://www.psychologytoday.com/us/blog/science-choice/201504/what-is-confirmation-bias
Isaac, M., & Kang, C. (2020, January 9). “Facebook Says It Won’t Back Down from Allowing Lies in Political Ads.” Available at: https://www.nytimes.com/2020/01/09/technology/facebook-political-ads-lies.html
Pariser, E. (2016, July 24). “The Reason Your Feed Became An Echo Chamber — And What To Do About It.” Available at: https://www.npr.org/sections/alltechconsidered/2016/07/24/486941582/the-reason-your-feed-became-an-echo-chamber-and-what-to-do-about-it
Roose, K. (2018, November 4). “We Asked for Examples of Election Misinformation. You Delivered.” Available at: https://www.nytimes.com/2018/11/04/us/politics/election-misinformation-facebook.html
Stewart, E. (2019, October 9). “Facebook is refusing to take down a Trump ad making false claims about Joe Biden.” Available at: https://www.vox.com/policy-and-politics/2019/10/9/20906612/trump-campaign-ad-joe-biden-ukraine-facebook
Suciu, P. (2019, October 11). “More Americans Are Getting Their News from Social Media.” Available at: https://www.forbes.com/sites/petersuciu/2019/10/11/more-americans-are-getting-their-news-from-social-media/#6bc8516f3e17
Zuckerberg, M. (2019, October 17). “Watch live: Facebook CEO Zuckerberg speaks at Georgetown University.” Washington Post. Available at: https://www.youtube.com/watch?v=2MTpd7YOnyU
Authors:
Kat Williams & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
May 21, 2020
Image: Will Francis / Unsplash / Modified
This case study is supported by funding from the John S. and James L. Knight Foundation. It can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement.
Ethics Case Study © 2020 by Center for Media Engagement is licensed under CC BY-NC-SA 4.0