Privacy versus Products in Targeted Digital Advertising

CASE STUDY: The Ethics of Customized Ad Campaigns



Targeted advertising has become the norm in the past decade. While online ads were once either completely random or based on non-user-specific search terms, today companies use targeted advertising to “aim their marketing based on everything from users’ sexual orientations to their moods and menstrual cycles” (Edelman, 2020). By following people’s activity on social media, remembering their buying habits, and even tracking when and where they use their devices, advertisers can learn a great deal about individuals – and then use that information to promote products, services, or even political campaigns that (presumably) align with their interests. As such, targeted advertising has been simultaneously hailed as a superior advertising practice and denounced as a privacy-invading danger.

On one hand, targeted advertising is viewed as the best possible way for people to endure the ‘necessary evil’ of advertising at all. As Harvard business professor John Deighton remarks: “Nobody likes advertising… They just like what they get for free as a result of it” (Edelman, 2020). While most of us find it annoying when ads take up half of our dashboards instead of our friends’ posts or content from creators we are actually subscribed to, the reality is that those ads are what keep our social media platforms up and running at no cost to us. WIRED reports that “Google and Facebook, including their subsidiaries like Instagram and YouTube, make about 83% and 99% of their respective revenue from selling ads” (Edelman, 2020). If advertising were taken out of this picture, Deighton suggests, the Internet would instead be plagued with subscription walls and users would be forced to pay for the online connectivity we have so far enjoyed for free (Edelman, 2020).

Thus, if advertising is unavoidable, it makes sense to at least have advertising that is relevant to us. In fact, such targeted advertising can be genuinely helpful, not just tolerable. Where an ad for hearing aids is typically not useful to hearing people (unless they are purchasing for someone else), the same ad is significantly more helpful to a person with hearing loss – this is exactly the kind of information targeted advertising can use to get the word out to the right parties. Nor is this limited to products. For example, a recent medical school graduate is unlikely to find an ad for a cashier position particularly relevant, nor would a high school student who needs a part-time job be interested in a full-time physician position at the local hospital. In this sense, targeted advertising can streamline both individuals’ job searches, helping them more quickly find employment that fits their skill levels.

On the other hand, targeted ads can have a variety of unintended, but nonetheless harmful, side effects. Take, for instance, Gillian Brockell, a writer for The Washington Post who documented her traumatic experience with targeted ads in 2018. Brockell was pregnant and, like many expectant mothers, was receiving pregnancy-related ads on social media. However, when she tragically lost her baby, the maternity ads did not stop. Here, one can see how the limitations of targeted ads may affect a variety of Internet users. From those experiencing grief or alcohol-related trauma, to closeted members of the LGBTQ+ community and people with body dysmorphia trying to avoid fad diets, targeted ads built on deeply personal information can quickly become a nightmare when that information is not used intelligently. Thus, Brockell pleads with tech companies that “if your algorithms are smart enough to realize that I was pregnant… then surely they can be smart enough to realize that my baby died, and advertise to me accordingly — or maybe, just maybe, not at all” (Brockell, 2018).

Moreover, targeted advertising may not only harm individuals but also has the potential to reinforce inequality in society. In a study conducted by Northeastern University and the University of Southern California, multiple ads were found to be targeted in discriminatory ways based on users’ demographic information. For example, job ads for a lumber company were shown to an audience that was 70% white and 90% male, whereas cashier positions at a retail store were shown to an audience that was 85% women (Rieke & Yu, 2019). Similarly, home-sale ads were viewed by an audience that was 75% white, even though this figure does not reflect the true demographics of Facebook’s users (Rieke & Yu, 2019). Here, targeted ads can be seen as playing a role in sustaining social problems like the wage gap and housing inequality.

Finally, targeted advertising raises concerns about privacy and data collection more broadly. In 2019, Amnesty International polled people in nine countries and found that “77% of people are concerned about how tech companies profile internet users” (Amnesty International, 2019). While many Internet users share Congressman Ken Buck’s indifference (“I don’t really care if [social media platforms] tell fifteen tee-shirt companies that I’m out looking for a tee-shirt”), problems arise when that data is kept for too long without updates (as seen with Brockell) or is used in unauthorized ways, as in Facebook’s Cambridge Analytica scandal (Edelman, 2020). At minimum, this data has the potential to harm the mental and emotional wellbeing of individuals. At the extreme, authoritarian governments could exploit this information in dangerous ways, since such data “show[s] them how to spread their lies efficiently, how to provoke fear and a feeling of insecurity in certain electoral districts, and whom to offer protection” (Reich, 2019).

Despite the problems associated with targeted advertising, proponents of the practice believe these problems are not set in stone. They argue that regulating the collection, storage, and sharing of personal data, along with making the algorithms work more intelligently, can mitigate the issues surrounding targeted advertising. Even though there are currently no mandatory federal regulations, some changes have already been made online to allow users to customize their ad experience and block certain topics. Facebook, specifically, has implemented new advertising policies under which ads cannot “target people based on their medical history” or feature “content that tries to generate negative self-worth to promote diet or health-related products” (Nudson, 2020). While such measures are certainly a step in the right direction, they may not fully address the distressing nature of certain advertisements, such as those relating to employment or fertility. For example, though ad customization was already available in 2018 when Brockell was grieving, she explains that she could not figure out how to block the maternity content, as “anyone who has experienced the blur, panic, and confusion of grief [would] understand” (Brockell, 2018).

In the end, targeted ads may be a significant upgrade from older methods of advertising, one that benefits both marketers and users, but the detailed personal information required to make them work and their potential to cause mental and social harm have many questioning whether the practice is ethical. As Orsolya Reich put it: “It is not that Facebook ads are not awesome. Indeed, they are. The question is whether they are worth it… I am happy that interesting ads pop up on my screen. But I am definitely not willing to pay for that with my democracy” (Reich, 2019). In an era of digital ubiquity where “most Americans are exposed to around 4,000 to 10,000 ads each day,” it is important that we carefully consider the impact such advertising has on us (Simpson, 2017). Where should the line be drawn between helpful and harmful targeted ads? Can these problems be solved with regulation and better technology, or should the practice be done away with completely?

Discussion Questions:

  1. What are some examples of targeted advertising in a few different areas?
  2. What are the conflicting interests at stake in targeted advertising? Why are they controversial?
  3. How do data-driven targeted ads differ from the strategic placement of older forms of advertising?
  4. Should users be responsible for protecting their information and avoiding the Internet if it is potentially triggering? Or should tech companies censor advertisements or allow users more control over what they do and do not want to see from advertisers?
  5. What are the alternatives to targeted advertising? What are the benefits and drawbacks to switching to a different method? What would the aftermath of banning targeted advertising look like?

Further Information:

Amnesty International. (December 4, 2019). “New poll reveals 7 in 10 people want governments to regulate Big Tech over personal data fears.” Amnesty International. Available at: https://www.amnesty.org/en/latest/news/2019/12/big-tech-privacy-poll-shows-people-worried/

Brockell, G. (December 12, 2018). “Dear tech companies, I don’t want to see pregnancy ads after my child was stillborn.” The Washington Post. Available at: https://www.washingtonpost.com/lifestyle/2018/12/12/dear-tech-companies-i-dont-want-see-pregnancy-ads-after-my-child-was-stillborn/

Edelman, G. (March 22, 2020). “Why Don’t We Just Ban Targeted Advertising?” WIRED. Available at: https://www.wired.com/story/why-dont-we-just-ban-targeted-advertising/

Nudson, R. (April 9, 2020). “When Targeted Ads Feel a Little Too Targeted.” Vox. Available at: https://www.vox.com/the-goods/2020/4/9/21204425/targeted-ads-fertility-eating-disorder-coronavirus

Reich, O. (December 19, 2019). “Are Targeted Ads a Good Thing?” The Civil Liberties Union for Europe. Available at: https://www.liberties.eu/en/news/well-targeted-ads-are-awesome-but-are-they-worth-it/18472

Rieke, A., & Yu, C. (April 15, 2019). “Discrimination’s Digital Frontier.” The Atlantic. Available at: https://www.theatlantic.com/ideas/archive/2019/04/facebook-targeted-marketing-perpetuates-discrimination/587059/

Simpson, J. (August 25, 2017). “Finding Brand Success in the Digital World.” Forbes. Available at: https://www.forbes.com/sites/forbesagencycouncil/2017/08/25/finding-brand-success-in-the-digital-world/?sh=6d5454ff626e

Authors:

Kathryn Galanis, Ishana Syed, & Kat Williams
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
March 24, 2021

Image: AJITH S / Unsplash


This case was supported by funding from the John S. and James L. Knight Foundation. These cases can be used in unmodified PDF form in classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement.

Ethics Case Study © 2021 by Center for Media Engagement is licensed under CC BY-NC-SA 4.0