Can Artificial Intelligence Reprogram the Newsroom?

CASE STUDY: Trust, Transparency, and Ethics in Automated Journalism



Could a computer program write engaging news stories? In the Reuters Institute’s 2018 Journalism, Media and Technology Trends and Predictions report, 78% of the 200 digital leaders, editors, and CEOs surveyed said investing in artificial intelligence (AI) technologies would help secure the future of journalism (Newman, 2018). Exploring these new methods of reporting, however, has introduced a wide array of unforeseen ethical concerns for those already struggling to understand the complex dynamics between human journalists and computational work. Implementing automated storytelling in the newsroom presents journalists with questions of how to preserve accuracy and fairness in their reporting and maintain transparency with the audiences they serve.

Artificial intelligence in the newsroom has progressed from an idea to a reality. In 1998, computer scientist Sung-Min Lee predicted the use of artificial intelligence in the newsroom, with “robot agents” working alongside, or sometimes in place of, human journalists (Latar, 2015). In 2010, Narrative Science became the first commercial venture to use artificial intelligence to turn data into narrative text. Automated Insights and others followed Narrative Science in bringing Lee’s “robot agent” in the newsroom to life through automated storytelling. While today’s newsrooms use AI to streamline a variety of processes, from tracking breaking news and collecting and interpreting data to fact-checking online content and even running chatbots that suggest personalized content to users, the ability to automatically generate text and video stories has encouraged an industry-wide shift toward automated journalism, or “the process of using software or algorithms to automatically generate news stories without human intervention” (Graefe, 2016). Forbes, the New York Times, the Washington Post, ProPublica, and Bloomberg are just some of today’s newsrooms that use AI in their news reporting. Heliograf, the Washington Post’s “in-house automated storytelling technology,” is just one of many examples of how newsrooms are using artificial intelligence to expand their coverage in beats like sports and finance that rely heavily on structured data, “allowing journalists to focus on in-depth reporting” (WashPostPR, 2017).
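To make the idea of automated storytelling more concrete, here is a minimal sketch, written in Python, of how a template-driven system might turn one row of structured game data into a sentence of copy. Everything in it (the field names, the template, the teams) is invented for illustration; it is not Heliograf or any vendor’s actual pipeline.

```python
# Illustrative sketch only: a toy template-based "automated storytelling" step.
# Field names, template wording, and sample data are hypothetical, not any
# newsroom's real system.

GAME_TEMPLATE = (
    "{winner} defeated {loser} {winner_score}-{loser_score} "
    "on {date} at {venue}."
)

def write_game_recap(game: dict) -> str:
    """Turn one row of structured game data into a one-sentence recap.

    Ties are not handled; this is deliberately minimal.
    """
    if game["home_score"] >= game["away_score"]:
        winner, loser = game["home_team"], game["away_team"]
        winner_score, loser_score = game["home_score"], game["away_score"]
    else:
        winner, loser = game["away_team"], game["home_team"]
        winner_score, loser_score = game["away_score"], game["home_score"]

    return GAME_TEMPLATE.format(
        winner=winner, loser=loser,
        winner_score=winner_score, loser_score=loser_score,
        date=game["date"], venue=game["venue"],
    )

if __name__ == "__main__":
    game = {
        "home_team": "Westfield High", "away_team": "Lake View High",
        "home_score": 21, "away_score": 14,
        "date": "Friday", "venue": "Westfield Stadium",
    }
    print(write_game_recap(game))
    # Westfield High defeated Lake View High 21-14 on Friday at Westfield Stadium.
```

Production systems layer many templates, phrasing variation, and editorial rules on top of this fill-in-the-blank step, which is why beats that generate clean structured data, such as sports scores and earnings figures, have been the natural first targets for automation.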

Artificial intelligence has the potential to make journalism better both for those in the newsroom and for those at the newsstand. Journalism think-tank Polis revealed in its 2019 Journalism AI Report that the main motivation for newsrooms to use artificial intelligence was to “help the public cope with a world of news overload and misinformation and to connect them in a convenient way to credible content that is relevant, useful and stimulating for their lives” (Beckett, 2019). Through automation, an incredible amount of news coverage can now be produced at a speed that was once unimaginable. The journalism industry’s AI pioneer, the Associated Press, began using Automated Insights’ Wordsmith program in 2014 to automatically generate corporate earnings reports, increasing its coverage from just 300 to 4,000 companies in one year alone (Hall, 2018).

With automation freeing up an estimated 20% of the time journalists once spent on manual tasks (Graefe, 2016), automated storytelling may also allow journalists to focus on areas that require more detailed work, such as investigative or explanatory journalism. In this way, automation in the reporting process may provide much-needed assistance for newsrooms looking to expand their coverage amidst limited funding and small staffs (Grieco, 2019). Although these automated stories are said to have “far fewer errors” than human-written news stories, it is important to note that AI’s abilities are confined to the limitations set by its coders. As a direct product of human creators, even robots are prone to mistakes, and these inaccuracies can be disastrous. When an error in United Robots’ sports reporting algorithm changed 1-0 losses to 10-1, misleading headlines of a “humiliating loss for football club” were generated before being corrected by editors (Granger, 2019). Despite the speed and skill that automation brings to the reporting process, the need for human oversight remains. Automated storytelling cannot analyze society, question implied positions and judgments, or provide a deep understanding of complex issues. As Latar stated in 2015, “no robot journalist can become a guardian of democracy and human rights.”
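The Granger article does not explain how that error occurred, so the following sketch is hypothetical. It illustrates the kind of plausibility check an editor-in-the-loop pipeline might run before an automated scoreline becomes a headline; the function names, the threshold, and the corrupted feed value are all invented.

```python
# Hypothetical illustration of human oversight in an automated sports pipeline.
# The corrupted feed value and the plausibility threshold below are invented;
# the source does not describe how the real error happened.

def looks_plausible(team_goals: int, opp_goals: int, max_goals: int = 9) -> bool:
    """Flag scorelines outside what the sport normally produces."""
    return 0 <= team_goals <= max_goals and 0 <= opp_goals <= max_goals

def headline(team: str, team_goals: int, opp_goals: int) -> str:
    """Generate a blunt, template-style headline from a scoreline."""
    margin = abs(team_goals - opp_goals)
    adjective = "humiliating " if margin >= 5 else ""
    if team_goals > opp_goals:
        result = "win"
    elif team_goals < opp_goals:
        result = "loss"
    else:
        result = "draw"
    text = f"{adjective}{result} for {team}: {team_goals}-{opp_goals}"
    return text[0].upper() + text[1:]

if __name__ == "__main__":
    # A 1-0 loss arrives from the data feed as 1-10 because of a bad value.
    team_goals, opp_goals = 1, 10
    story = headline("the football club", team_goals, opp_goals)
    if looks_plausible(team_goals, opp_goals):
        print("PUBLISH:", story)
    else:
        print("HOLD FOR HUMAN REVIEW:", story)
    # HOLD FOR HUMAN REVIEW: Humiliating loss for the football club: 1-10
```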

Others might see hope in AI marching into newsrooms. Amid concerns about fake news, misinformation, and polarization online, a growing number of Americans struggle to trust a news ecosystem they perceive as greatly biased (Jones, 2018). Might AI hold a way to restore this missing trust? Founded in 2015, the tech start-up Knowhere News believes its AI could actually eliminate the bias that many connect to this lack of trust. After identifying trending topics on the web, Knowhere News’ AI system collects data from thousands of news stories of various leanings and perspectives to create an “impartial” news article in under 60 seconds (DeGeurin, 2018). The aggregated and automated nature of this story formation holds out the hope of AI journalism being free of the prejudices, limitations, or interests that may unduly affect a particular newsroom or reporter.

Even though AI may seem immune to the sorts of conscious or unconscious bias affecting humans, critics argue that automated storytelling can still be subject to algorithmic bias. An unethical, biased algorithm was one of the top concerns relayed by respondents in the Polis report mentioned previously, as “bad use of data could lead to editorial mistakes such as inaccuracy or distortion and even discrimination against certain social groups or views” (Beckett, 2019). Bias seeps into algorithms in two main ways, according to the Harvard Business Review. First, algorithms can be trained to be biased because of the biases of their creators. For instance, Amazon’s hiring system was trained to favor words like “executed” and “captured,” which were later found to be used more frequently by male applicants. The result was that Amazon’s AI unfairly favored and excluded applicants based on their sex. Second, a lack of diversity and limited representation within an algorithm’s training data, which teaches the system how to function, often results in faulty performance in later applications. Apple’s facial recognition system has been criticized for failing to recognize darker skin tones once deployed to large populations, reportedly because of an overrepresentation of white faces in the initial training data (Manyika, Silberg, & Presten, 2019).
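A toy example can make that first mechanism concrete. The sketch below is entirely invented and is not Amazon’s actual system: it “learns” word weights from a tiny, skewed set of past hiring decisions and then reproduces that skew when scoring new applicants, without ever being told anything about gender.

```python
# Invented example of bias entering an algorithm through skewed training data.
# This is not Amazon's system; it only sketches the mechanism described above.

from collections import Counter

# Hypothetical historical résumé snippets, skewed toward one group's phrasing.
hired = [
    "executed product launch and captured new market share",
    "executed migration plan and captured requirements from stakeholders",
    "led team and executed roadmap",
]
rejected = [
    "collaborated with colleagues on a community outreach program",
    "organized women's engineering society events and mentored students",
]

def learn_word_weights(positive, negative):
    """Weight each word by how much more often it appears in 'hired' text."""
    pos, neg = Counter(), Counter()
    for text in positive:
        pos.update(text.split())
    for text in negative:
        neg.update(text.split())
    return {word: pos[word] - neg[word] for word in set(pos) | set(neg)}

def score(resume: str, weights: dict) -> int:
    """Higher score = closer to past 'hired' résumés, whatever biases they carry."""
    return sum(weights.get(word, 0) for word in resume.split())

weights = learn_word_weights(hired, rejected)
print(score("executed and captured key accounts", weights))    # high score
print(score("organized women's mentoring program", weights))   # low score
```

Nothing in the code mentions sex; the scorer simply rewards whatever phrasing the skewed history rewarded, which is how discriminatory patterns can appear without anyone writing them in deliberately.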

Clearly, these AI agents are not capable of the same sort of introspection about their decision-making processes that human journalists are. This lack of understanding can be particularly challenging when collecting and interpreting data, and it can ultimately lead to biased reporting. At Bloomberg, for example, journalists had to train the newsroom’s automated company earnings reporting system, Cyborg, to avoid misleading data from companies that strategically “choose figures in an effort to garner a more favourable portrayal than the numbers warrant” (Peiser, 2019). To produce complete and truthful automated accounts, journalists must work to ensure that these narrative storytelling systems are fed clean and accurate data.

Developments in AI technologies continue to expand at “a speed and complexity which makes it hard for politicians, regulators, and academics to keep up,” and this lack of familiarity can provoke great concern among both journalists and news consumers (Newman, 2018). According to Polis, 60% of journalists from 71 news organizations across 32 countries currently using AI in their newsrooms reported concerns about how AI would impact the nature of their work (Beckett, 2019). Uncertain of how to introduce these new technologies to the ordinary consumer, many newsrooms have failed to attribute their automated stories to a clear author or program. This raises concerns about the responsibility of journalists to maintain transparency with readers. Should individuals consuming these AI stories have a right to know which stories were automatically generated and which were not? Some media ethicists argue that audiences should be told when bots are being used and where to turn for more information; however, author bylines should be held by those with “editorial authority,” since the byline is a longstanding tool for accountability and follow-up with curious readers (McBride & Prasad, 2016).

Whether automated journalism will allow modern newsrooms to connect with readers through coverage of greater quantity and higher quality is still an unfinished story. It is certain, however, that AI will take an increasingly prominent place in the newsrooms of the future. How can innovative newsrooms use these advanced programs to produce high-quality journalistic work that better serves their audiences while upholding standards of journalistic practice that, in the past, assumed a purely human author?

Discussion Questions:

  1. What is the ethical role of journalism in society? Are human writers essential to that role, or could sophisticated programs fulfill it?
  2. How might the increased volume and speed of automated journalism’s news coverage impact the reporting process?
  3. What are the ethical responsibilities of journalists and editors if automated storytelling is used in their newsroom?
  4. Are AI programs more or less prone to error and bias, over the long run, compared to human journalists?
  5. Who should be held responsible when an algorithm produces stories that are thought to be discriminatory by some readers? How should journalists and editors address these criticisms?
  6. How transparent should newsrooms be in disclosing their AI use with readers?

Further Information:

Beckett, C. (2019, November). New powers, new responsibilities. A global survey of journalism and artificial intelligence. The London School of Economics and Political Science. Available at https://blogs.lse.ac.uk/polis/2019/11/18/new-powers-new-responsibilities/

DeGeurin, M. (2018, April 4). “A Startup Media Site Says AI Can Take Bias Out of News”. Vice. Available at https://www.vice.com/en_us/article/zmgza5/knowhere-ai-news-site-profile 

Graefe, A. (2016, January 7). “Guide to Automated Journalism”. Columbia Journalism Review. Available at https://www.cjr.org/tow_center_reports/guide_to_automated_journalism.php

Granger, J. (2019, March 28). “What happens when bots become editor?” Journalism.co.uk. Available at https://www.journalism.co.uk/news/nordic-news-organisations-shows-that-robot-journalism-can-make-personalisation-pay/s2/a736534/

Grieco, E. (2019, July 9). “U.S. newsroom employment has dropped by a quarter since 2008, with greatest decline at newspapers”. Pew Research Center. Available at https://www.pewresearch.org/fact-tank/2019/07/09/u-s-newsroom-employment-has-dropped-by-a-quarter-since-2008/

Jones, J. (2018, June 20). “Americans: Much Misinformation, Bias, Inaccuracy in News”. Gallup. Available at https://news.gallup.com/opinion/gallup/235796/americans-misinformation-bias-inaccuracy-news.aspx

Latar, N. (2015, January). “The Robot Journalist in the Age of Social Physics: The End of Human Journalism?” The New World of Transitioned Media: Digital Realignment and Industry Transformation. Available at www.researchgate.net

Manyika, J., Silberg, J., & Presten, B. (2019, October 25). “What Do We Do About the Biases in AI?” Harvard Business Review. Available at https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai

McBride, K. & Prasad, S. (2016, August 11). “Ask the ethicist: Should bots get a byline?” Poynter. Available at https://www.poynter.org/ethics-trust/2016/ask-the-ethicist-should-bots-get-a-byline/

Newman, N. (2018). Journalism, Media and Technology Trends and Predictions 2018. Reuters Institute for the Study of Journalism & Google. Available at https://ora.ox.ac.uk/objects/uuid:45381ce5-19d7-4d1c-ba5e-3f2d0e923b32/download_file?safe_filename=Newman%2BPredictions%2B2018%2BFINAL.pdf

Peiser, J. (2019, February 5). “The Rise of the Robot Reporter”. The New York Times. Available at https://www.nytimes.com/2019/02/05/business/media/artificial-intelligence-journalism-robots.html

WashPostPR. (2017, September 1). “The Washington Post leverages automated storytelling to cover high school football”. The Washington Post. Available at https://www.washingtonpost.com/pr/wp/2017/09/01/the-washington-post-leverages-heliograf-to-cover-high-school-football/

Authors:

Chloe Young  & Scott R. Stroud, Ph.D.
Media Ethics Initiative
Center for Media Engagement
University of Texas at Austin
April 27, 2020

www.mediaengagement.org

Image: The People Speak! / CC BY-NC 2.0



This case study is supported by funding from the John S. and James L. Knight Foundation. It can be used in unmodified PDF form for classroom or educational settings. For use in publications such as textbooks, readers, and other works, please contact the Center for Media Engagement.

Ethics Case Study © 2020 by Center for Media Engagement is licensed under CC BY-NC-SA 4.0