What We Learned in 2024: From GenAI to Election Coverage to Platform Research

This was a busy year for the Center for Media Engagement. We partnered with newsrooms in battleground states to provide the public with information about the election. We investigated the threats and opportunities coming from the use of generative AI during elections in the U.S. and beyond. We examined what scientists and journalists think about scientific sourcing, with the hope of improving science media coverage. We outlined an ethics framework for researching digital platforms. And we produced new media ethics case studies, on contemporary topics such as how to cover the war in Ukraine objectively, that encourage students to think through ethically complex decisions.

In case you missed any of our exciting work, we share some important findings below.

Understanding the Role of AI in the U.S. 2024 Elections and Beyond

Policymakers and the public worry about how generative artificial intelligence (GenAI) may impact elections in 2024 and beyond. To better understand major trends and common concerns – such as GenAI’s role in the rapid production of disinformation, the enabling of hyper-targeted political messaging, and the misrepresentation of political figures via deepfakes – we conducted interviews with experts in the digital politics space.

The conversations provide insights into (1) who is using GenAI in politics, (2) how they are using it, and (3) why individuals are currently using it and why others may use it in the future. Additionally, we provide an overview of the current status of regulation for this technology.

Taken together, the use of generative AI tools to collect large swaths of data about voters, to speed the analysis of content for effective hyper-targeting of voters, and to interface interactively with voters in voices that sound (almost) human raises serious concerns for democracy. While certain uses of GenAI have democratic potential, there is real concern that a lack of regulation and consistent industry standards will allow malevolent actors to develop and use these tools without legal, technical, or ethical guardrails. Most importantly, the rapid advancement of GenAI underscores the need to treat new technologies not as separate policy areas, but as something to be integrated into all policy areas.

Battleground Newsrooms Face Election Coverage Challenges

In the lead-up to the 2024 election, we spoke to news editors in battleground areas to learn more about their strategies for navigating election coverage. Their responses highlighted sharply different visions for the role of the news, showcased thoughtfulness amidst resource constraints, and demonstrated a commitment to serving the public during times of division.

There was overall concern about covering the election with limited resources. When asked what one thing they would wish for to help with coverage, the editors overwhelmingly asked for more staffing. Despite the lack of resources, all news outlets were committed to providing quality coverage.

The general sentiment among the editors was that fact-checking is critical, but limited resources coupled with an overwhelming amount of misinformation mean that it’s not a priority for all newsrooms. Editors who decided to take on the task noted its many limitations. Some editors also felt it wasn’t their newsroom’s role to fact-check.

When it came to post-election coverage, there was consensus that high-quality reporting is crucial. Views varied, however, on newsrooms’ post-election role – and on whether they should address public reactions.

Though many editors had reservations, there was general optimism about the benefits of using AI in newsrooms for tasks such as analyzing transcripts, captioning and organizing photos, generating headlines, and summarizing content. There were concerns, however, about AI being used to spread misinformation and rip off copyrighted content.

A Vermont Platform Fosters Civic Engagement and Builds Community

Can social media bring local communities together? In collaboration with New_ Public, we found that the Vermont-based platform Front Porch Forum offers a strong example of how it can be done successfully.

Previous work from New_ Public and the Center for Media Engagement identified principles of healthy, publicly oriented platform design, such as “access to reliable and relevant information about community issues” and “the opportunity to connect meaningfully with others.” With few exceptions, popular platforms did not live up to the ideals of public-friendly digital space in the eyes of their users.

Front Porch Forum is a network of town-specific, proactively moderated online forums that help neighbors connect with each other. We examined how Front Porch Forum members perceive the platform’s performance on these principles. Members consider the platform useful, valuable, informative, relevant, and civic-minded, especially compared to other dominant social platforms. Part of the platform’s magic may come from the moderators’ touch in balancing positive and negative comments.

Generative Artificial Intelligence Is Playing a Role in Elections

Generative artificial intelligence (GenAI) has emerged as a transformative force in elections playing out across the world. In a series of reports, we investigated GenAI’s role before, during, and after several key global elections in 2024.

The reports examined the potential impacts of GenAI on democratic processes in the U.S., Europe, India, Mexico, and South Africa. These insights are critical for groups working to sustain and advance democracies amid the constant transformation of the digital environment and its associated communication processes.

Emerging trends in the U.S. include:

  • Extensions of mis- and disinformation strategies: Many uses of GenAI during the 2024 U.S. election stem from strategies used in previous elections.
  • Manipulation of information related to electoral processes: Both proposed and actual uses of GenAI during the U.S. election rely on the manipulation of information related to electoral processes, offices, and vendors via a range of media formats.
  • Leveraging of trusted messengers and messages: Actors work to leverage trusted messengers and messages shared via a trusted mode of communication, such as a particular language or cultural vernacular.
  • Erosion of public trust in institutions: AI-driven efforts focus on eroding public trust in institutions as a goal in and of itself.
  • Taking advantage of Big Tech: These efforts take advantage of Big Tech’s ineffective mitigation measures and lax government regulation.

Mixed Responses to Bylines from Journalists Who Share Readers’ Race or Ethnicity

Recent years have seen a push for more diversity in newsrooms, especially in the aftermath of protests against police brutality. The argument that newsrooms should reflect the communities they serve is echoed in our discussions with people from communities often underserved by the media.

However, an exploration of how different racial and ethnic groups respond to bylines from journalists who share their race or ethnicity yielded mixed findings. In general, the race or ethnicity of the journalist did not affect whether people felt represented in the media. When it came to a person’s likelihood of reading a news story, results were mixed. In some cases, Black news consumers were more likely to read an article by a Black journalist. Hispanic news consumers, in some cases, were less likely to read an article written by a Hispanic journalist. In follow-up studies, however, this result didn’t hold. Whether journalists’ race, as inferred from a byline, affects audience engagement depends on the context.

The findings don’t offer clear suggestions for newsrooms, and it’s important to note that the studies have methodological limitations. What the results do suggest is that more research is needed to examine the influence of racial and ethnic diversity as presented in bylines and to understand how journalists’ race or ethnicity affects audience behavior.

Better Guidelines Needed for Platform Research Ethics

Over the past two decades, there has been a considerable amount of academic research on digital platforms and content, such as social media, websites, blogs, and digitized material. But how do we know if these studies are conducted ethically? And what does it mean to conduct “ethical research” when studying digital platforms?

We examined the state of current platform studies ethics, the challenges of building ethical frameworks for this type of research, and potential solutions proposed by researchers studying digital platforms and research ethics. 

The findings emphasize the need to build consensus, ideally through coalitions, and to support research infrastructure that prioritizes clear and transparent ethical practices. It is abundantly clear that researchers, platform users, companies, politicians, and funders must work together to support ethical research practices that are flexible yet guided by the shared principles of serving the public and minimizing user harm.

Scientist Training and Journalist Community-Building Can Improve Science Communication

As part of a series of reports produced in partnership with SciLine, an organization based at the American Association for the Advancement of Science (AAAS), we explored what scientists and journalists thought about each other, about public engagement with science, and about SciLine’s expert matching service.

The findings reveal that journalists feel only moderately supported in their science reporting efforts, indicating opportunities to develop services that build networks and community among journalists who feel somewhat alone in producing science news.

When it comes to the relationship between scientists and journalists, both groups generally trust each other, although journalists’ trust in scientists tends to be higher. Scientists and journalists largely reported positive experiences interacting with one another, but their relationship may be improved by setting clear expectations regarding influence over the story.

Both groups had similar views on which stakeholders are most important for science communication (policymakers, youth/students, media professionals) and which goals matter most (increasing the likelihood that people consider scientific evidence when making decisions). They also strongly agreed that scientists should receive training on becoming better communicators.

What’s Next for the Center for Media Engagement?

In the year to come, we’re focused on identifying ways to build community and connection, improving understanding and trust, and sharing what we learn beyond the walls of academia. 

Projects in progress ask:

  • How does the public reach conclusions about media bias?
  • How can generative AI be used effectively by journalists?
  • How can games be used to correct misperceptions about political partisans?
  • How can AI improve discussion quality?
  • How does an effort to connect journalists and scientists on deadline affect reporters, media coverage, and audiences?
  • How do local broadcast media talk about election integrity, and with what effect?
  • How can we better inform the public about circulating misinformation?
  • How can journalists better cover heated contemporary issues?

We look forward to connecting with you again in 2025 and sharing what we find!