SUMMARY
Propagandists are pragmatists and innovators.1 Political marketing is a game in which the cutting edge can be the margin between victory and defeat. Generative Artificial Intelligence (GenAI) features prominently as those in the political marketing space add new tools to their strategic kits. However, given generative AI’s novelty, much of the conversation about its use in digital politicking remains speculative.
Observers are taking stock of the roles generative artificial intelligence is already playing in U.S. politics and the way it may impact highly contested elections in 2024 and in years to come. Amid policymakers’ and the public’s concerns, there is an urgent need for empirical research on how generative AI is used for the purposes of political communication and corresponding efforts to manipulate public opinion.
To better understand major trends and common concerns – such as generative AI’s role in the rapid production of disinformation, the enabling of hyper-targeted political messaging, and the misrepresentation of political figures via synthetic media (so-called deepfakes) – this report draws on interviews with relevant professionals. These interviews were conducted between January and April 2024 with campaign consultants from both major political parties, vendors of political generative AI tools, a political candidate utilizing generative AI for her campaign, a digital strategist for a Democratic advocacy organization, and other relevant experts in the digital politics space.
KEY INSIGHTS
Who is using generative AI in the political space?
- Across party lines, the candidate and all the consultants we spoke to have experimented with GenAI.
- Local campaigns are currently more likely than national campaigns to use GenAI, with examples of its use already popping up in areas such as New Hampshire and New York.
- Some interviewees emphasized that smaller campaigns will remain constrained by compliance concerns and that to see real impact in politics with GenAI tools, campaigns must have both the money and the know-how to build models that surpass the capabilities of what is accessible for free.
- As such, campaigns backed by well-funded Super Political Action Committees may be more likely to use these tools. Without infrastructural support from national party organizations, some campaigns may hesitate to use the technology.
- Interviewees feared that less scrupulous actors (such as those who produced the New Hampshire Biden robocalls), foreign actors, and lone wolf propagandists may be quicker to use GenAI. They noted that such uses will be more difficult to prevent.
How are they using generative AI in the political space?
- Interviewees expressed that GenAI has already accelerated the scale and speed of data analysis, data collection, and content generation by increasing the speed at which political consultants can A/B test content and unearth the approaches and language most compelling to target audiences.
- While some interviewees expressed that GenAI-facilitated hypertargeting may be overblown, others emphasized that with coming advances in the technology, as well as current use-cases in data collection and analysis, consultants will be able to create hyper-personalized content that will “move the needle.”
- Multiple interviewees noted GenAI’s particular strength in fundraising. Given the U.S.’s limited restrictions on campaign spending and electioneering, particularly surrounding expenditures, this raises legal and ethical concerns around transparency, disclosure, and, potentially, deception.
- Others expressed that GenAI’s greatest impact is in cultural and linguistic mimicry and the ability to “look like a tribe member” in order to change people’s minds.
- Some interviewees speculated about a near future in which GenAI tools shift voters’ understandings of candidates and authenticity – meaning a GenAI version of a candidate could become as reputable as the candidate themselves.
Why are political professionals using generative AI in politics, and why might other actors use it?
- Interviewees expressed ardent hopes that the use of GenAI in politics would democratize the campaign space, allowing for increased engagement with marginalized and young voters, and would empower smaller campaigns and nonprofits.
- As is the case in all political marketing, GenAI is being used to persuade voters, whether indirectly (as in the use of GenAI for faster data analysis and fundraising) or directly (as in the use of GenAI to interface with voters).
- In line with current research, interviewees noted that when it comes to content produced by GenAI, it is more likely that content would be used to persuade people not to vote than to change their vote to an opposing candidate. This raises concerns about the use of GenAI as a tool for voter suppression.
- Some interviewees noted that if GenAI is used too heavily and becomes commonplace, voters may begin to view any AI-generated content as spam.
- Interviewees expressed concerns that GenAI will erode the public’s ability to tell truth from fiction and will make it easier for public figures to claim that true events were artificially constructed.
- Some interviewees pointed to examples of politicians already using AI to craft policy, foreshadowing a future in which GenAI-created policies are implemented in government.
How is generative AI’s use being regulated in politics?
- Interviewees emphasized that they are currently abiding by an unwritten code of industry norms, rather than by enforceable regulation.
- They expressed concerns about piecemeal policies and regulations, noting that they “stick to ethical guidelines” but cannot guarantee that other vendors are doing the same.
- Democrats and Republicans differed in their views of what regulation should look like, with Democrats largely advocating for federal regulation “with teeth,” and Republicans suggesting that GenAI be looped into extant Federal Election Commission (FEC) and American Association of Political Consultants (AAPC) regulations or addressed through technical interventions. Outside of our data, Republicans have expressed doubt that the FEC has the authority to regulate the use of AI in politics.
- One political AI vendor noted that technical interventions are unlikely to be successful, as AI models have already progressed too far for detection to work 100% of the time.
- Across the board, interviewees emphasized the need for disclosure and transparency regarding the use of AI in politics.
KEY TERMS
- Generative Artificial Intelligence (GenAI) refers to computer systems that draw on extremely large datasets to make statistical inferences about the relationship between words in a body of text or pixels from an image. From these inferences, generative AI systems can produce human-like content quickly in response to human-provided prompts.
- Large language models (LLMs) are a form of generative AI that are trained on billions of words of human-produced text. For example, ChatGPT is powered by an LLM.
- Synthetic media, or “deepfakes,” are imagery, audio, or video created by generative AI.
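To make these definitions concrete, the following is a minimal sketch of prompting an LLM for text. It assumes the OpenAI Python client (the `openai` package) and an API key in the environment; the model name and prompt are illustrative only, not a tool any interviewee described.

```python
# Minimal sketch: generating text from an LLM via a prompt.
# Assumes the `openai` Python client and an OPENAI_API_KEY environment
# variable; the model name and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "user",
         "content": "In two sentences, explain what a large language model is."},
    ],
)
print(response.choices[0].message.content)  # human-like text, on demand
```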
INTRODUCTION
The need to predict the ‘next big thing’ at the intersection of technology and society is driving widespread speculation about how Generative Artificial Intelligence (GenAI) might impact upcoming elections in the United States and abroad. This working paper draws on twenty-one semi-structured interviews with practitioners and technologists to better understand how institutionalized political actors, including campaigns, nonprofit organizations, and consulting firms, are making use of generative AI for political marketing in the lead-up to the 2024 U.S. election. We shed light on the capabilities of GenAI that these actors are most excited to use and how they are thinking about the legal and ethical implications of these practices.
While all consultants interviewed had experimented with the technology personally or professionally, their impressions of its power and utility varied from heartily enthusiastic to deeply dismissive. Some described using the technology for political purposes already: creating virtual avatars of candidates, talking to voters on the phone, writing routine fundraising appeals, and doing a first pass at analysis of voter data and opposition research. Other consultants, though, plan to deploy generative AI cautiously, if at all, in their political work.
When asked about generative AI’s use across the field, many interview participants said that ‘good’ consultants follow an unwritten code of industry norms. Experimenting with new tools is part of their job, however, and right now, many are doing so without meaningful or comprehensive government oversight [For more on this, see Findings section “How Will Generative AI in Politics Be Regulated?”].
Even if such an internal, industry-based, regulatory code were more standardized, it might not hold for long. We have seen similar systems for governing trust and safety on social media platforms be built up and crumble, often quickly and at the capricious whim of powerful individuals. As the 2024 U.S. election nears, and as more actors deploy bespoke technologies in novel ways, political actors will be under pressure to perform and demonstrate impact on voters. Nothing drives the adoption of new techniques like demonstrated – or perceived – success.2
State of the Field
Researchers, journalists, and policymakers have been sounding alarm bells surrounding the use of generative AI in politics for years, especially since the release of OpenAI’s ChatGPT in November 2022. “The emergence of artificial intelligence has shown us just how rapidly things can change as technology grows more sophisticated. For the first time in history, we’re seeing how artificial intelligence could be used to generate images and videos in political ads, creating new avenues for misleading content,” said Senator Amy Klobuchar in May 2023 as she introduced legislation requiring disclaimers on political ads made using generative AI.3 Even model and media personality Chrissy Teigen expressed concern, tweeting after seeing an infamous AI-generated image of the Pope in a puffy coat, “I thought the pope’s puffer jacket was real and didn’t give it a second thought. No way am I surviving the future of technology.”4 More recently, U.S. Secretary of State Antony Blinken told the third Summit for Democracy, “We need to invest in civic and media literacy and empower citizens to distinguish fact from fiction – especially as new technologies emerge like generative AI that can fool even the most sophisticated news consumers.”5
Polling conducted by YouGov in 20236 showed that the majority of Americans were concerned about the use of generative AI in politics. Most Americans felt it is never acceptable to use generative AI to impersonate another person or to create political propaganda. Researchers have provided preliminary evidence that fears surrounding the use of generative AI for political propaganda7 and political microtargeting8 are not unfounded. The 2024 election season has already seen deepfake audio of President Biden discouraging voters in New Hampshire from going to the polls, campaign ads featuring synthetic images of former President Trump with Black voters, and large language models (LLMs) providing incorrect information on voting and elections in response to user prompts. In Ireland, a female candidate for office won her election narrowly despite a bombardment of AI-generated explicit images depicting her.9 In Moldova, President Maia Sandu was targeted by an ostensibly Russian-linked deepfake video.10 Even the CEOs of companies building this technology have expressed concerns that these models could be used for the mass spread of disinformation.11 “We are quite concerned about the impact this can have on elections,” OpenAI CEO Sam Altman told the Senate Judiciary Committee in a hearing in May 2023, “It’s one of my areas of greatest concern.”12
Concurrently, the use of AI to impersonate political candidates, including President Biden13 and former President Trump,14 as well as the use of AI by political candidates themselves,15 has become a frequent occurrence in the news. Although the success of these efforts in influencing voters is unclear, their existence underscores the ongoing, unfettered, experimental use of GenAI in and around politics.
OpenAI banned the use of ChatGPT for political campaigning in January 2024,16 but bespoke generative AI tools built specifically for this use are proliferating, such as the Ashley robocaller from Civox17 and VideoAsk from Typeform,18 both already used by candidates. Donald Trump’s former campaign manager, Brad Parscale, recently launched an AI-powered campaigning software about which he said, “If you’re not for self-governance, if you’re not for biblical values, you can’t use my software, because I am not for sins.”19 Parscale’s software is now running Donald Trump’s website.20 Even OpenAI’s ban is difficult to enforce. Individual users accessing the free version of GPT-3.5 may face resistance to prompts on political topics, but according to interview participants, paid users accessing the more recent and powerful GPT-4 seem to face fewer restrictions. OpenAI has difficulty proactively preventing PACs and other non-campaign entities from using its services for politicking; for example, a chatbot imitating presidential candidate Dean Phillips was developed by a PAC supporting his campaign and was taken down only after its public debut. Robert F. Kennedy Jr.’s campaign similarly debuted, and then removed, a chatbot powered by OpenAI’s technology.21
Not all political use cases involve campaigns. Politicians around the world – including in the United States – have used ChatGPT to write non-campaign speeches22 and legislation.23
Meanwhile, government action has mostly been limited in reach and often enacted state-by-state rather than nationally. Although President Biden signed an executive order in October 2023 regarding the responsible use of generative AI,24 its strongest transparency components only apply to models trained using more computing power than the current generation.25 Further, it does not address the use of generative AI in political campaigns – a technology Biden’s own campaign is already employing in fundraising and voter mobilization.26 While the Federal Election Commission has called for public comment on a proposed rule to prohibit the use of “deliberately deceptive” generative AI in political campaigns, Republican commissioners are skeptical that current statutes give the Commission authority to do so.27 Although legislators have introduced bills to promote transparency regarding the use of generative AI in political campaigns, no federal regulations prohibit its use in political campaigning. State legislatures, on the other hand, introduced as many as fifty AI bills a week in 2024, most of which regulated synthetic media.28 The United States thus faces the prospect of a patchwork landscape of state-based regulation, with lower protections overall for federal campaigns. Such segmented state regulation could produce a confusing, balkanized legal framework that is difficult to follow and abide by.29
Finally, it’s important to point out that generative AI’s impact on politics is unlikely to be wholly negative for democracy. Many interview participants suggested positive applications that excited them.30 For example, generative AI has the potential to allow politicians and candidates to speak directly to linguistic minorities in their own language, as demonstrated by New York Mayor Eric Adams.31 It also has the potential to reduce the amount of rote work in campaigning and other professions,32 freeing up skilled staff to focus on the most creatively demanding parts of their jobs.
METHODOLOGY
To gain a more ground-level view of these, and related, trends in the political use of generative AI during the 2024 U.S. election, our team conducted qualitative, semi-structured interviews with relevant stakeholders in the political generative AI space: three vendors of political generative AI tools, one political candidate, one legal expert, one technology expert, one expert on extremism, one digital organizer for an advocacy group, one trust and safety industry professional, four Republican campaign consultants, and eight Democratic campaign consultants. All interviews were conducted from January to April 2024.
FINDINGS
The candidate and all the political consultants we spoke to were familiar with generative AI and had used it to some degree. While some remain skeptical of the new technology, others are not only experimenting with it but are also building it into the infrastructure of their campaigns, with use cases ranging from enhanced fundraising, to faster A/B testing and data analysis, and even to voter-facing interactions. Our conversations with these individuals provide insights into (1) who is using generative AI in politics, (2) how they are using it, and (3) why individuals are currently using it and why others may use it in the future. We close with a discussion of the (4) current status of regulation also informed by interviewees.
Who Is Using Generative AI in Politics?
In some ways, the best place to look for generative AI’s most novel uses and most serious consequences is on the smaller stage of local and state campaigns. Ben, Deputy Director of Tech Strategy at a Democratic consultancy firm, asserted that campaigns sometimes lag in adopting new technology because “nobody wants to be the guinea pig.” This may be true for big, federal contests, yet we are already seeing the implementation of generative AI for political marketing purposes at the local level. Sam, a political technology expert, said, “We’re seeing proliferation of [synthetic media] being used to target lower-level political candidates [with disinformative deepfakes], [which is] a lot harder to detect than [use against] very prominent national figures.” Meili, a researcher specializing in online extremism, agreed, “It’s local where you’re going to see a lot of stuff heat up.” She noted that some groups who use generative AI nefariously are “grassroots” and know that narratives sown at the local level will make their way up, so “we need to really pay attention to local levels.”
This may change with time. Interview participants had nuanced and conflicting views about whether generative AI would be a long-term democratizing force, putting more power in the hands of smaller actors, or another tool used by powerful, well-resourced actors to entrench themselves more deeply. Ari, a legal expert on technology and the First Amendment, said that generative AI “reduces barriers to entry that [have] left our political system hijacked by people who are independently wealthy.” He hopes that new technology will empower grassroots actors and smaller campaigns to compete more effectively against better-funded public messaging efforts.
There is reason to doubt this optimistic outlook. Other interview participants predicted that actors with more money will be more likely to implement generative AI technology effectively, while smaller ones will struggle to do so. Taryn, founder and CEO of a Democratic digital marketing firm, is especially worried about anonymous ‘dark money’ groups who can do a lot with this new and unregulated technology. Technology expert Sam gave the example of Pakistan’s imprisoned former Prime Minister, Imran Khan, who used audio and visual deepfakes to campaign ahead of the election.33 Kate, a Democratic consultant who assists campaigns seeking to integrate generative AI into their work, shared similar concerns. But she also noted that free and accessible tech does not always lead to quick implementation:
“Every … campaign will not have the funding to make their expensive lawyers spend many hours reading regulatory and AI policy for them to just like, ask a chatbot to write them a social media post. … We’re gonna see folks be like, ‘Is this even worth my time if I’m going to have to jump through all these hoops to understand what these restrictions are?'”
Hillary, CEO of a political AI company, noted that while anyone can play around with ChatGPT, getting the most out of it as a campaign takes real know-how and investment. The user interface provided to the average user is not ideal for re-using data and keeping track of multiple drafts of messages to different audience segments, and LLMs perform better when trained for purpose – a process requiring thousands of pieces of custom content. Like the compliance concerns above, these barriers to entry related to finances and expertise are likely to limit the speed and ease with which small campaigns and allied groups can adopt a broad range of generative AI tools. Vincent, founder and CEO of a Republican consultancy firm, emphasized the relevance of so-called Super Political Action Committees (Super PACs), which typically have more money than the campaigns of the candidates they boost. Super PACs, he said, can afford to implement generative AI where the actual campaign cannot.
Interview participants said that the reputational risk from ‘bad’ uses of generative AI will deter many ‘good’ consultants from using those tactics. Democratic consultant Kate said that campaigns are worried about violating state or technology regulations while experimenting with AI. However, she qualified this by saying that “if you can show that there is an impact, campaigns have a hard time saying no.” For now, Kate said, another impediment to the adoption of generative AI in campaigns is the lack of infrastructural support from national party organizations. If the Democratic or Republican National Committees (DNC or RNC) begin to prioritize emerging technology and provide resources for campaigns, the current dynamic could change within a matter of months.
For now, the most concerning use cases come from less scrupulous actors who are not constrained by reputational – or even legal – risk. Incidents like the January 21st robocalls featuring a synthetic version of President Biden’s voice that urged voters to stay home in the New Hampshire primary demonstrate how smaller and more risk-tolerant actors use generative AI in novel ways. The calls appear to have been made as part of an effort by Steve Kramer, a consultant to the Dean Phillips presidential campaign (and a former political consultant for the rapper Ye, formerly known as Kanye West), and a New Orleans street magician named Paul Carpenter; the calls themselves were made by Life Corporation, a Texas-based telecommunications firm.34 Their motive appears to have been depressing turnout for President Biden in the New Hampshire presidential primary, despite the political consultant behind the call retroactively claiming he aimed to “wake up the country” to the dangers of AI in politics.35
Future incidents may have a larger impact. Joe, Principal Director at a Republican consultancy firm, said many actors who want to interfere in U.S. elections “don’t really wanna follow the laws because they know they’re doing bad things”:
“If [they] can generate [a candidate’s] voice easily through software that’s 99 cents on the App Store, [they] could send that out to 300 million Americans and confuse the hell out of them a week or two before the election … something as simple as sending out the wrong election date … The margins of victory in these campaigns are still so small … If they move the needle by a few thousand people, how does that change the course of things?”
Foreign and domestic actors aiming to interfere in electoral processes to damage American democracy would not be a new development. But Mike, the founder of a political AI company and current founder and CEO of a Democratic consultancy firm, emphasized that generative AI allows them to do so at a “scary” scale, streamlining the production of messages and making those efforts more efficient.
Mike told us that his experience building generative AI tools has led him to believe it will be difficult to prevent the political use of LLMs. Even if ChatGPT and its largest competitors issued a total ban on political use cases (and could actually enforce one), Mike believes that “some of these open-source large language models that exist … aren’t nearly as good, but they’re good enough” to have an impact.
How Are They Using Generative AI in Politics?
While our interviewees are divided in their judgment of GenAI’s value in politics and political communication (from genuine enthusiasm to outright dismissal), the majority said that generative AI has already accelerated the scale and speed of data analysis, data collection, and content generation. In other words, generative AI may change less what political propagandists do than how much of it they can do, and how fast. The risks scale accordingly: more content means more risk of exposure to such content. Repeatedly seeing the same content increases the likelihood that people believe it to be true. And more content, even if false, may make it more challenging to search for and find reliable content.
Many of the initial concerns about generative AI’s political impact revolve around its ability to quickly create personalized political messaging. This is unsurprising, given recent policy concerns surrounding digital disinformation and the fact that ChatGPT has become in many ways the most well-known generative AI tool. Generative AI significantly increases the speed at which political consultants can A/B test content and unearth the approaches and language that will be most compelling to target audiences. Because of this, observers warn that political actors may use artificial intelligence to influence voters with micro-targeted or hyper-targeted messages, which researchers have demonstrated can be more persuasive, even if such effects are difficult to prove empirically in the field.36 This is especially true when GenAI and human curation are combined.37
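To ground the A/B-testing claim, here is a hedged sketch of the arithmetic underneath such a workflow: comparing click-through rates on two message variants (say, two LLM-drafted subject lines) with a two-proportion z-test. This is not any interviewee’s actual tooling; the variant counts are hypothetical.

```python
# Minimal sketch of the A/B-testing workflow interviewees describe:
# compare click-through rates of two message variants (e.g., two
# LLM-drafted subject lines). All counts below are hypothetical.
from statistics import NormalDist

def two_proportion_z(clicks_a, sends_a, clicks_b, sends_b):
    """Two-proportion z-test; returns (z, two-sided p-value)."""
    p_a, p_b = clicks_a / sends_a, clicks_b / sends_b
    p_pool = (clicks_a + clicks_b) / (sends_a + sends_b)
    se = (p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Variant A: 240 clicks on 10,000 sends; Variant B: 310 on 10,000.
z, p = two_proportion_z(240, 10_000, 310, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p ≈ 0.002, so B wins
```

The statistics are decades old; what interviewees say is new is the speed with which GenAI can draft the variants, so that the test concludes while the message topic is still relevant.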
The political professionals we spoke to were divided about whether AI’s potential would be a small step or a great leap forward for those hoping to create more nuanced, demographically focused propaganda. Democratic consultant Mike and vendor Hillary said that generative AI does indeed allow for more efficient delivery of more effective messages. Roy, the CEO of a Democratic consultancy firm, stressed that generative AI may be able to segment people more quickly and reliably than traditional methods, leading to more effective digital advertisements.
But hyper-segmentation isn’t everything. Democratic consultant Taryn said, “Everybody thinks … I’m gonna really, like, find every single person and I’m gonna hit them in all the right ways. When in reality, especially in politics … maybe I do want to hit people outside [the target audience] … too many versions of too many things … may not be statistically significant … because [people’s] minds change minute to minute, day by day.” While she said that her organization uses generative AI for targeting, she also emphasized that “[they] can’t just reach every single person and assume it’ll be better than just a little bit of bleed.”
Whether or not “bleed” is desirable, researchers38 have warned about the persuasive power of targeted messages, especially identity-related (e.g., racial and gender) appeals.39 Further, interviewees consistently noted the gains in data analysis facilitated by generative AI, which tempers Taryn’s skepticism. With unprecedented speed and efficiency in analysis, consultants will be empowered to test which content is most effective and deploy that messaging in time to influence elections. As political AI vendor Hillary noted:
“Right now, a lot of people don’t even use A/B testing on subject lines because by the time the results come in from the A/B test, the relevance of the topic has passed, right? … The ability to quickly create content and test it, test the level of personalization that transforms donor or voter behavior in a statistically significant way, is something that we finally have the capability to actually start testing to view this cycle.”
Hillary’s colleague Mike, who founded Hillary’s political AI company before founding his own consultancy firm, elaborated on the gains in both data collection and data analysis that generative AI could facilitate toward improved persuasion. He said:
“You now have the ability to use AI to ask open-ended questions to get more information. So, I could call you and say, you know, ‘Who are you voting for in this election? Oh, you’re voting for Donald Trump. Why are you voting for Donald Trump? Just tell us, just talk to us.’ And what is lacking now is a human on the other end of the phone to read that, understand, [and ask] follow-up questions. But I could program an AI bot to do that and produce a report that says, ‘Of the 300 people who said they’re voting for Donald Trump, here’s what we learned about them based on their response.’ And there is already a company that is doing that underground. … And so I think there’ll be more examples of use cases like that where you’re sort of taking an existing model and processing tools, which in that case is called collated survey analysis, and then combining that with a large language model and other AI tools that can provide better sentiment analysis.”
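What Mike calls “collated survey analysis” can be sketched roughly as follows, assuming the OpenAI Python client as one plausible backend; the model name, question, and responses are illustrative, not the actual pipeline of the company he mentions.

```python
# Hedged sketch of the "collated survey analysis" workflow Mike
# describes: feed open-ended voter responses to an LLM and ask for a
# thematic summary. The client, model, and responses are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

responses = [
    "I'm voting for him because of the economy.",
    "Immigration is my top issue this year.",
    "I just don't trust the other candidate.",
]  # in practice, transcripts collected by a phone agent

prompt = (
    "Below are open-ended answers to the question 'Why are you voting "
    "for this candidate?'. Group them into themes, estimate the share "
    "of each theme, and note the overall sentiment.\n\n"
    + "\n".join(f"- {r}" for r in responses)
)

report = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
)
print(report.choices[0].message.content)
```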
As such, while generative AI tools may not yet be capable of hyper-persuasive content creation, the unprecedented levels of data collection and speed in data analysis that generative AI affords may indeed facilitate the creation and spread of human-generated content that is both hyper-targeted and hyper-effective. In some cases, such as that of the AI-powered Ashley robocaller, interfacing with voters in the way that Mike outlines is transparently AI – Ashley was intentionally created to sound like a robot for transparency purposes. Yet, political AI creator Ilya noted, “[AI-generated voice] already sounds almost exactly the same [as a human]. It will sound absolutely exactly the same.” And about such AI chatbots, he said, “There will be more, there will be many more, some of them will have robotic voices and some of them probably won’t.” His assertion raises the question: will voters always know when they are speaking to a robot? And how might a lack of transparency in such conversations, especially when deployed by less scrupulous actors, contribute to voter persuasion, and even deception?
Many interviewees shared the perspective that society is just beginning to understand how generative AI is and will be used in and around politics. They said that they expect to see more imaginative uses in the years beyond 2024. Mike, the Democratic consultant who also founded a political AI company, told us that, “the first wave is teaching AI to do what [digital political marketers] do now. But once we free people up from having to draft the same similar version of a fundraising email … I think then we can start to talk about the creative evolution of these things.” He elaborated that he expects to see “advanced segmentation. Advanced social listening in all aspects of the org.” He noted that, over the last two decades, Democratic political campaigns have experienced “a shocking lack of innovation” because of a scarcity of resources in comparison to the amount of work they have to do.
Republican campaign consultant Joe emphasized that political campaigns and operatives already operate in data-rich environments. “There [are] a lot of data points, much more than a thousand” on various groups of individuals in the datasets he uses. Yet with forthcoming advances in generative AI, advanced segmentation could allow consultants to “theoretically create a million different personalized pieces of copy from maybe one or two base pieces of copywriting,” he said. Joe noted that, “Down the road…I think junk mail is gonna be a thing of the past because of AI.” This means that rather than people disregarding content from campaigns because it is generic, they will be more likely to engage with it because it is hyper-personalized. “[Campaigns will] probably have the technology and the investment to make [content] as personalized as possible … that will move the needle.”
Our interviewees also noted the impact of generative AI in fundraising efforts. Democratic consultant Taryn told us that targeted messaging is currently more meaningful for helping campaigns access donors than for persuading voters. When contacting voters, a campaign aims to target those outside of their base; when fundraising, they target those who seem similar to people who’ve given to a campaign or candidate in the past. What’s more, an anonymous Democratic communications professional told us that fundraising relies on a single metric of success: did the recipient click the donate button? With persuasion, however, there are an unknowable number of factors that might influence a voter’s choice between the time they receive a message and the time they cast a ballot. “The data inputs are just stronger” in fundraising, they said. “You can tie the voter signal to a very clear outcome: did they give?”
The use of generative AI in fundraising is not limited to content creation achieved by training an LLM on fundraising emails. Political consultant Mike noted the data collection power that generative AI could afford toward targeted fundraising efforts. He explained that at the political AI company now run by Hillary, they’ve considered – though not yet tried – having a user of their technology “upload their email list to [the LLM] and [the company] can then listen to what people are saying on social media and could then see, hey, you know, [the candidate’s] audience is really talking about the minimum wage today. Maybe [the candidate] should send a fundraising email on the minimum wage.” If such tactics are employed ubiquitously, it is unclear what level of transparency or disclosure will be given to those whose social media are surveilled for fundraising purposes.

All interviewees told us that generative AI technology is advancing quickly, and “the political future” in which it plays a core role is “right around the corner.” Pat, founder and President of a Democratic consultancy firm, said, “Nothing works as well as you can imagine it working in a few years.” Chris, Senior National Strategist at a Republican consultancy firm, said, “There will be a company that allows you to gamify your campaign.” Republican consultant Vincent predicted the use of AI to make “more content and better” across mediums including campaign-related games, Sudoku puzzles, cartoons, poems, and coloring books. While future cycles may see examples of this boundless creativity, campaigns are currently taking a cautious approach. But while experimentation may be slow, adoption is coming more quickly: Ilya, a political technology vendor who built the first political robocaller powered by an interactive AI chatbot, said that generative AI “is going to be reasonably commoditized even by the end of this year.”
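Returning to the social-listening idea Mike described above (considered at his former company, but not yet tried), a minimal sketch of the trigger logic might look like the following; the topics, keywords, and threshold are hypothetical.

```python
# Hedged sketch of a social-listening trigger: when a topic spikes
# among a candidate's audience, flag it as a fundraising-email hook.
# Topic keywords, threshold, and posts are all hypothetical.
from collections import Counter

TOPIC_KEYWORDS = {
    "minimum_wage": ["minimum wage", "living wage", "$15"],
    "healthcare": ["healthcare", "insurance", "medicare"],
}

def trending_topic(posts: list[str], threshold: int = 3) -> str | None:
    """Return the most-mentioned topic if it clears the threshold."""
    counts = Counter()
    for post in posts:
        text = post.lower()
        for topic, words in TOPIC_KEYWORDS.items():
            if any(w in text for w in words):
                counts[topic] += 1
    top = counts.most_common(1)
    return top[0][0] if top and top[0][1] >= threshold else None

posts = [
    "We need a $15 minimum wage now.",
    "Raise the minimum wage!",
    "A living wage is overdue.",
]
print(trending_topic(posts))  # -> "minimum_wage": today's email hook
```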
Some interviewees argued that generative AI’s most promising use for campaigns lies in producing messages in the language, vernacular, and tone best suited for the intended audience. “We have a motto, ‘look like, act like, talk like,’” said Craig, the co-founder and CTO of yet another political AI company. “Outside of all the hype, generative AI is extremely useful for politics because of the way it can translate language aware of context and culture… the best way to convince [a voter] is to look exactly like a tribe member… [Your] ability to imitate speech almost imperceptibly different than a human writing is a great language democratizer.” Craig was excited by examples like New York City Mayor Eric Adams’s use of AI to disseminate public announcements in multiple languages. “That’s how you should be using it. But I fear [people] are more interested in the Dean Phillips bot, [which used ChatGPT to create an impersonation of the U.S. presidential candidate]. That’s not the power of AI.” Successful imitation may make messages more compelling, even if astroturf politics and appropriative mimicry are not exactly innovations.
Election technology vendor Ilya explained the role of AI voice agents as the new phone bankers in this context:
“One of the things we thought about is geographically, just in the United States, if you speak to regular political phone bankers, they’ll tell you that the kind of phone banker you want calling in Alabama is different from the kind of phone banker you want calling in Portland, Oregon. We are able to take that information and quite frankly test on large enough numbers what voice works best … and then we can customize the voice based on what we know about the number that we’re calling. We have rich demographic information [voting, income, etc.]. What we say and how we say it is customizable.”
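A hedged sketch of the voice-matching logic Ilya describes might select a synthetic-voice profile from what is known about the number being called. The profile names and lookup rules below are invented for illustration, not his company’s actual system.

```python
# Minimal sketch of demographic voice matching: pick a synthetic-voice
# profile based on metadata about the callee. Profiles and rules here
# are purely hypothetical.
from dataclasses import dataclass

@dataclass
class Callee:
    state: str
    age_bracket: str       # e.g., "18-29", "65+"
    preferred_language: str

VOICE_PROFILES = {
    ("AL", "en"): "southern_warm_female",
    ("OR", "en"): "pacific_casual_male",
    ("NY", "es"): "neutral_spanish_female",
}

def pick_voice(callee: Callee) -> str:
    """Return the best-testing voice for this callee, with a fallback."""
    key = (callee.state, callee.preferred_language[:2])
    return VOICE_PROFILES.get(key, "neutral_standard")

print(pick_voice(Callee("AL", "65+", "en")))  # -> southern_warm_female
```

In practice, Ilya says, the mapping itself would be learned by testing voices “on large enough numbers,” not hand-written as above.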
Ilya emphasized that he is careful about which clients he provides with tools, currently working only with Democrats in the United States and on a “very selective,” case-by-case basis abroad. But as the marketplace grows, bad actors may find less scrupulous vendors. Sophisticated threat actors like foreign governments may even develop their own tools.
Other interviewees predicted that the use of generative AI in politics would spur a shift in the way voters understand candidates and authenticity. Republican consultant Vincent said that political marketers wouldn’t need the candidate for every event or photo-op if they could simply use an AI version. He said that an AI version of a candidate isn’t necessarily a false representation, just a different one. And while he clarified that campaigns shouldn’t use deepfakes to put words in candidates’ mouths, he said he does envision a world in which the ethos of a candidate portrayed by AI would be as valid a messenger as the candidate themself. He gave the example of using a generative AI version of a candidate to engage with something President Biden said in instances when the candidate himself wasn’t able to respond.
Why Are They (And Others) Using Generative AI in Politics?
Interviewees expressed hopes that the use of generative AI in politics would democratize the campaign space. They were, for instance, excited about how the technology might allow them to engage with marginalized groups of voters or to empower smaller campaigns and nonprofits. Democratic vendor Hillary warned that rules or best practices that strictly curtail political uses of generative AI risk disabling these benefits. Legal expert Ari said that generative AI is underhyped in its ability to connect people. He said, “[It] makes communication easier. It makes it more effective, more persuasive.” He said that this isn’t a negative, it’s a benefit that facilitates the spread of ideas and “increases people’s feeling of belonging within a political system and makes them more likely to participate in it.”
Republican consultant Vincent similarly predicted that the technology will allow the next generation of voters to engage with digital representations of candidates in ways they could never engage flesh-and-blood politicians. Shamaine, the Democratic candidate who deployed an AI robocaller, said, “We exclude a lot of people in the election process” and was excited to use generative AI to engage with people in different languages and to answer questions that people might be afraid or embarrassed to ask of a human volunteer (for instance, questions about polling places). “People don’t feel judged by machines usually, so they’re more likely to be candid and ask real questions,” said Shamaine.
Political consultants’ goal, writ large, is to sway voters to support their candidates. As such, generative AI is being employed in various ways – data analysis, fundraising, content generation, interacting with voters, and mimicking candidates – to persuade voters. While all interviewees expressed that they do so ethically, they voiced concerns about the use of the technology by those without such commitments to ethics. Generative AI’s role in the 2024 election, then, seems likely to be as an accelerant of preexisting trends: the spread of election-related rumors and propaganda, increased affective polarization,40 and an increase in low-quality sources of news and information making it more difficult to find and identify reliable information (especially in a “data void,” or a topic where little authoritative information exists, like an emerging situation).41 These have been and will continue to be major concerns in U.S. elections.
Attempts to sway elections by deceiving voters or suppressing votes are hardly new. Even in the digital sphere, these strategies already have a relatively well-documented history. Generative AI is likely to dramatically increase the speed and scale of deceptive election-related messaging. The content itself, while unlikely to be more compelling than human-generated content, will be better than a bot pulling from a corpus of five stock phrases. Although some consultants we spoke to said, “We don’t use AI to write content,” because “AI is scouring the internet for what’s already been done… [we] want fresh content,” lone wolf propagandists may consider this content good enough, especially given the ability to amplify such messages quickly and extensively. Rather than producing more potent language, AI’s primary contribution could be a significant increase in the speed and volume of content that is better if not perfect. AI-generated content could even be distributed through interactive sockpuppet accounts.
If this becomes too commonplace, however, this approach risks falling into the same trap ensnaring political SMS messaging: overuse reduces the user experience to spam. Democratic political marketing consultant Taryn said, “If people don’t trust [the messaging] and it’s not authentic it will go in the spam bucket pretty quickly.” Congressional candidate Shamaine noted that when her campaign deployed an AI robocaller, the longest call lasted for just three minutes. To put it simply, even if interactive AI chatbots are novel, if people find them untrustworthy or they become commonplace, they may not be as effective.
Past research at the Propaganda Research Lab at UT Austin’s Center for Media Engagement has described how false and manipulative content can fill “data voids” (topics around which there is a lack of quality, reliable information), especially in languages other than English.42 Generative AI may exacerbate this problem. Britt, a digital political strategist who recently ran a pro-abortion advocacy campaign in Ohio, explained how generative AI may be employed by politically motivated actors to game search engines. For example, by using specific and curated keywords to ensure the proliferation of content when someone searches for certain topics, data voids can be filled with false or partisan information. Even content of average persuasive power can confuse and overwhelm voters if it dominates search results, so it can still “influence the narrative” – for both users and LLMs that draw from such content. “It could become a reinforcement, and then that message is persistent and eventually it becomes a fact according to these programs,” said Britt. The potential use of GenAI to more effectively game data voids for political purposes carries many dangers: more content means more risks of exposure to such content; repeatedly seeing the same content increases the likelihood that people believe it to be true;43 and more content may make it more challenging to find and recognize reliable content amid a sea of disingenuous sources.
Democratic consultant Roy said that “the slight differences between what’s real and what’s fake are even more intricate than they used to be.” His perspective underscores the common fear that generative AI will deepen public distrust in institutions and information and further diminish voters’ ability to tell truth from fiction. This concern comes not just from the increased proliferation of false information but also from the “liar’s dividend” – political actors’ ability to claim that genuine sources of scandal were AI-generated.44 As tech expert Sam said, “Someone very purposefully claiming something is AI … builds on the really real gap in access to those tools and in public understanding.”
While we may not yet have seen wide use of the liar’s dividend, all of our interviewees noted that the technology is advancing rapidly in the political domain. Some saw a near future in which AI agents are setting policy or even running as candidates. “Suddenly you blink twice and we’re going to war and people are dying because a computer said so,” Ilya warned. Indeed, AI is already being employed in war efforts, as in Israel’s use of AI to find targets in Gaza.45 Ilya gave the example of New York City councilmember Susan Zhuang using AI to answer questions a journalist posed to her.46 While Councilmember Zhuang used AI for clarity because English is not her first language, Ilya queried, “If [a candidate] implements those policies [crafted by AI], whose policies were they?” That is, if candidates use generative AI to draft policies and legislation, which has already occurred,47 and those policies are implemented, are generative AI bots then setting policy? Ilya emphasized that whether society moves in this direction or not, humans, not AI, must make this decision, and we cannot “sleepwalk into a future…that we don’t want to be in.”
How Will Generative AI in Politics Be Regulated?
For now, consultants and vendors told us that they are largely self-regulating. President Biden’s executive order on the responsible use of AI notably does not include any regulations that implicate political campaigns. Other attempts at regulation thus far have been state-led, leading to a patchwork and inconsistent policy landscape. Consider that in February 2024, Axios reported that state legislatures were putting forward an average of fifty AI-related bills per week, about half of which dealt with deepfakes; California and New York saw the largest number of proposals.48 Democratic consultants and vendors alike spoke of the need for regulation, and all vendors and consultants spoke of building bespoke tools with their own guardrails. Democratic consultant and former vendor Mike said, “I stick to ethical guidelines because I’m a good person, but there’s no teeth to enforce it.” Republican campaign consultant Joe made a similar claim: “I know the people in my circle are gonna do the right thing.” Mike said he worried about other vendors because in many cases, “All they care about is making money.”
While all interviewees agreed that regulation and oversight surrounding the use of generative AI in politics is necessary, there were stark differences in beliefs among Democrats and Republicans about how it should be regulated. The Democratic vendors and consultants that we spoke to emphasized the need for government regulation. Democratic consultant Taryn called the campaign AI space a “wild west,” lamented the tendency to regulate new technology reactively after serious incidents, and singled out specific disclosure requirements and guidelines as an important area of concern. Republican consultants, meanwhile, endorsed government regulations hesitantly if at all; Republican communications consultant Vincent said that falsely portraying a candidate with synthetic media is something that “needs to be regulated,” even if he “hate[s] that term.” Ari, the legal expert working for a libertarian-leaning nonprofit, stood out among our interviewees, saying, “I think we let innovation do its thing and we respond to actual problems… it’s much more effective to assess problems as they come up. You can tailor the response without being overbroad or underinclusive.”
The Republican consultants who were skeptical of governmental regulation of AI instead advocated for relying upon extant political marketing regulations set by the Federal Election Commission (FEC) and the American Association of Political Consultants (AAPC), among others, surrounding disclosure of who paid for an advertisement and sourcing claims made about other candidates, but adding in disclosure about the use of AI. Republican campaign consultant Joe noted that such disclosures can be small, as on a paper mailer “in 12-point font.” About the use of generative AI in politics, he said, “Anybody can do anything, and it’s about, what are the penalties for what they’re doing?” Republican interview participants were – consistent with the party’s other policy preferences regarding social technology systems – more skeptical of regulation. They said they preferred in-ad disclosure that AI had been used to create the ad, as opposed to regulations enforced by the state.
Democratic consultant Mike said that professional bodies like the AAPC lack “teeth” to enforce rules and that consultants “need local, state, [and] federal government[s] to act and give us guardrails with strong enforcement mechanisms to hold us accountable.” He told us that if political misuses of generative AI are only punished with a slap on the wrist, those who deploy the technology “don’t care.” If the FEC fine is far less than the profit, the fine is inconsequential. Democratic vendors Hillary and Mike both said that the U.S. currently only has piecemeal “self-governance” of political generative AI and that the approach “is not going to work long-term.” Some Republican consultants who were skeptical of governmental regulation instead advocated for relying upon tools to detect and mitigate political misuses of AI. Underscoring extant research,49 political technology vendor Ilya said that he was skeptical of technical approaches like these. Instead, he said:
“I’m suggesting a regulatory and legislative policy of mandatory disclosure that could work for video and pictures as well, but only to the extent that they are broadcast through regulated mediums – television, radio. Is it going to work on Reddit? Probably not. Facebook has some restrictions around AI being flagged as such. Does it work? Probably kind of. But it doesn’t work 100% of the time. And I think that ship has sailed … I think text watermarking has sailed. I think picture watermarking has sailed, I think video watermarking has sailed. I think you can have legislative solutions in individual countries that govern watermarking of content distributed through regulated mediums but that’s pretty much it. That’s all we have left. … I think at the moment in which AI models can solve CAPTCHAs, you’re beyond the point of being able to stamp something in pixels on a video that can’t be detected and removed very simplistically later.”
This partisan divide is reflected elsewhere in conversations about generative AI and politics. It is, for example, reflected in the DNC’s and RNC’s respective responses to the FEC’s call for comment on amending political campaign regulation surrounding “deliberately deceptive” political content produced using generative AI. The DNC’s public comment on the matter states that “The [Federal Election] Commission has clear authority to issue regulations clarifying the application and scope of the fraudulent misrepresentation statute at 52 U.S.C. § 30124.” The RNC’s comment says:
“The RNC is concerned about the potential misuse of AI in political campaigns but believes that the Petition’s proposed expansion of 11 C.F.R. § 110.16(a) is not the answer. There are serious questions about the legality of the Petition’s proposed rule, and it would not address the use of AI-generated deepfakes by third-party bad actors. For these reasons, the Commission should deny the Petition and instead ask Congress to take a holistic look at AI and enact a coherent statutory framework under FECA.”
What’s more, questions remain surrounding the need to disclose the use of generative AI when these tools have been used not to create content, but to enable mass data collection and accelerate data analysis toward the human creation of highly personalized and, potentially, highly persuasive content. As Hillary and Mike noted, these are likely to be the most impactful uses of generative AI, at least in this election cycle. The Federal Election Commission currently excludes “communications over the internet” from its definition of “Public Communications.”50 The RNC’s question over the legality of the Petition’s proposed rule may thus hold water. As it currently stands, the FEC has limited its capacity to regulate this technology and must indeed “take a holistic look at AI” to address its potential harms effectively.
The polarized, partisan debate over social media regulation is continuing in the generative AI space, with Democrats advocating for federal regulation and Republicans doubting the FEC’s authority to enact such regulation. Democrats and Republicans view regulatory safeguards on human prompts and AI outputs in starkly different ways – and advocate for controls (or a lack thereof) that seem to mirror their stances on more general regulation of speech, including disinformation, on social media.51 While Democrats consistently advocated for state regulation of the political use of generative AI, Republicans said that such regulatory mechanisms amounted to “censorship.”
CONCLUSION
Taken together, the use of generative AI tools for 1) collecting large swaths of data about voters; 2) speeding data analysis toward effective hyper-targeting of voters; and 3) interactively interfacing with voters in voices that sound (almost) human, in any language, raises serious concerns for democracy. As we have found in past work,52 linguistic minorities and diaspora communities may be at particular risk for targeted disinformation and propaganda campaigns, given the data void that exists surrounding reliable information about U.S. electoral processes in languages other than English and the potential for using generative AI to mimic culturally and linguistically accurate speech.
While certain uses of generative AI have democratic potential, the lack of regulation and consistent industry standards will provide space for malevolent actors to develop and use these tools without legal, technical, or ethical guardrails for election integrity. We were at this juncture several years ago with regard to social media and should learn from our mistakes – most notably the failure to systematically regulate emerging media technologies in the face of clear harm. Just a decade ago, experts were still excitedly discussing the democratizing potential of social media – only a few warned about its potential for exploitation. The situation is different with generative AI. Many are calling for clear regulation of this technology. We should follow such calls and be guided by experts with knowledge of the inner workings of these technologies, the law, and society.
As researchers have noted previously in the case of political bots,53 the use of generative AI tools may open loopholes in regulation surrounding campaign spending. For instance, campaign finance law currently treats an individual’s expenditure in support of a campaign as a campaign contribution when the expenditure was made in coordination with a candidate (thus increasing regulation). But what if an individual chooses to make such an expenditure in coordination with an AI-generated version of a candidate? To make this more concrete, imagine that the Biden robocalls had been paid for by an individual outside of Dean Phillips’s campaign (in which case the spending would be a less regulated expenditure) in coordination with a digital persona based on Dean Phillips – not a simple chatbot like the one that imitated Phillips, but a much more sophisticated imitation capable of discussing campaign strategy. How would such an expenditure be regulated under current U.S. law? This is currently speculation, but it may soon be reality, particularly given how little the example deviates from recent events: Steve Kramer orchestrated the actual calls (ostensibly without Dean Phillips’s knowledge or approval), and the actual Dean Phillips bot was operated by a PAC supporting the campaign. Continued lack of digital regulation, alongside continuous bipartisan disagreement about what potential regulation should look like, may allow the current landscape – which many interviewees characterized as “the Wild West” – to continue well past 2024.

When discussing tech policies, however, it is important to focus not only on the technology itself but also on the humans behind it – the people who develop and use it. What are their goals and intended outcomes? What are the attendant harms to society – to our trust in institutions and one another and, more generally, to our safety and privacy? As Republican consultant Joe said, we should worry not about “the driving,” but about “the guy who decided to get drunk and get into his car because he doesn’t care about laws on the road.”

We therefore join other researchers54 in calling for policies that ban and effectively penalize voter suppression, incitement to imminent lawless action or violence, defamation, fraud, and other harms that GenAI may facilitate, rather than for outright bans on the technology’s use in politics. Further, disclosure and transparency surrounding the use of generative AI must extend beyond content creation to include data collection and data analysis as well. Lawmakers and regulators should mitigate such risks by creating and consistently maintaining (in step with technological innovation) legal and technical safeguards against misuse.
Most importantly, the rapid advancement of GenAI underscores the need to treat new technologies not as a separate policy area, but as something to be integrated into all policy areas. Building GenAI into policy proposals concerning infrastructure, media regulation, and the like – rather than attempting to regulate the online and offline worlds separately – would be more future-proof. If this were achieved, future generations would have far less reason to fear the harms and undemocratic effects of malevolent exploitation, because policies across the board would be proactive rather than reactive. Education for the public and policymakers can help society build lasting informational literacy and resilience to future techno-social manipulation. Academic research, meanwhile, will continue to help us understand the role of artificial intelligence in shaping our society and our politics – allowing us to regulate it accordingly.
Finally, technology companies should refrain from replacing humans with unsupervised generative AI, especially on trust and safety teams. As Democratic digital strategist Britt emphasized, “We do need to have good quality and fair human evaluators to continue to teach AI.” The technology industry and the advocates engaged with it should create enforceable industry standards. In the same vein, we advocate for a continued and expanded focus on trust and safety at technology companies – human safety and democratic stability should not lose out to profit maximization (again). Further, our society is already affected by GenAI-related job cuts, and political marketing is no exception: phone bankers, for example, could be replaced entirely by AI agents. Such losses are not only devastating on an individual level but also obfuscate the human labor necessary for generative AI to work as intended.55 As Mike said, “We have to be really careful about…using [AI] in a way where it leads us to make the wrong decisions.”
Widespread and more creative manipulative uses of generative AI in politics will likely emerge in future cycles, after the 2024 U.S. election. As Democratic consultant Craig said, “The really extremely bad stuff is coming in 2026. We’re gonna see the beta version of it this year.” Concerning propagandistic uses of GenAI are already being deployed: fake audio of politicians is circulating, synthetic images are being distributed, and political consultants are using AI to segment and sharpen their messages. Policymakers and others working to combat these adverse uses must act now.
ACKNOWLEDGMENTS
The authors thank the interviewees for sharing their time and insights. We also thank Madison Esmailbeigi and Tanvi Prem for their diligent research support. This study is a project of the Center for Media Engagement (CME) at The University of Texas at Austin and is supported by the Open Society Foundations, Omidyar Network, and the John S. and James L. Knight Foundation. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding bodies.
- Trauthig, I. K., Martin, Z. C., & Woolley, S. C. (2023). Messaging Apps: A Rising Tool for Informational Autocrats. Political Research Quarterly, 10659129231190932. https://doi.org/10.1177/10659129231190932; Woolley, S. C., & Howard, P. N. (2018). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press; Xu, J., & Sun, W. (2021). Propaganda innovation and resilient governance: The case of China’s smog crisis. Media Asia, 48(3), 175–189. https://doi.org/10.1080/01296612.2021.1928989; Zou, S. (2023). Restyling propaganda: Popularized party press and the making of soft propaganda in China. Information, Communication & Society, 26(1), 201–217. https://doi.org/10.1080/1369118X.2021.1942954[↩]
- Kreiss, D., & Jasinski, C. (2016). The Tech Industry Meets Presidential Politics: Explaining the Democratic Party’s Technological Advantage in Electoral Campaigning, 2004–2012. Political Communication, 33(4), 544–562. https://doi.org/10.1080/10584609.2015.1121941[↩]
- Klobuchar, A. (2023, May 15). Klobuchar, Booker, Bennet Introduce Legislation to Regulate AI-Generated Content in Political Ads. U.S. Senator Amy Klobuchar. https://www.klobuchar.senate.gov/public/index.cfm/2023/5/klobuchar-booker-bennet-introduce-legislation-to-regulate-ai-generated-content-in-political-ads[↩]
- Teigen, C. (2023, March 26). I thought the pope’s puffer jacket was real and didnt give it a second thought. No way am I surviving the future of technology [Tweet]. Twitter. https://twitter.com/chrissyteigen/status/1639802312632975360; Richardson, C. (2023, March 31). The Pope Francis puffer coat was fake – here’s a history of real papal fashion. The Conversation. http://theconversation.com/the-pope-francis-puffer-coat-was-fake-heres-a-history-of-real-papal-fashion-202873[↩]
- Blinken, A. J. (2024, March 18). Building A More Resilient Information Environment. United States Department of State. https://www.state.gov/building-a-more-resilient-information-environment/[↩]
- Orth, T., & Bialik, C. (2023, September 12). Majorities of Americans are concerned about the spread of AI deepfakes and propaganda. YouGov. https://today.yougov.com/technology/articles/46058-majorities-americans-are-concerned-about-spread-ai[↩]
- Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3(2), pgae034. https://doi.org/10.1093/pnasnexus/pgae034[↩]
- Goldstein, J. A., Sastry, G., Musser, M., DiResta, R., Gentzel, M., & Sedova, K. (2023). Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations (arXiv:2301.04246). arXiv. https://doi.org/10.48550/arXiv.2301.04246[↩]
- Scott, M. (2024, April 16). Deepfakes, distrust and disinformation: Welcome to the AI election. POLITICO. https://www.politico.eu/article/deepfakes-distrust-disinformation-welcome-ai-election-2024/; Burke, G. (2024, February 27). Chatbots’ inaccurate, misleading responses about US elections threaten to keep voters from polls. AP News. https://apnews.com/article/ai-chatbots-elections-artificial-intelligence-chatgpt-falsehoods-cc50dd0f3f4e7cc322c7235220fc4c69[↩]
- Scott, M. (2024, May 7). Moldova fights to free itself from Russia’s AI-powered disinformation machine. POLITICO. https://www.politico.eu/article/moldova-fights-free-from-russia-ai-power-disinformation-machine-maia-sandu/[↩]
- Helmore, E. (2023, March 17). ‘We are a little bit scared’: OpenAI CEO warns of risks of artificial intelligence. The Guardian. https://www.theguardian.com/technology/2023/mar/17/openai-sam-altman-artificial-intelligence-warning-gpt4; Stop talking about tomorrow’s AI doomsday when AI poses risks today. (2023). Nature, 618(7967), 885–886. https://doi.org/10.1038/d41586-023-02094-7[↩]
- Hendrix, J. (2023, May 16). Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI | TechPolicy.Press. Tech Policy Press. https://techpolicy.press/transcript-senate-judiciary-subcommittee-hearing-on-oversight-of-ai[↩]
- Seitz-Wald, A. (2024, February 26). Democratic operative admits to commissioning fake Biden robocall that used AI. NBC News. https://www.nbcnews.com/politics/2024-election/democratic-operative-admits-commissioning-fake-biden-robocall-used-ai-rcna140402[↩]
- Nehamas, N. (2023, June 8). DeSantis Campaign Uses Apparently Fake Images to Attack Trump on Twitter. The New York Times. https://www.nytimes.com/2023/06/08/us/politics/desantis-deepfakes-trump-fauci.html[↩]
- Calma, J. (2023, October 17). NYC Mayor Eric Adams uses AI to make robocalls in languages he doesn’t speak. The Verge. https://www.theverge.com/2023/10/17/23920733/nyc-mayor-eric-adams-ai-robocalls-spanish-mandarin[↩]
- Martínez, A., & McLaughlin, J. (2024, January 19). Politicians, lobbyists are banned from using ChatGPT for official campaign business. NPR. https://www.npr.org/2024/01/19/1225573883/politicians-lobbyists-are-banned-from-using-chatgpt-for-official-campaign-busine[↩]
- Tong, A., & Coster, H. (2023, December 15). Meet Ashley, the world’s first AI-powered political campaign caller. Reuters. https://www.reuters.com/technology/meet-ashley-worlds-first-ai-powered-political-campaign-caller-2023-12-12/[↩]
- Swenson, A. (2023, July 5). Mayor Suarez launches an artificial intelligence chatbot for his presidential campaign. AP News. https://apnews.com/article/florida-suarez-2024-campaign-artificial-intelligence-chatbot-6522b5802b8b33ee455c6c9eb367dd81[↩]
- Associated Press. (2024, May 5). Trump’s former digital guru embraces AI, seeks 2024 GOP victory. https://apnews.com/video/artificial-intelligence-united-states-government-new-hampshire-michigan-domestic-news-31bb8d1109f14cf1b67c6082faa1aba0[↩]
- Burke, G., & Suderman, A. (2024, May 6). Brad Parscale helped Trump win in 2016 using Facebook ads. Now he’s back, and an AI evangelist. AP News. https://apnews.com/article/ai-trump-campaign-2024-election-brad-parscale-3ff2c8eba34b87754cc25e96aa257c9d[↩]
- Dwoskin, E. (2024, January 23). OpenAI suspends bot developer for presidential hopeful Dean Phillips. Washington Post. https://www.washingtonpost.com/technology/2024/01/20/openai-dean-phillips-ban-chatgpt/; Dominguez, P. (2024, March 4). A descendant of Kennedy illegally used an OpenAI AI. Softonic. https://en.softonic.com/articles/a-descendant-of-kennedy-illegally-used-an-openai-ai[↩]
- Deans, D. (2023, June 21). ChatGPT: Welsh politician uses AI chatbot to write speech. BBC. https://www.bbc.com/news/uk-wales-politics-65976541; Truitt, B. (2023, January 26). Rep. Jake Auchincloss uses ChatGPT artificial intelligence to write House speech. CBS Boston. https://www.cbsnews.com/boston/news/chatgpt-artificial-intelligence-change-future-of-education-congressman-jake-auchincloss-house-speech/[↩]
- Van Buskirk, C. (2023, January 26). Mass. Lawmaker uses ChatGPT to help write legislation limiting the program. MassLive. https://www.masslive.com/politics/2023/01/mass-lawmaker-uses-chatgpt-to-help-write-legislation-limiting-the-program.html[↩]
- The White House. (2023, October 30). Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The White House. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/[↩]
- Henshall, W. (n.d.). The next generation of AI models could be large enough to qualify for the requirements in the Executive Order. TIME. Retrieved April 5, 2024, from https://datawrapper.dwcdn.net/xinqH/7/[↩]
- Associated Press. (2024, May 5). Trump’s former digital guru embraces AI, seeks 2024 GOP victory. https://apnews.com/video/artificial-intelligence-united-states-government-new-hampshire-michigan-domestic-news-31bb8d1109f14cf1b67c6082faa1aba0 [↩]
- FEC. (2023, August 16). Comments sought on amending regulation to include deliberately deceptive Artificial Intelligence in campaign ads. FEC.Gov. https://www.fec.gov/updates/comments-sought-on-amending-regulation-to-include-deliberately-deceptive-artificial-intelligence-in-campaign-ads/; Klar, R. (2023, August 10). FEC to consider new rules for AI in campaigns. The Hill. https://thehill.com/policy/technology/4147242-fec-to-consider-new-rules-for-ai-in-campaigns/[↩]
- Heath, R. (2024, February 14). Exclusive: AI legislation spikes across U.S. states to combat deepfakes. Axios. https://www.axios.com/2024/02/14/ai-bills-state-legislatures-deepfakes-bias-discrimination; Ahmed, T. (2023, May 10). Minnesota advances deepfakes bill to criminalize people sharing altered sexual, political content. AP News. https://apnews.com/article/deepfake-minnesota-pornography-elections-technology-5ef76fc3994b2e437c7595c09a38e848; Chidi, G. (2024, March 20). Georgia lawmakers are using an AI deepfake video to try to ban political deepfakes. The Guardian. https://www.theguardian.com/us-news/2024/mar/20/georgia-ai-ban-political-campaigns-deepfakes; Downey, R. (2024, March 26). Texas lawmakers and agency leaders experiment, ponder policies for an AI future. The Texas Tribune.[↩]
- Ravel, A. M., Woolley, S. C., & Sridharan, H. (2019). Principles and Policies to Counter Deceptive Digital Politics (pp. 1–26). MapLight and Institute for the Future.[↩]
- Eisen, N., Turner Lee, N., Galliher, C., & Katz, J. (2023, November 21). AI can strengthen U.S. democracy—And weaken it. Brookings. https://www.brookings.edu/articles/ai-can-strengthen-u-s-democracy-and-weaken-it/[↩]
- Fitzsimmons, E. G., & Mays, J. C. (2023, October 20). Since When Does Eric Adams Speak Spanish, Yiddish and Mandarin? The New York Times. https://www.nytimes.com/2023/10/20/nyregion/ai-robocalls-eric-adams.html[↩]
- Cornejo, J. (2024, March 22). Quiller wins the 2024 Reed Award for Best New AI Tool! [Medium]. Quiller.Ai. https://medium.com/quiller-ai/quiller-wins-the-2024-reed-award-for-best-new-ai-tool-2ffca3cf8f9a[↩]
- Ray, S. (2023, December 18). Imran Khan—Pakistan’s Jailed Ex-Leader—Uses AI Deepfake To Address Online Election Rally. Forbes. https://www.forbes.com/sites/siladityaray/2023/12/18/imran-khan-pakistans-jailed-ex-leader-uses-ai-deepfake-to-address-online-election-rally/[↩]
- Seitz-Wald, A. (2024, February 26). Democratic operative admits to commissioning fake Biden robocall that used AI. NBC News. https://www.nbcnews.com/politics/2024-election/democratic-operative-admits-commissioning-fake-biden-robocall-used-ai-rcna140402; Ramer, H. (2024, February 26). Political consultant behind fake Biden robocalls says he was trying to highlight a need for AI rules. AP News. https://apnews.com/article/ai-robocall-biden-new-hampshire-primary-2024-f94aa2d7f835ccc3cc254a90cd481a99; Tolan, C., de Puy Kamp, M., & Lah, K. (2024, February 23). How a magician who has never voted found himself at the center of an AI political scandal. CNN. https://www.cnn.com/2024/02/23/politics/deepfake-robocall-magician-invs[↩]
- Ramer, H. (2024, February 26). Political consultant behind fake Biden robocalls says he was trying to highlight a need for AI rules. AP News. https://apnews.com/article/ai-robocall-biden-new-hampshire-primary-2024-f94aa2d7f835ccc3cc254a90cd481a99[↩]
- Dobber, T., Trilling, D., Helberger, N., & De Vreese, C. (2023). Effects of an issue-based microtargeting campaign: A small-scale field experiment in a multi-party setting. The Information Society, 39(1), 35–44. https://doi.org/10.1080/01972243.2022.2134240; Simchon, A., Edwards, M., & Lewandowsky, S. (2024). The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus, 3(2), pgae035. https://doi.org/10.1093/pnasnexus/pgae035; Tappin, B. M., Wittenberg, C., Hewitt, L. B., Berinsky, A. J., & Rand, D. G. (2023). Quantifying the potential persuasive returns to political microtargeting. Proceedings of the National Academy of Sciences, 120(25), e2216261120. https://doi.org/10.1073/pnas.2216261120[↩]
- Goldstein, J. A., Chao, J., Grossman, S., Stamos, A., & Tomz, M. (2024). How persuasive is AI-generated propaganda? PNAS Nexus, 3(2), pgae034. https://doi.org/10.1093/pnasnexus/pgae034[↩]
- Simchon, A., Edwards, M., & Lewandowsky, S. (2024). The persuasive effects of political microtargeting in the age of generative artificial intelligence. PNAS Nexus, 3(2), pgae035. https://doi.org/10.1093/pnasnexus/pgae035[↩]
- Kuo, R., & Marwick, A. (2021). Critical disinformation studies: History, power, and politics. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-76; Reddi, M., Kuo, R., & Kreiss, D. (2023). Identity propaganda: Racial narratives and disinformation. New Media & Society, 25(8), 2201–2218. https://doi.org/10.1177/14614448211029293[↩]
- Barrett, P. M., Hendrix, J., & Sims, J. G. (2021). Fueling the Fire: How Social Media Intensifies U.S. Political Polarization—And What Can Be Done About It. NYU Stern. https://bhr.stern.nyu.edu/polarization-report-page[↩]
- boyd, d., & Golebiewski, M. (2019). Data Voids. Data & Society. https://datasociety.net/library/data-voids/[↩]
- Riedl, M. J., Ozawa, J., Woolley, S. C., & Garimella, K. (2022). Talking Politics on WhatsApp: A Survey of Cuban, Indian, and Mexican American Diaspora Communities in the United States. University of Texas at Austin. https://mediaengagement.org/research/whatsapp-politics-cuban-indian-mexican-american-communities-in-the-united-states/[↩]
- Pennycook, G., Cannon, T. D., & Rand, D. G. (2018). Prior exposure increases perceived accuracy of fake news. Journal of Experimental Psychology: General, 147(12), 1865–1880. https://doi.org/10.1037/xge0000465; Vellani, V., Zheng, S., Ercelik, D., & Sharot, T. (2023). The illusory truth effect leads to the spread of misinformation. Cognition, 236, 105421. https://doi.org/10.1016/j.cognition.2023.105421; Woolley, S., & Joseff, K. (2020). Demand for Deceit: How the Way We Think Drives Disinformation (International Forum for Democratic Studies, pp. 1–39). National Endowment for Democracy. https://www.ned.org/demand-for-deceit-how-way-we-think-drives-disinformation-samuel-woolley-katie-joseff/; Zhou, C., Zhao, Q., & Lu, W. (2015). Impact of Repeated Exposures on Information Spreading in Social Networks. PLOS ONE, 10(10), e0140556. https://doi.org/10.1371/journal.pone.0140556[↩]
- Bond, S. (2023, May 8). People are trying to claim real videos are deepfakes. The courts are not amused. NPR. https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused; Chesney, B., & Citron, D. (2019). Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security. California Law Review, 107, 1753–1820. https://doi.org/10.15779/Z38RV0D15J; Goldstein, J. A., & Lohn, A. (2024, January 23). Deepfakes, Elections, and Shrinking the Liar’s Dividend. Brennan Center for Justice. https://www.brennancenter.org/our-work/research-reports/deepfakes-elections-and-shrinking-liars-dividend[↩]
- Brumfiel, G. (2023, December 14). Israel is using an AI system to find targets in Gaza. Experts say it’s just the start. NPR. https://www.npr.org/2023/12/14/1218643254/israel-is-using-an-ai-system-to-find-targets-in-gaza-experts-say-its-just-the-st[↩]
- Roth, E. (2023, December 18). New York City council member-elect used AI to answer questions. The Verge. https://www.theverge.com/2023/12/18/24006544/new-york-city-councilwoman-elect-susan-zhuang-ai-response[↩]
- Deans, D. (2023, June 21). ChatGPT: Welsh politician uses AI chatbot to write speech. BBC. https://www.bbc.com/news/uk-wales-politics-65976541; Truitt, B. (2023, January 26). Rep. Jake Auchincloss uses ChatGPT artificial intelligence to write House speech. CBS Boston. https://www.cbsnews.com/boston/news/chatgpt-artificial-intelligence-change-future-of-education-congressman-jake-auchincloss-house-speech/; Van Buskirk, C. (2023, January 26). Mass. Lawmaker uses ChatGPT to help write legislation limiting the program. MassLive. https://www.masslive.com/politics/2023/01/mass-lawmaker-uses-chatgpt-to-help-write-legislation-limiting-the-program.html[↩]
- Heath, R. (2024, February 14). Exclusive: AI legislation spikes across U.S. states to combat deepfakes. Axios. https://www.axios.com/2024/02/14/ai-bills-state-legislatures-deepfakes-bias-discrimination[↩]
- McCrosky, J. (2024, March 28). Who Wrote That? Evaluating Tools to Detect AI-Generated Text. Mozilla Foundation. https://foundation.mozilla.org/en/blog/who-wrote-that-evaluating-tools-to-detect-ai-generated-text/[↩]
- Federal Election Commission. (n.d.). Public communications. FEC.Gov. Retrieved April 25, 2024, from https://www.fec.gov/press/resources-journalists/public-communications/[↩]
- Hurley, L. (2023, September 29). Supreme Court to weigh GOP-backed social media ‘censorship’ laws. NBC News. https://www.nbcnews.com/politics/supreme-court/supreme-court-weigh-republican-backed-social-media-censorship-laws-rcna66522[↩]
- Trauthig, I. K., & Woolley, S. C. (2022, March). Escaping the Mainstream? Pitfalls and Opportunities of Encrypted Messaging Apps and Diaspora Communities in the U.S. Center for Media Engagement. https://mediaengagement.org/research/encrypted-messaging-apps-and-diasporas/[↩]
- Howard, P. N., Woolley, S., & Calo, R. (2018). Algorithms, bots, and political communication in the US 2016 election: The challenge of automated political communication for election law and administration. Journal of Information Technology & Politics, 15(2), 81–93. https://doi.org/10.1080/19331681.2018.1448735[↩]
- Brennen, S. B., & Perault, M. (2023). The new political ad machine: Policy frameworks for political ads in an age of AI. Center on Technology Policy at the University of North Carolina at Chapel Hill; Irwin, K. (2024, March 25). Mozilla Pushes Feds to Embrace Truly Open, Transparent AI Models. PCMAG. https://www.pcmag.com/news/mozilla-pushes-feds-open-transparent-ai-models[↩]
- Fox, S. E., Shorey, S., Kang, E. Y., Montiel Valle, D., & Rodriguez, E. (2023). Patchwork: The Hidden, Human Labor of AI Integration within Essential Work. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW1), 81:1- 81:20. https://doi.org/10.1145/3579514[↩]