Sabrina Sanchez
May 10, 2023

As US election cycle heats up, candidates must be wary of using AI in campaign ads

Disinformation, ethics, bias and misleading voters are among the challenges candidates must be aware of as they enter the 2024 race.


The 2024 US presidential race has kicked off, and President Joe Biden, former President Donald Trump and Florida Governor Ron DeSantis are pumping up their ad investments in a bid to sway voters. 

As in years past, campaign strategies so far have focused on paid TV ads, social media marketing and email fundraising. But as technology evolves, enabling faster responses to current events, a new player is expected to enter the political arena this year: AI. 

Republicans have already started to incorporate AI into their campaign strategies. 

On April 25, after President Biden announced via an online video that he would seek a second term, the Republican National Committee (RNC) responded with an AI-generated ad depicting a dystopian society meant to represent the US under Biden’s leadership. 

How AI-generated ads will ultimately impact presidential campaigns remains to be seen. But according to Rick Acampora, global chief client officer at Assembly, the technology will surely lead to much faster responses to opposing campaigns. 

“AI accelerates and amplifies a lot of the things that are currently going on. Something that really struck me about the [RNC] video that was attacking Biden is that we moved from just talking about and reading about [a dystopian future] to seeing it. I think that is going to dial up fear as an approach a whole lot more,” he said. 

Lack of regulation foments disinformation

AI also carves a path for disinformation campaigns to become more prevalent, especially as tools like voice and image generators become more sophisticated. 

While the RNC disclosed that its ad responding to Biden’s campaign launch was AI-generated, realistic images like those created by Midjourney or OpenAI’s tools will make it harder for the American public to distinguish reality from fiction, Acampora noted. 

Lawmakers and political groups are already sounding the alarm. 

Following the RNC’s response video, Rep. Yvette D. Clarke (D-N.Y.) introduced legislation on May 2 to require the disclosure of AI-generated content in political ads. 

When interviewed about the bill, she told The Washington Post the legislation is part of a push to “get the Congress going on addressing many of the challenges that we’re facing with AI.” 

Among those challenges are misinformation, bias, propaganda and privacy concerns, according to the Center for AI and Digital Policy (CAIDP), which filed a complaint with the FTC in March, shortly after the publication of an open letter calling for a pause on large generative AI experiments.

The complaint, which has since prompted conversations about the ethics of AI, points out potential threats from OpenAI’s GPT-4, The Verge reported. In it, the CAIDP lists the ways GPT-4 could be used to produce propaganda, malicious code, or unfair race and gender preferences based on coded stereotypes. It also calls out privacy failures within OpenAI’s interface, citing bugs that have exposed private information.

And as the presidential race kicks into high gear, uneasiness around AI will only grow, predicts Rick Fromberg, senior advisor at PR firm BerlinRosen. As a result, the Federal Communications Commission (FCC)’s rules around what is allowed in a political campaign will come into sharper focus, he argues. 

“Traditionally, the FCC has pretty much given a blanket for candidates to say whatever they want, whether true or untrue, deceptive or truthful, whatever. Whether or not that continues with the use of AI will be really interesting,” he said. “There are going to be hundreds of millions of dollars spent from third parties, and so AI that either manipulates a video or uses something that is controversial [could] add another factor that can be put into legal question.” 

It’s because of these very concerns that the American Association of Political Consultants (AAPC) is staying away from AI. On May 3, the AAPC’s Board of Directors issued a statement saying it “unanimously agreed to condemn the use of generative AI ‘deep fake’ content in political campaigns.” The organization also issued a policy statement to illustrate how AI fits within its Professional Code of Ethics framework and reiterated that any violations of the code will be enforced. 

“AAPC is proud of and committed to upholding our Code of Ethics and believed we needed to address this burgeoning technology and make it clear to our members that its use is a blatant violation of our existing Code of Ethics,” said AAPC President R. Rebecca Donatelli in the statement.

“AAPC will continue to protect members and voters who rely on straightforward political communications to make informed decisions while ensuring free speech, including the use of satire or parody, thrives within the $9 billion political advertising industry that our members represent,” VP Larry Huynh added. 

Pandora’s Box

Despite legitimate concerns about the implications of widespread AI access, Pandora’s box has already been opened, says Craig Kronenberger, founder and CEO of marketing and data company Stripe Reputation.

He notes that without stricter regulations, AI will continue to be used by the public and, most likely, by presidential candidates, who will prize speed and the ability to respond to opponents in real time.

But if presidential candidates over-index on using AI to create campaign ads, the quality of messaging will likely suffer, he argues. 

“The ability to produce content faster is going to be a big plus with AI — the generation of video content imagery, text based content, etc. The question is, is this content meaningful? Is it really going to work just because they can get it out there faster? Is it really going to drive the results?”

Regardless, Assembly’s Acampora agrees that speed will be a top priority: “It’s very much a ‘speed wins’ scenario. You're going to see a lot of Trump being very forward and first, very much on the attack. On the flip side, you're going to see a lot more of Biden being able to respond quicker and more precisely.” 

But if candidates are opting for speed over quality, the public should expect to see a lot more AI-driven disinformation this election cycle from third parties, Kronenberger warns. AI will also make it easier for countries like Russia and China to meddle in elections, he adds.

“Sadly, most of the platforms don't have the infrastructure to identify disinformation and stop it or properly fact check things. It's still a heavily manual process. So I think we're going to see disinformation on a scale we've never seen before,” he said. “We are already seeing them bubble up [based on internal data].” 

In the meantime, AI’s creative capabilities continue to show up across disciplines, and campaigns for what promises to be a fierce race for America’s next leader are no exception.

Source:
Campaign US
