Will A Storm Of AI-Generated Misinfo Flood The 2024 Election? A Few Dems Seek To Get Ahead Of It

TPM illustration/Getty Images

China drops bombs on Taiwan. Wall Street buildings are boarded up amid a free fall in financial markets. Thousands of migrants flood across the southern border unchecked. And police in tactical gear line the streets of San Francisco to combat a fentanyl-fueled crime wave. 

That’s the imagery featured in an artificial-intelligence-generated ad the Republican National Committee (RNC) giddily released shortly after President Joe Biden announced his 2024 reelection bid, supposedly depicting a dystopian future in which Biden has won a second term.

The ad served up the GOP’s usual dose of fearmongering — but this time backed by an extremely realistic, AI-created montage of some of Republicans’ favorite boogeymen springing to life in a 32-second video.

The RNC ad — which included a small disclaimer that read, “Built entirely by AI imagery” — offered an alarming glimpse into how the technology could be used in the upcoming election cycle. Experts warn the ad is only an early taste of the sweeping changes AI could bring to how our democratic system functions.

“For the first time, I would say that the enemies of democracy have the technology to go nuclear,” Oren Etzioni, the founding CEO of the Allen Institute for AI, told TPM. “I’m talking about influencing the electorate through misinformation and disinformation at completely unprecedented levels.”

Concerns over that ad were, in part, what prompted Rep. Yvette Clarke (D-NY) and some Senate Democrats to push for more oversight of these emerging technologies, and more transparency about the ways in which they are used.

In early May, Clarke introduced the REAL Political Ads Act, legislation that would expand current disclosure requirements by mandating that AI-generated content be identified in political ads.

The New York Democrat is particularly concerned about the spread of misinformation around elections, coupled with the fact that a growing number of people can deploy the powerful technology rapidly and at minimal cost.

“The political ramifications of generative AI could be extremely disruptive. It could become catastrophic, depending on what is depicted,” Clarke told TPM. 

Case in point: While it didn’t have an impact on an election, an AI video showing an explosion near the Pentagon went viral on Monday morning, causing panic and prompting a brief dip in the stock market.

“We need to be able to discern what is real and what is not,” Clarke said, adding that the fake Pentagon explosion was shared by verified accounts within minutes of it being posted online.

A companion bill to Clarke’s was introduced in the Senate last week by Sens. Michael Bennet (D-CO), Cory Booker (D-NJ) and Amy Klobuchar (D-MN).

Much of the concern lies in the fact that AI can be used to create false yet extremely realistic video and audio to mislead and confuse voters, experts say, much like the recent RNC ad and the photos of Trump getting arrested that went viral around the time of his New York indictment. As the technology advances, this kind of misleading material could be deployed on an ever-expanding scale.

“What if Elon Musk calls you on the phone and asks you to vote in a certain direction?” Etzioni theorized, emphasizing that without guardrails voters will be increasingly subject to attacks that aim to persuade them to vote a certain way or possibly not vote at all.

The mere existence of AI-generated content is already affecting how people consume information and whether they trust that what they’re absorbing is real.

“The truth is that because the effect of generative AI is to make people doubt whether or not anything they see is real, it’s in no one’s interest when it comes to a democracy,” Imran Ahmed, CEO of the Center for Countering Digital Hate, told TPM.

“The only place that leads us is anti-democratic,” he said. 

Congress has already shown some interest in AI, including a friendly hearing before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law with Sam Altman, the CEO of OpenAI, and other industry experts. But so far, Congress has largely stayed away from addressing the implications of AI for democracy — including for the upcoming 2024 election.

“It’s important that for our credibility as a democracy that we not leave ourselves open to any type of ploys that could ultimately cause harm, disrupt an election, build on the distrust that’s already out there given the political dynamics of previous elections,” Clarke told TPM.

There hasn’t been any public support for the bill from the GOP caucus — at least not yet. Some Republicans have, however, expressed concern about the topic. 


Sen. Josh Hawley (R-MO), for example, recently expressed interest in examining the issue, telling NPR that “the power of AI to influence elections is a huge concern.”

Clarke said she is hopeful her Republican colleagues will see that this is an issue that transcends any one party or candidate.

“This should be bipartisan,” she told TPM.  

“There’s a great case to be made that this is a double-edged sword. This is not something that can be relegated to one party,” she added. “We are all vulnerable to the use of AI-generated advertising. It can be disruptive whether you’re a Democrat or Republican.”

The Democratic sponsors of the Senate companion legislation also expressed optimism to TPM about attracting Republican interest. Rapid advancements in AI have opened up what Booker described to TPM as a “rare opportunity for bipartisan cooperation in the Senate.”

“It was clear at the Judiciary Committee’s recent hearing that my colleagues on both sides of the aisle understand the threat of AI-generated content in spreading misinformation, and I am hopeful that they will join this bill to modernize our disclosure laws and ensure transparency in our political ads,” Booker told TPM. 

Another sponsor of the bill echoed that sentiment: “Americans expect transparency and accountability in our electoral process — there’s no reason this legislation shouldn’t be bipartisan,” Bennet told TPM.

Ahmed, of the Center for Countering Digital Hate, agreed.

“Believe me, it is not just the Republicans who will be tempted to use it,” he said.

But as with much else in today’s divided Congress, Republicans joining Democrats in a good-faith push to address an emerging issue is increasingly rare. That’s become especially true of any legislation that touches on democracy and voting. And Clarke is certainly worried that the recently emboldened right wing in the House could block her bill’s path forward.

“There’s some folks who see the political discourse between the Democrats and Republicans as a war,” Clarke said. “And they may feel like they’re being disarmed if they in any way create some sort of guardrails or rules of the road — especially those who speak so vociferously about First Amendment rights.”

The goal of Clarke’s bill is — as she puts it — to make rules and regulations “that are both a carrot and a stick”: lawmakers need to create transparency for the American people so that they can’t be deceived through the use of technology, without curtailing First Amendment rights, she said.

“There’re just some folks who take things to the extremes here,” she added. “And I don’t know how influential they would be with some of the colleagues who really understand the implications and want to do something about it.”
