How New Tools For Faking Audio And Video Could Impact The 2020 Elections

Seven scenarios — from faked scandalous audio to voter intimidation to imagined journalistic corruption — show the sorts of misinformation that could be coming.

This article is part of TPM Cafe, TPM’s home for opinion and news analysis. It first appeared at Harvard’s NiemanLab, and was used with permission of the authors. 


New technologies used to produce deepfakes are rapidly advancing and becoming more accessible, allowing users to make compelling video and audio clips of individuals doing and saying things they never did or said. Users can, for instance, synthesize an individual’s voice, swap one person’s face onto another person’s body in a video, or alter a video interviewee’s words simply by rewriting a transcript. Recorded audiovisual media is becoming more and more malleable, making it almost as easy to edit as text.

The technology offers a host of potential benefits in entertainment and education, from multi-lingual advertising campaigns to museums bringing dead artists back to life. But it can also challenge aural and visual authenticity and enable the production of disinformation by bad actors. Deepfakes have the potential to wreak havoc in contexts such as news, where audio and video are treated as a form of evidence that something actually happened. So-called “cheapfakes,” such as the widely circulated clip of House Speaker Nancy Pelosi, have already demonstrated the potential for low-tech manipulated video to find a ready audience. The more advanced technology creates a whole new level of speed, scale, and potential for personalization of such disinformation.

The goal of this article is to stimulate reflection on the ethics and governance of these emerging technologies. Specifically, we’re focused on the use of these technologies in the context of the 2020 U.S. election and seek to encourage debate about potential responses by various stakeholders. What should social media platforms, journalists, technology developers, and policymakers do to ensure that the outcomes of democratic processes aren’t negatively impacted by deepfakes?

To do this, we have developed a set of scenarios that describe an array of possible uses (arguably, misuses or unethical uses) of deepfake technology in the 2020 elections. These speculative fictions explore how current state-of-the-art technology could be deployed by actors with various motivations to impact election outcomes. The scenarios describe a rich and complex constellation of how the technology might interact with human behavior. The situations generate a number of ethical issues and point to dimensions of elections in which norms, policy, regulation, or technical intervention might be needed or helpful to protect the integrity of the 2020 election.

The set of scenarios purposely includes a variety of different actors (candidates or campaign staffers, external entities like PACs or foreign governments), motivations for those actors (to support a candidate, to hurt a competitor, to undermine the process), modalities of media (audio, video, image), phases (early vs. late), channels for distribution (social media, podcasts, chat apps), and mechanisms for influencing voters (discrediting a candidate’s reputation by association, exaggerating a candidate’s views, suggesting a candidate engaged in corruption, providing evidence of a candidate’s hypocrisy, inciting a campaign’s base, intimidating voters, undermining or attacking the election process, and more).

From an ethical perspective, all of the situations described in the scenarios are problematic insofar as they involve deception. However, they vary in the actors that produce and/or distribute the deepfake, the kind of damage they attempt to do, and how the deception can be counteracted. One overarching question is how these variations affect the ethical nature of the situation.

We developed the scenarios with an eye to making them plausible — describing what you might reasonably believe could happen — rather than merely possible. The challenge for you, then, is to consider what might make the plausible less probable. What can be done now — in the way of establishing norms, rules, policies, etc. — to avoid the worst outcomes (or at least make them less likely)? The scenarios and brief reflections on each are below. We’d love to get your feedback as to whether we have achieved our goals.

Scenario No. 1

A small veterans’ organization would like to see Seth Moulton win the Democratic primary because, although he is not the only candidate with military experience, he is the only one with significant combat experience and the only one making veterans’ issues a central component of his campaign. The group becomes a political action committee (PAC) and raises funds to make a promotional video for Moulton. The video consists of a combination of video clips with a voiceover that valorizes his bravery. One of the clips is a synthesized depiction of an incident in Iraq when Moulton heroically saved the lives of several members of his platoon. The video is posted on YouTube without any indication that one portion is synthesized. Thousands view the video; hundreds make comments. From the comments, it is apparent that some viewers believe the video is real footage taken by a reporter present at the event. Other comments include complaints from soldiers in Moulton’s platoon who claim that the depiction is an exaggeration of what happened. Because of the comments the video has mixed effects: Some are convinced that Moulton is a true hero, a quality that they would like in a president; others, including many cable news reporters, focus on the negative comments and the deceptiveness of the video.

This scenario illustrates a situation in which a deepfake video is used by a PAC motivated to promote a candidate by valorizing his military record. It’s used early in the campaign, so there is time for reaction. One of the most salient ethical questions posed by this scenario is whether (and ultimately how) the synthesized nature of a video should be disclosed. The scenario is complicated on this matter because only one component of the video is synthesized. Another, perhaps more subtle issue has to do with the extent to which exaggerations of a candidate’s record are okay, particularly when the candidate has not consented. Exaggeration of a candidate’s qualifications has always been an issue in elections, but deepfakes expand the possibilities for doing so. 


Scenario No. 2

Three days before the 2020 general election, the race between Donald Trump and Elizabeth Warren is neck and neck. The outcome may largely come down to turnout. Trump advisors develop a strategy to get out the vote among his base: disaffected white voters. Campaign staff synthesize a deepfake video of Warren in a supposed closed-door meeting with a few members of the Congressional Black Caucus and post it on Twitter and YouTube. In the cellphone-quality video, she’s heard saying disparaging and hateful things about white men in the United States. In a matter of hours, Warren and other people falsely depicted in the video publicly proclaim that it is a fake. CNN and MSNBC quickly spread the word that the video is likely a fake. Nevertheless, it spreads virally across social media, propelled and further amplified by troll and bot accounts. The video enrages Trump’s base, many of whom are unaware of the debunks or simply don’t care, and spurs them to the polls for record turnout on the right.

This scenario illustrates a situation in which a deepfake is used by campaign staff to hurt an opposing candidate by attributing to her extreme views that will incite the campaign’s base. Platforms play different roles here, as distributors and amplifiers not only of the lie but also of its debunks. This scenario depicts the generic concern about the use of deepfakes — that is, that they will be used by one candidate (and/or their supporters) to distort and misrepresent a competitor. In addition, misrepresenting a competitor can energize a candidate’s supporters. The scenario also points to the challenge of countering the effects of the deception, especially when it’s used late in a campaign. In the past, candidates have mischaracterized their competitors, but deepfakes can give ostensible authenticity and credence to such mischaracterizations.


Scenario No. 3

The general election has come down to Pete Buttigieg and Donald Trump. While Trump has enjoyed strong evangelical support, Buttigieg has had difficulty garnering support from evangelicals. He was raised Catholic, is now a practicing Episcopalian, and has emphasized his Christian faith in his campaign. In particular, he has repeatedly mentioned that his marriage (to a man) has made him a better Christian. Six weeks before the election, internal G.O.P. polls begin to show that a small but not insignificant number of evangelicals are moving toward Buttigieg. Ten days before the election, a deepfake video appears featuring several testimonials by men who claim to have had sex with Mayor Pete during his marriage. A conservative PAC made the video and leaked it to a few evangelical groups via posts to their closed, members-only Facebook groups using sockpuppet (i.e., fake) accounts. “Wow, look what I found — how can this guy say he’s Christian?” reads one of the posts. Several religious leaders from evangelical and other churches denounce Buttigieg to their congregations and followers.

As with the others, this scenario illustrates how a deepfake might be used to harm a candidate by misrepresenting him. However, in this case, the responsible party is unknown, the strategy is to paint the candidate as a hypocrite, and instead of showing the candidate saying or doing something, the deepfake has others making statements as if they were real testimonials about behavior for which there would be no other witnesses. The distribution of the deepfake is limited to a small number of people (members of a closed group), illustrating that deepfakes can have a significant impact even when the initial scale of distribution is relatively limited. Indeed, with this mechanism of distribution, the deepfake’s effects are harder to counteract because they are not publicly observable.


Scenario No. 4

Two weeks before the 2020 election, African-American voters throughout Iowa, Michigan, and Wisconsin receive emails spoofed to appear to come from their local Democratic Party. The email addresses each recipient by name and tells them the location of their polling place. It also includes a personalized embedded video in which Rev. Al Sharpton, addressing them by name, says that there have been attempts to undermine the election with false information about polling locations in their state. He encourages voters to use the information provided in the email when they go to vote. But the video has been synthesized and the location information is bogus. On election day, some voters are turned away because they have shown up at the wrong location for their voter registration. The source of the deepfake is unknown until after the election, when it’s traced back to an alt-right group that would like to see Trump reelected.

This scenario is unlike most of the others in that the deepfake is not used to attack or support a particular candidate but to undermine the democratic process by interfering with individuals’ autonomy in exercising their right to vote. It shows how deepfakes could be used to subvert the integrity of elections. Like the other scenarios, use of the deepfake involves deception, but not deception about candidates — rather, deception about what an influential and recognized figure is claiming. The deepfake exploits the figure’s likeness to bolster trust in deceptive information, and it raises complex questions about consent, property rights, and publicity rights around the use of an individual’s likeness, even if they are a public figure.


Scenario No. 5

Things are heating up and there’s a lot of competition from a wide group of contenders for the Democratic nomination. Bernie Sanders is campaigning heavily in Iowa and Nevada, two of the earliest nominating contests and races he knows he needs to win. A competitor to Sanders decides to leak synthesized audio in which Sanders, at a campaign stop in Nevada, is heard saying disparaging things about Iowans to a small group of Mexican-American voters, calling them “hillbillies and rednecks.” The audio clip gets uploaded onto Tumblr by an anonymous blogger who claims that he wants the world to know about “the real Bernie Sanders” but is worried about retaliation against himself and his family. The clip is picked up and played on several podcasts with followings on the left and right. The Sanders campaign is able to produce a complete recording of the purported event in Nevada that proves he never said those things. Major media outlets air debunkings of the clip, but it continues to circulate online and on some smaller podcasts.

In this scenario, an audiofake is anonymously produced and distributed to hurt a candidate by suggesting that the candidate is not who he purports to be, in the sense that he has said demeaning things about groups that he publicly supports. The case might be thought of as a simple case of using an audiofake to misrepresent a candidate; however, the scenario involves anonymity and raises a question about when anonymity is legitimate and how anonymous information should be treated. The case might also be seen in a somewhat positive light, in that a real audio recording is used to counter the audiofake. However, it’s difficult to say that the impact of the audiofake can be effectively counteracted by the real recording. How will listeners know which is real?


Scenario No. 6

It’s a tight race. Turnout in some key swing districts could make all the difference. Two days before the election, a foreign power unleashes a campaign to suppress certain voter demographics in battleground states. They send spoofed text messages with fake synthesized images to targeted individuals. The message says that if they vote on Tuesday, the attached image depicting the individual participating in a sex orgy with minors will be released publicly online and sent to their friends and family. These images are synthesized using a database of incriminating background scenes with a face swapped using photos of the targeted person scraped from their Facebook page or Instagram account. They are convincing enough to intimidate and coerce some people into not voting. A few individuals contact the police or their cellphone provider, but the spoofed messages continue until election day. Since no one knows how many voters may have been affected, the incident undermines public perception of the legitimacy of the election.

In this scenario, we see a foreign actor interfering in an election campaign in a powerful way. The deepfake is not aimed at hurting or supporting a candidate; instead, it is superficially aimed at voter suppression — superficial because the number of voters impacted would probably be quite small. But once the public becomes aware of this activity, the broader effect would be to cast doubt on the legitimacy of the election by suggesting voter suppression while making it difficult to understand its extent. In addition to eroding trust in election outcomes, this scenario shows how manipulated visual images can threaten individuals and do so in private communication channels away from public observation. Like all the other scenarios, this one involves deception (because a synthesized image falsely represents behavior), but this one also illustrates how deepfakes can be used to coerce and intimidate.


Scenario No. 7

It’s the morning after the first presidential debate between Joe Biden and Donald Trump. Biden is basking in positive media attention after being judged the clear winner when CNN suddenly airs an audio clip that purports to capture an exchange between the debate’s moderator and a Biden campaign official before the debate began. In the clip, engineered by the Trump campaign to discredit Biden’s performance, the moderator is heard asking whether “your candidate has any questions about the questions I sent yesterday?” Pundits interpret it as evidence of an attempt to fix the debate by sharing questions with Biden’s staff beforehand. The moderator, a respected journalist, firmly denies the account, but not before the #riggeddebates hashtag starts trending. Trump amplifies the idea that the debates are rigged to his Twitter following and refuses to participate in subsequent debates, which are then canceled.

This scenario illustrates how faked audio can be used simultaneously to hurt a competitor and undermine the integrity of (a component of) the election process, a public debate. In this case, the attack was initiated by campaign staff suggesting corruption in the organization of the debate and participation in that corruption on the part of an opposing candidate. Historically, for such accusations to be considered credible, accusers have had to produce some sort of evidence to support their claims. Deepfakes of the kind described in this scenario enable accusers to fabricate evidence that looks credible and can be widely distributed and amplified through social media. This expands the power of a false accusation, both by making it seemingly real and by offering quick and wide distribution.


Nicholas Diakopoulos is an assistant professor of communication studies at Northwestern University.

Deborah Johnson is professor emeritus of applied ethics at the University of Virginia.
