Digital Deep Fakes
Why are they deep?
What are they faking?
What does it mean for the rest of us?
Co-authored by William Marks and Brenda Leong
Introduction
Nicolas Cage played Lois Lane in a Superman film. Nancy Pelosi drunkenly slurred her words on stage. Barack Obama claimed Killmonger, the villain in Black Panther, was right. And Mark Zuckerberg told the world that whoever controlled data would control the future. Or did they?
These events seem unlikely, but click the hyperlinks; they were all recorded on video—they must be real! Or so it seems. There have long been many varieties of video manipulation, but those processes were generally expensive, time-consuming, and imperfect. That is no longer the case. Professional tools are increasingly effective and affordable, and open-source code that even relative amateurs can use will soon be available to produce believable digital video forgeries.
Already impressive, these new tools are improving quickly. It may soon be nearly impossible for average viewers, and challenging even for technical experts, to distinguish the authentic from the digitally faked, a circumstance described as having “the potential to disrupt every facet of our society.” From the mass market of news and viral memes to exchanges between individuals or businesses, bad actors will be able to create fake videos (and other media) with the intent to harm personal, political, economic, and social rivals. When viewers can no longer believe what their own eyes see, trust in the news will decline. This may be exacerbated by what Robert Chesney and Danielle Citron refer to as the liar’s dividend: the benefit to unscrupulous people from denouncing authentic events as forgeries, creating enough doubt or confusion to cause the same impact or harm as if the events were fake, while minimizing any consequences to themselves.
The media has recently labeled these manipulated videos of people “deepfakes,” a portmanteau of “deep learning” and “fake,” on the assumption that AI-based software is behind them all. But the technology behind video manipulation is not all based on deep learning (or any form of AI), and what gets lumped together as deepfakes actually differs depending on the particular technology used. So while the example videos above were all doctored in some way, they were not all altered with the same technological tools, and the risks they pose, particularly how easily they can be identified as fake, may vary.
First (and Still), There Were “Cheapfakes”
Video manipulation is not new. The Nancy Pelosi video, for example, uses no machine learning techniques, but rather editing techniques that have been around quite a while. It is what is now being called a “cheapfake.” Rather than requiring a tech-savvy troll equipped with state-of-the-art artificial intelligence, the result was achieved through more traditional means: running the original footage at 75 percent speed, with a modified pitch to correct for the resulting deepening of her voice. Because live (unedited) video of this type of event may also exist, it is generally not difficult for viewers to quickly check and identify this as fake. However, if Pelosi were a less famous person, and finding an unedited video record were more difficult, this type of fake could still confuse viewers, be harder to validate, and cause even more fallout for the person being misrepresented. Their simplicity does not render them harmless. As we have seen, even falsifiable fake news stories can mislead people and influence their views. But the editing process is likely to be discernible from the video file by those with the capacity to review it, and thus these types of fakes can be publicly identified.
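For readers curious about the mechanics, the following is a minimal sketch of that slow-down-and-repitch edit, written in Python with the open-source librosa and soundfile libraries. The filenames, the 0.75 speed factor applied to a pre-extracted audio track, and the overall workflow are assumptions for illustration only; they are not the actual tools or steps used on the Pelosi clip.

```python
# Hypothetical sketch of a "cheapfake" slowdown, not the actual edit workflow.
import numpy as np
import librosa
import soundfile as sf

slow_factor = 0.75

# Load the (already extracted) speech audio at its native sample rate.
y, sr = librosa.load("original_speech.wav", sr=None)

# Step 1 - naive slowdown: resample so that, played back at the original
# rate, the clip lasts 1/0.75 as long. This also deepens the voice.
slowed = librosa.resample(y, orig_sr=sr, target_sr=int(sr / slow_factor))

# Step 2 - pitch "correction": shift the deepened voice back up by the
# amount the slowdown lowered it (about +5 semitones for 0.75x speed).
n_steps = -12 * np.log2(slow_factor)
corrected = librosa.effects.pitch_shift(slowed, sr=sr, n_steps=n_steps)

# (librosa.effects.time_stretch(y, rate=slow_factor) achieves a similar
# slowed-but-pitch-preserved result in a single step.)
sf.write("slowed_speech.wav", corrected, sr)
```

The point of the sketch is how little is required: a few lines of freely available signal-processing code, no machine learning involved.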
When Machines Learn, the Fakes Get Smarter
Two more recent types of machine learning-based video editing are causing greater concern. One is what is accurately called a “deepfake,” and the other, even newer process, is the deep video portrait (DVP). Because the processes are similar, and the outcomes and risks aligned, most media coverage will likely treat them all as deepfakes moving forward.
Technically, the term deepfake refers to what is essentially a “faceswap,” wherein the editor pastes someone’s face like a mask over the original person’s face in an existing source video. Deepfakes generally attribute one person’s actions and words to someone else. The term was coined in 2017 by a Reddit user who used the technology to make believable fake celebrity porn videos. That is, the editor took a selected clip of a pornographic video, and then needed some amount of actual video of the celebrity to be defamed, which the AI software could use to “learn” that celebrity’s features and movements well enough to place his or her face over the original porn actor’s. Thus, it appeared the celebrity was the person in the porn film. Here, John Oliver’s face is placed over Jimmy Fallon’s as though Oliver were the one dancing and telling jokes.
In these examples, the original videos may be fairly easily accessible to demonstrate that the altered versions were faked. Alternatively, observing other aspects of the video (such as height or other physical characteristics) may make it clear the substituted face is not the original person in the video. The technology is not seamless—close observation shows that the “fit” of Oliver’s face isn’t perfect, particularly when he turns to the side. And in some cases, the editing may be discernible upon examination of the altered file. However, quality is improving steadily and this sort of substitution may be harder to visually identify in the future.
These extremely realistic deepfakes are the result of a powerful machine learning technique known as the “generative adversarial network” (GAN), an architecture built on deep neural networks.
The other process – creating Deep Video Portraits (DVPs) – is even newer, more powerful, and potentially more dangerous. DVPs are also created with GANs and are likewise commonly referred to as deepfakes. But whereas actual deepfakes let someone place a mask of another person’s face onto an existing video clip, DVPs let the creator digitally control the movements and features of an already-existing face, essentially turning that person into a puppet in a new video recording. As demonstrated here, using a relatively short amount of existing video of the target (the person to be faked), the program allows a source actor to move his head and mouth, talk, and blink so that it seamlessly appears that the targeted person is making exactly those movements and expressions, controlled to say or express exactly what the editor desires.
In a video created in this manner, Jordan Peele digitally manipulated Obama’s face to speak words and make facial expressions that Obama never did. After making this fake video with FakeApp (first released in 2018), Peele edited the file further with Adobe After Effects to create a particularly convincing altered performance.
Since the DVP process creates a new video file directly (rather than manipulating an existing file), there is no digital editing trail by which to technologically identify changes, and there is no original video to find and compare it against. AI systems are being developed that may be able to detect DVPs based on inconsistencies of movement and other performance discrepancies, but whether they can keep up with the improving quality of the faked imagery remains to be seen.
“When Machines Compete”: How GANs Work
GANs are a relatively recent development in neural net technology, first proposed by Ian Goodfellow in a 2014 paper. GANs allow machine learning-based systems to be “creative.” In addition to developing deepfake videos sophisticated enough to fool both humans and machines, these programs can be written to make audio recordings, paint portraits that sell for hundreds of thousands of dollars, and write fiction.
GANs work by having two neural networks compete directly with each other.¹ The first (the “generator”) is tasked with creating fake data – in this case, a video – based on a training set of real data (video, audio, or text from existing files or recordings). The program is trained to emulate the data in the real files: learning what human faces look like; how people move their heads, lips, and eyebrows when they talk; and what sounds are made in speech.
The second neural net (the “adversary”), a program also trained on the real video data, uses its learned analysis of how people move and speak to try to distinguish AI-generated video from the real thing—that is, the job of the adversary is to spot the fakes created by the generator. The more original video data it is fed, the better this adversarial network becomes at catching the fake outputs. But concurrently, the generator uses the experience of being caught to improve its fakes. This drives an escalating cycle of improvement, with each algorithm getting better and better until the generator can consistently create outputs so believable that the adversary cannot distinguish them from real footage.
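To make the loop concrete, here is a minimal sketch of that adversarial training process in PyTorch. The toy one-dimensional “real” data, network sizes, and hyperparameters are placeholders chosen for brevity; real deepfake systems train much larger convolutional networks on image frames.

```python
# Minimal GAN training loop sketch (toy data stands in for video frames).
import torch
import torch.nn as nn

# The "generator" maps random noise to fake samples.
generator = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
# The "adversary" (discriminator) scores samples as real (1) or fake (0).
adversary = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(adversary.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(5000):
    # "Real" training data: samples from a fixed distribution, standing
    # in for frames of authentic video.
    real = torch.randn(64, 1) * 2.0 + 5.0
    noise = torch.randn(64, 16)
    fake = generator(noise)

    # 1) Train the adversary to tell real from fake.
    opt_d.zero_grad()
    d_loss = (loss_fn(adversary(real), torch.ones(64, 1))
              + loss_fn(adversary(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the adversary: its loss is low
    #    when the adversary labels its fakes as "real."
    opt_g.zero_grad()
    g_loss = loss_fn(adversary(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The two optimization steps are the “competition” described above: each network improves only by exploiting the other’s current weaknesses.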
Identifying Fakes: “Technically,” It’s A Problem
As a society, we have long known that photos can be altered, even when they look real. And some are fairly obvious fakes to all but the most willfully ignorant, simply based on their level of outlandishness. Examples that come to mind include edited images of George Bush holding a book upside-down, Vladimir Putin riding a bear, and President Trump playing with plastic dinosaurs. But many fake images are more subtle, and as we have seen in recent years with the rise of “fake news” generally, the false can often mislead the unsuspecting or the less discerning. People already accept outright lies as true, lies generated by human writers simply because fabrications get clicks. What will happen when people are urged to routinely and critically question even what appears to be unedited footage?
How will we deal with the problem when average users can generate realistic-looking images, videos, and stories with nothing more than an easily accessible program that requires minimal input? Potential concerns include an abundance of fake nude photos, admissions of treason or fraud, and fabricated news stories all across the internet and social media. The private sector, government, and individuals will all need to react.²
A skeptical eye may be able to identify many current deepfakes as forgeries, but this may be a temporary comfort. While researchers are applying the same challenge-and-improve process to systems designed to detect artificial files, the contest between creators and detectors is uneven, at least partly because of a disparity of attention and research. The ratio of people working on the creation side to those working on detection may be as high as 100 to 1, although a future breakthrough could quickly tip the balance. And just as the generator learns what gets it caught by the discriminator neural net, creators learn from their mistakes. For example, when some detection technology relied on the fact that deepfakes were not able to blink naturally, creators quickly improved the technology to incorporate smooth blinking.
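As an illustration of that blink-based detection idea, the sketch below computes the widely used “eye aspect ratio” for each video frame and counts blinks, flagging clips where a talking head never seems to close its eyes. It assumes per-frame eye landmarks have already been extracted by a separate face-landmark detector, and the threshold and blink-rate figures are rough, illustrative values.

```python
# Hypothetical blink-counting sketch; landmark extraction is assumed done.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks around one eye."""
    # Vertical distances between upper and lower eyelid landmarks.
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    # Horizontal distance between the eye corners.
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ears, closed_thresh=0.2):
    """ears: one eye-aspect-ratio value per video frame."""
    blinks, closed = 0, False
    for ear in ears:
        if ear < closed_thresh and not closed:
            blinks += 1
            closed = True
        elif ear >= closed_thresh:
            closed = False
    return blinks

# A talking head that never blinks over 30+ seconds is suspicious:
# people at rest blink roughly 15-20 times per minute.
```

As the article notes, this kind of cue is fragile: once creators trained their models on footage that includes blinking, the signal largely disappeared.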
Even once fakes are identified, technical controls are limited. Reddit shut down r/deepfakes, where the initial deepfake pornographic content was shared. YouTube removed the Pelosi video, and Facebook notifies users who re-post it that its veracity has been questioned. But these actions are complicated and do not scale well. Trying to automate this sort of monitoring leads to problems such as deleting historical documentation of Nazi Germany while trying to suppress the imagery of white supremacists. It has also prompted questions about whether platforms are now fact-checking art. After all, the deepfake of Zuckerberg was created by an artist.
The developers of these technologies will also have to wrestle with the ethical implications of their decisions, as use cases run the gamut from useful and creative applications to those that are deeply concerning. OpenAI, a group that actively promotes openness and cooperation in AI research, designed a fake-text generator it judged so good that the risks it posed made it too dangerous to release. The program was designed to flesh out a full story or news article based on just a short opening phrase. However, despite the group’s central policy of supporting open-source code, it opted not to release the full model of the system, GPT-2, “due to […] concerns about malicious applications of the technology,” given how easily it could be used for social engineering attacks, to impersonate people, or to create fake news and content.
We’ve seen how rumors and fake news can crash stock prices, raise ethnic tensions, or lead a man to burst into a pizza shop with an assault rifle. Yet even when the true accounts are available to distinguish fake from real, problems emerge. The implications and challenges posed by such content-generation systems are significant. Technological fixes alone will be insufficient to combat undesired impacts.
Identifying Solutions: Political Will
Any proposed solutions are likely to include at least some degree of regulatory intervention. Specific to the U.S. legislative context, one proposal for dealing with the potential proliferation of fake videos, news, and photos is to amend Section 230 of the Communications Decency Act. That law currently establishes that ISPs and many online services are not responsible for content posted by users. Free speech groups maintain that if companies were responsible for user-posted content, they would be forced to block or strictly censor at mass scale, with a consequent chilling effect on internet free speech.
Danielle Citron and Benjamin Wittes have proposed amending Section 230 to hold companies liable for failing to take reasonable steps to prevent or address unlawful uses of their services. Members of Congress and state governments have proposed bills to criminalize the ‘malicious’ creation and distribution of deepfakes, and to prohibit creating videos, photos, and audio of someone without their consent. These proposals presuppose that it remains possible to reliably identify synthetic or manipulated media. Increasing liability for something platforms literally cannot do will not solve the problem, although technical breakthroughs remain possible.
The feasibility of finding practical ways to identify and enforce such requirements is questionable, and there would be inevitable First Amendment challenges to resolve. These questions only expand when considering global regulatory implications.
Identifying Solutions: Social Norms
An information flow in which it is difficult or even impossible to distinguish what is real from what is not is a dangerous one. Forgeries may be accepted as true, and truth can be undermined by doubt. In 1994, Johnson and Seifert called this the “continued influence effect.” When a fake is later proven false (the Protocols of the Elders of Zion), or a truth is doubted even when clearly proven correct (scientific progress such as the history of women’s physiology, or the recent anti-vax movement based on a retracted study suggesting a connection between vaccines and autism), the initial negative or positive associations remain, and can have significant, lingering effects.
Democracy depends on an informed electorate. Encouraging the population at large to think critically about the source and content validity of news and stories they hear is essential. As stated in a 2018 New York Times Op-ed, “Democracy assumes that its citizens share the same reality.” Or as Daniel Patrick Moynihan once put it, “You are entitled to your own opinion. You are not entitled to your own facts.” It is unhealthy for a democracy to exist in which facts are doubted and “alternative facts” are accepted. This is a condition that U.S. society is already grappling with, and the introduction of false information via deepfakes will only exacerbate the problem.
Some people may align themselves with sources they deem reliable, while others may reject the pursuit of truth as a meaningful endeavor altogether. But many will want to proactively ensure they are receiving comprehensive and accurate reporting of the world around them. If unable to confidently determine the truth for themselves, these people may seek an arbiter of some kind, looking to fact-checking applications, reliable media companies, or public and non-profit agencies to supply, verify, or validate information. It may be that more or different organizations are needed to formally fill this role, to objectively and disinterestedly provide news “about” the news.
People do adapt. New technologies, like film in the late 1800s or the proliferation of Photoshop in the late 20th century, force people to recognize and react to new realities. And technology certainly will play a role, as companies seek to leverage AI and other tools to identify fakes. But we are in the midst of a steep transition, in which technological capability is outpacing counter-strategies. As Sandra Wachter of the Oxford Internet Institute reminds us, while this problem is not new, the rate at which the technology is developing is challenging our ability to adapt quickly enough.
We need to confront the technical and political issues arising from computer-generated fake videos and media. A first step is increasing public awareness that these technologies exist, that they are getting better, and that people must assume some responsibility for critically analyzing the information they consume.
¹ This paper addresses the use of GANs to generate fake videos specifically, but GANs have many other applications and use cases in machine learning generally, including advancing music, recreating voices for those who cannot speak, addressing various medical conditions, and other simulated or synthetic applications that, while “artificial,” are not designed with the intent to deceive.
² While detection is one clear, and probably necessary, arm of research, there are multiple approaches to meeting this threat, including ways to track original files, establish provenance, and otherwise validate certain files. These methods also raise questions of risk, however, for anonymous users such as whistleblowers or civil rights activists, so all options include challenges that preclude “easy” fixes.
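As one simple illustration of the provenance approach mentioned in this note, the sketch below records a cryptographic fingerprint of a file so that later copies can be checked for alteration. The filenames are hypothetical, and a real provenance system would also require signing, trusted timestamps, and secure capture hardware.

```python
# Hypothetical file-fingerprint sketch for checking whether a copy was altered.
import hashlib

def fingerprint(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

original = fingerprint("captured_video.mp4")
later_copy = fingerprint("downloaded_copy.mp4")
print("unmodified" if original == later_copy else "file has been altered")
```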