How Hollywood Can (and Can’t) Fight Back Against Deepfake Videos (Guest Column)

Legislative fixes and contractual protections are meant to police the dissemination of misinformation from hyperrealistic (but fake) videos. A look at the proposals and consequences of a crackdown.

When Game of Thrones ended earlier this year, a video circulated online of Jon Snow apologizing to the show's fans for the way the last season ended. The video appeared to show the actor Kit Harington, dressed as Snow, apologizing for his lack of dialogue and the generally dissatisfying final season in a rousing speech. This was not Harington doing another satirical skit on Saturday Night Live, but instead a manipulated video in which footage from Game of Thrones was altered to make it seem like the actor was saying words he never said. The video is part of a recent spate of "deepfake" videos — viral videos that have been digitally doctored to make it seem like people are saying and doing things that they never actually said or did. The videos are made using artificial intelligence that can map the face of one person onto another, creating highly realistic, but fake, videos.

Several of these videos have gone viral recently, from the Game of Thrones joke speech to a PSA in which a deepfake version of Barack Obama warns about the dangers of deepfake videos. Some deepfakes are easy to spot. For example, in the Jon Snow video, his lips don't perfectly match the speech. But others, like the Obama video, are almost impossible to identify as fake without the use of detection technology. As a result, legislators around the country are sounding alarms about the potential for misuse and misinformation. This past month at DEF CON, the world's largest hacker conference, Democratic National Committee chair Tom Perez used a deepfake of himself to highlight the dangers of the technology to attendees.

These videos raise serious potential legal issues, including defamation, right of publicity violations and invasion of privacy. But these laws can fall short in curbing the proliferation of some deepfakes. And while legislators across the country are working to bridge the gaps where existing law doesn't reach, some of the proposed new laws could have unintended consequences, particularly for the entertainment industry.

In most states, existing defamation, right of publicity and invasion of privacy laws will not reach deepfakes that are non-defamatory and non-commercial. In California, for example, the right of publicity only prohibits commercial uses of one's likeness (e.g., using Harington's image to promote a new line of fur coats). And artistic works enjoy broad First Amendment protection that allows them to incorporate celebrities and public figures into stories ranging from parodies like SNL to semi-fictionalized docudramas.

Proving defamation is also a high bar, especially for celebrities — the most frequent targets of deepfakes.

Under existing defamation law, for example, tabloids regularly get away with running seemingly fake or heavily exaggerated stories about celebrities because it is difficult to prove that the publishers made false statements with "actual malice." Furthermore, to be actionable, the defamation must also result in actual injury. Thus, in the absence of provable malice and harmful effects, defamation law may not protect victims of deepfake videos, especially when the victim is a public figure. Additionally, because truth is an absolute defense to defamation, deepfakes raise some novel questions in defamation law. What if a deepfake shows President Donald Trump saying something that is literally true — or even something he actually wrote on Twitter — but the video is fake? Would a defamation claim be barred because the presentation is "substantially true"? Or could the president argue that despite the truth of the underlying statement, the video falsely portrays him as having spoken the statement on camera? Of course, because it would be so difficult to prove actual damages in some of these examples, it is unlikely that we will see many test cases by celebrities or leading politicians make their way through the courts.

Additionally, defamation plaintiffs typically must prove that a reasonable viewer would construe the false statement as true. Therefore, deepfakes that are obviously fake may not be prohibited under existing defamation law at all.

More nefarious uses of deepfakes raise serious concerns, however. Reputations can be ruined. Elections can be swayed. Diplomacy can be derailed. The possibilities are chilling. To address these concerns — and to fill potential gaps in the law — both federal and state lawmakers are considering legislation to regulate deepfake videos in various ways. California legislators have introduced several bills, including one sweeping proposal that would make it a misdemeanor to knowingly distribute deepfake videos with the intent to deceive the viewer. Other bills are more targeted. For example, one proposed law would criminalize the creation of non-consensual sexually explicit digital videos, protecting people from fake pornographic videos that depict them doing things they never did. SAG-AFTRA publicly supports that bill as necessary to prevent sexual abuse and to protect the commercial and personal lives of performers, who could otherwise be depicted in sex scenes they never consented to. A similar law was recently enacted in Virginia, where deepfakes involving nudity made without consent are now illegal. These laws target one of the most concerning aspects of deepfakes: deception aside, deepfake videos can feel invasive and exploitative even when they are clearly labeled as fake.

New York legislators have also proposed a law that would broadly prohibit using “a digital replica for purposes of trade in an expressive work” without the permission of the person. Under that law, it would be illegal to include a digital replica of Tom Cruise in a movie without his consent if it created “the reasonable impression” that he was actually performing — even if “Tom Cruise” were merely a character that a fictional protagonist bumped into at a fictional Oscar party. Unlike the California bills, this law would specifically exempt newscasts and artistic works that do not trick the viewer into thinking they are watching the real person.

For its part, Congress is also contemplating federal legislation to combat the perceived dangers of deepfakes. On June 13, the House Intelligence Committee held a hearing to discuss the growing threats to national security posed by highly realistic deepfake videos. In today's hyper-political world of Twitter, video clips and soundbites, it's easy to imagine the havoc that could be wreaked by a series of realistic deepfake videos portraying world leaders making threats or disparaging statements about other world leaders or countries. In an age where misleading words travel around the world at the speed of light, and the truth often takes days or weeks to catch up, the repercussions of deepfakes could be devastating. Federal legislators have already introduced two deepfake bills, but no votes have yet been taken.

These legislative solutions seem like a necessary, reasonable remedy to protect deepfake victims from harm and the public from misinformation, but if the wave of proposed statutes doesn't include free speech protections, TV and movie producers may pay a high price.

There are many possibilities for the creative use of AI in film, each with varying legal consequences and varying threats to actors, TV production and filmmaking. Creating an AI version of Cruise to play Ethan Hunt in the next Mission: Impossible movie would be a direct assault on working actors. But having a character cross paths with an AI-generated celebrity or famous person as part of the storyline in a fictional story (think Forrest Gump), or using AI to create a biopic of an aging or dead actor, musician, or athlete (e.g., Muhammad Ali, Bill Russell, Prince) seems to fall within the traditional boundaries of artistic freedom.

In recent years, both the Star Wars and Fast & Furious franchises used advanced technology to posthumously create scenes involving Carrie Fisher and Paul Walker, respectively, after the actors died unexpectedly. Without careful drafting, such scenes could run afoul of laws targeting deepfakes made with intent to deceive. Technically, the filmmakers may be deceiving at least some viewers into believing the actors were alive and able to perform those scenes.

What about the use of artificial intelligence generally during postproduction editing? Twenty-five years ago, footage of John F. Kennedy was manipulated to create the iconic Forrest Gump scene where Forrest — after drinking a dozen Dr. Peppers — tells the president "I gotta pee," and Kennedy jokes to a colleague, "I believe he said he had to go pee." Under proposed legislation, would that scene be illegal if the filmmakers used artificial intelligence to create the same effect? We already allow actors to meticulously portray real people in fictional works with extensive makeup and costuming, such as when Christian Bale transformed into an uncanny Dick Cheney for last year's Vice. Is a hyper-realistic performance by an actor and modern makeup artists that different from a deepfake? If we ban the use of deepfakes in fictional works, how far away are we from telling actors and makeup artists they are doing too good a job? Can the law resolve these ambiguities? Perhaps the answer is context. No matter how much Bale looked like Cheney, we all knew it was Bale underneath the makeup. The movie posters and credits say so. But with deepfakes, there may not be any movie poster or credits to clue the audience in.

To survive First Amendment challenges, broad deepfake legislation would almost certainly need to have robust exceptions. Given the malleability of fair-use defenses in other First Amendment contexts, lawmakers may want to specifically exempt certain kinds of videos from deepfake legislation in order to protect beneficial uses of AI. For example, would satirical deepfake videos be exempt? They are expressly exempted in a recent proposed bill in California.

Is a deepfake parody of Trump materially different from Alec Baldwin's well-honed impression? Under some versions of pending deepfake legislation, an AI-generated video would be illegal while a live impersonation would remain legal. What about nonfictional works that bring historical figures to life? Samsung recently announced technology that can animate historical photographs, albeit less realistically than deepfakes built from video footage. What if a historical documentary or biopic used AI to breathe life into old photographs of public figures who are still living? For example, Peter Jackson's World War I documentary They Shall Not Grow Old reanimated old footage with color, sound and dialogue. Such uses might be permissible under existing right of publicity laws but not under proposed deepfake and digital replica laws.

Regardless of what legislation is passed in the coming months and years, we can expect TV producers, filmmakers and performers to increasingly include provisions in their contracts that anticipate legislation restricting the use of AI technology. If something happens to an actor midway through filming, and the role can only be completed with the use of AI, filmmakers will want to be able to complete the movie without running afoul of deepfake laws. Conversely, actors will more carefully contemplate what rights they sign away when they agree to be digitally altered or represented through AI on a project. Artificial intelligence in this context has implications that both sides will need to continually evaluate as the technology and the legislation evolve.

Aside from legislative fixes and contractual protections, there is a growing chorus of industry observers who say the answer to high-tech deepfakes is high-tech deepfake detection. In other words, empowering social networks, businesses and the public to easily detect deepfakes and police the dissemination of misinformation.

As we continue to wonder how to solve the deepfake problem, one thing is almost certainly true: Deepfakes will only become more prevalent and more threatening. These videos will continue to threaten celebrities with embarrassment and exploitation. And politically motivated deepfakes will inevitably mislead the public and potentially impact the democratic process, including elections. Whether the answer is legal or technological, we need to brace for a new wave of increasingly advanced and accessible deepfake videos.

David Singer is a partner and Co-Chair of Jenner & Block’s Content, Media & Entertainment Practice. He has represented major motion picture studios, technology companies and broadcast and cable television networks in matters involving copyright, trade secret, trademark and right of publicity claims. Camila Connolly is an associate in the firm’s Litigation Department who handles cases involving entertainment and complex commercial issues. Both are based in the firm’s Los Angeles office.