On his Comedy Central show Key & Peele, Jordan Peele often played a spot-on version of former President Barack Obama, but on April 17 he released an even more dead-on impression: a video in which the Get Out director used face-swapping technology to play the former president and shine a light on how artificial intelligence is making fake news even harder to distinguish from the real thing.
Peele made his “ObamaPeele” video using FakeApp, a free online application that allows users to swap faces in a video with a face from another video. These so-called “Deepfakes” have, in the past few months, been used to graft stars’ faces onto pornographic media and have subsequently been officially banned from online platforms including Reddit, Twitter and Pornhub. (A BuzzFeed investigation published April 25 found more than 70 Deepfake videos still up on the latter site.)
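The face-swap technique behind apps like these is commonly built from a pair of autoencoders that share a single encoder: the shared encoder learns pose and expression, each decoder learns to reconstruct one person's face, and a swap routes person A's encoded face through person B's decoder. The NumPy sketch below illustrates only that routing; the random weights, layer sizes and names are illustrative stand-ins for a trained network, not FakeApp's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Random affine layer standing in for trained network weights.
    return rng.standard_normal((n_in, n_out)) * 0.1

FACE, LATENT = 64 * 64, 128          # flattened face image, latent code size

encoder   = layer(FACE, LATENT)      # shared: captures pose and expression
decoder_a = layer(LATENT, FACE)      # would be trained on person A's face
decoder_b = layer(LATENT, FACE)      # would be trained on person B's face

def swap(face_a):
    """Encode A's expression, then decode with B's identity."""
    latent = np.tanh(face_a @ encoder)
    return latent @ decoder_b        # B's face wearing A's expression

fake = swap(rng.standard_normal(FACE))
print(fake.shape)  # (4096,)
```

In a real deepfake pipeline both decoders are trained jointly against reconstruction loss on thousands of face crops, which is what makes the swapped output track the source performance frame by frame.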
But in Hollywood, the technology behind these viral GIFs and videos, which uses AI to swap one person’s face for another, isn’t going away anytime soon. For major digital effects studios, such as Industrial Light & Magic (ILM), an AI that can successfully and convincingly map a famous actor’s likeness onto another performer’s would save time and production costs. These digital modifications can take anywhere from half a day to several months to complete.
“It’s been talked about here, mostly in the context of, ‘Yeah, it’s probably worth doing a more formal evaluation of the technology,'” John Knoll, chief creative officer of ILM, tells The Hollywood Reporter. “The techniques that we currently use are not inexpensive to do, so if there’s technology that potentially allows us to get a satisfactory result with a lot fewer man hours, there’s a desire to [use that].”
While Deepfakes have earned notoriety for videos that fake pornographic content featuring famous actresses, falsify speeches by prominent politicians and insert Nicolas Cage into iconic films he never actually appeared in, Hollywood has long swapped faces — just using different tech.
“I’d say in some form this has been going on for 20 years,” says Darren Hendler, head of the digital humans group at effects studio Digital Domain. “How far we can take it is getting further and further. Twenty years ago you’d see a stunt with the lead actor’s head sort of painted on over the stuntman, maybe because it was too dangerous, expensive, etc. Now, you can do so much more. Things can be altered, adjusted, modified after the shoot.”
Such modifications have been prominently featured in recent films such as last year’s Guardians of the Galaxy Vol. 2, which showcased a de-aged, 1980s version of star Kurt Russell, and 2016’s Rogue One, which re-created Peter Cushing’s Grand Moff Tarkin character from 1977’s Star Wars: A New Hope, despite the fact that Cushing died in 1994. But these modifications extend as far back as 2000’s Gladiator, which digitally pasted a CG version of Oliver Reed’s face onto another performer after Reed died of a heart attack during production.
Hendler says that Digital Domain is actively working on a version of AI technology to do something similar to Deepfake face-swapping. “We all foresee a future, and we’re working on technology to do exactly this, where it’s driven by AI and various types of machine learning,” he says.
With the quality of face-swapped videos expected to improve, experts are seeking solutions to a problem that Deepfakes pose: discerning when videos are real and when they are fake. As AI tech continues to improve, that task may become even more difficult, given how many new videos an automated program could churn out compared to slower, manually produced content.
Alex Fry, a compositing supervisor at Animal Logic, a creative digital studio, says that face-swapping and AI-assisted VFX are “something that comes up a lot within the company.” Animal Logic, he noted, looked “heavily” into AI that can make photos or video resemble an artwork for a storybook sequence in the company’s 2018 film Peter Rabbit, for instance.
While Deepfake videos still don’t approach the quality and realism of, say, the Tarkin character in Rogue One, CG companies have reason to anticipate that technology will improve quickly. “This is one of those disruptive industries for sure,” says Siraj Raval, a data scientist, AI educator and YouTuber who made a recent video explaining Deepfakes.
Raval also anticipates that within the decade CG companies could be competing with groups of developers. “What’s going to happen to this is exactly like what happens to [photo filter] apps,” he says. “Some startup’s going to say, ‘There’s clearly a need for this, we can provide value by creating a web app and allowing them to access it for free.’ That’s got to happen very soon.”
Still, automating VFX via AI doesn’t allow companies to do the small changes that films require, Fry says. “For us, the issue is not that you don’t get nice results out of it, the issue is that it’s very hard to control and to art-direct.”
Fry explains that Animal Logic must often make “small, incremental” changes for the director or art director, but, for now, AI processes often only allow for a limited number of changes. “It was great for happy accidents and interesting effects, but it was very hard to control,” Fry says of AI.
Even if VFX companies don’t immediately embrace Deepfakes, however, they may adopt similar AI technologies quite soon. De-noising 3D renders (filtering sampling noise out of an image after rendering, while preserving its detail) is currently handled mainly by render farms, but some AI-based apps have been successful in doing it themselves, Fry says. Knoll adds that he’s seen promising advancements in using AI to “infill,” or fill in blank areas, when production materials are removed from a shot, like a rig or a light.
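The de-noising Fry describes amounts to smoothing away the speckle left by Monte Carlo rendering while keeping the underlying image. Learned denoisers do this with a neural network trained on clean/noisy pairs; the toy NumPy filter below (a plain sliding-window mean over a made-up 1-D "scanline") is only a crude stand-in to show the measurable effect, not any studio's actual tool.

```python
import numpy as np

rng = np.random.default_rng(1)

clean = np.linspace(0.0, 1.0, 100)           # stand-in for a rendered scanline
noisy = clean + rng.normal(0.0, 0.2, 100)    # Monte Carlo-style sampling noise

def denoise(signal, radius=3):
    """Sliding-window mean: a crude stand-in for a learned denoiser."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(signal, kernel, mode="same")

smoothed = denoise(noisy)

# Error against the clean signal drops after filtering.
print(np.mean((noisy - clean) ** 2) > np.mean((smoothed - clean) ** 2))  # True
```

The appeal for studios is the same trade-off in both cases: render far fewer samples per pixel, then let the denoiser recover an image that would otherwise require much more farm time.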
“We’re going to start to see an explosion of this type of tech out to the mainstream, to all different areas. It’s something we’re actively working on,” Hendler says. “You’re now seeing a time where you can start to create realistic digital people, no longer just for features, but in real-time presentations like a lecture. The tech is accelerating to the point where it’s no longer just available to big-budget feature films.”