“Editors, relax, artificial intelligence is not going to kill your job,” said Norman Hollyn, editor (Heathers) and professor at the USC School of Cinematic Arts.
He made this comment in response to the potential of artificial intelligence and machine learning in content creation, which was the topic of a crowded session Tuesday morning at the National Association of Broadcasters Show.
AI and machine learning are among the biggest technology topics this week at the Las Vegas confab, but this session focused on how they can be used to make content creators more efficient across the production and postproduction workflow, from script analysis and budgets to editing, animation and visual effects. In fact, Hollyn believes filmmakers will need to embrace these increased efficiencies if they want a future in the business — one increasingly defined by more delivery requirements amid shrinking budgets and schedules.
Speaking about editing, Hollyn said AI can help editors organize their clips more quickly and easily. “AI is already proving very good at image recognition,” he said, citing the identification of colors or objects as examples. He added that it could also analyze audio and then compare and link it to known text (the script).
What really interests him is the ability of AI to identify sentiments and emotions in images (e.g., a smile conveying happiness). “Now we are getting to something I care about — how the character feels,” he said. He added that this could also serve as a sort of extension of how Oscar-winning editor Walter Murch (The English Patient) creates boards to follow story flow.
Hollyn next discussed the potential of AI technology with Todd Burke, principal solutions consultant for digital media at Adobe, which is working to incorporate AI into its content creation tools (Adobe Sensei technology). “The more specifically we can provide the results of tagging, the more efficient we’re going to make all of you,” Burke said of Adobe’s goal.
Rick Grandy, senior solutions architect at tech developer Nvidia, moved the discussion to VFX and animation, explaining how CG character animation could be driven by learned behavior. He demonstrated how data obtained through motion capture or other techniques might be used to train and drive a CG character — potentially even one based on a specific actor (e.g., a CG Harrison Ford).