As the video platform YouTube is rolling out generative AI technology to its creators, it is also placing new guardrails on the technology’s use.
The company on Tuesday set new rules on content created with the help of generative AI, including rules that crack down on videos that use someone else’s likeness, and gave music labels the ability to remove videos that feature the voice of a well-known musician or performer created without their permission.
“Generative AI has the potential to unlock creativity on YouTube and transform the experience for viewers and creators on our platform,” write YouTube VPs Jennifer Flannery O’Connor and Emily Moxley, in a blog post. “But just as important, these opportunities must be balanced with our responsibility to protect the YouTube community.”
When it comes to the issue of deepfakes and sound-alike audio, the company says that “we’ll make it possible to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice.” However, there will be limits on that ability:
“Not all content will be removed from YouTube, and we’ll consider a variety of factors when evaluating these requests,” the company adds. “This could include whether the content is parody or satire, whether the person making the request can be uniquely identified, or whether it features a public official or well-known individual, in which case there may be a higher bar.”
In the case of musicians and singers, YouTube says that it will give labels representing artists that are participating in YouTube’s AI tests the ability to request the removal of sound-alikes.
“In determining whether to grant a removal request, we’ll consider factors such as whether content is the subject of news reporting, analysis or critique of the synthetic vocals,” the company says.
And YouTube is also rolling out new disclosure requirements and content labels for generative AI-created content, particularly if it is realistic in nature, or touches on complicated or heated geopolitical events, elections, or other issues of public concern. Content created using YouTube’s AI tools will also be labeled.
The new rules and guidelines come as concern about generative AI has reached the upper echelons of Hollywood, with SAG-AFTRA and the WGA demanding strict rules around its use in their new contracts, and with music labels fretting about sound-alikes that use their performers’ voices on new lyrics on social platforms.
While the genie may not be put back into the bottle, companies like YouTube certainly seem focused on making sure that it is deployed in a way that is as safe as possible.
At an event at YouTube’s New York office in September, the company rolled out a suite of AI tools, including “Dream Screen,” which lets users create a video background with just a prompt. It is also developing new tools for artists and musicians, though the exact nature of what those tools will look like when deployed remains unclear.
“We’re tremendously excited about the potential of this technology, and know that what comes next will reverberate across the creative industries for years to come,” the company says. “We’re taking the time to balance these benefits with ensuring the continued safety of our community at this pivotal moment—and we’ll work hand-in-hand with creators, artists and others across the creative industries to build a future that benefits us all.”