
YouTube CEO Susan Wojcicki says that the Google-owned streamer will take new steps to address hateful and exploitative videos on its platform.
The moves, outlined in a pair of blog posts published late Monday evening, come after the world’s largest video platform has weathered a series of advertiser revolts over videos found on its site, from fake news to extremist propaganda to predatory and inappropriate children’s videos.
“This has been a year of amazing growth and innovation,” Wojcicki writes in the post addressed to YouTube’s creator community. But, she acknowledges, “In the past year, we saw a significant increase in bad actors seeking to exploit our platform.”
Wojcicki says that over the past year, YouTube has invested in new systems to combat inappropriate videos, updated its policies on what content can appear on the platform and deployed new machine-learning technology to help enforce those policies. As a result, YouTube has removed 150,000 videos for violent extremism since June.
Now, YouTube will step up those efforts following reports that child predators were posting inappropriate comments on videos of children, and that videos featuring popular kids’ characters were being used to propagate inappropriate messages or behavior. Wojcicki writes on the official YouTube blog that through such reports she has “seen up-close that there can be another, more troubling, side of YouTube’s openness. I’ve seen how some bad actors are exploiting our openness to mislead, manipulate, harass or even harm.”
One of the first steps is growing YouTube’s trust and safety teams to more than 10,000 people in 2018. “Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content,” she writes.
YouTube will also use its machine-learning technology — which can remove five times as many videos as human reviewers can — to address content that violates its guidelines. In addition, it will begin publishing a regular report in 2018 with data about which videos are flagged and what actions are taken.
To address advertisers’ concerns, YouTube will establish sweeping new guidelines for which channels and videos are eligible for advertising. For creators, the goal is to reduce inaccurate demonetizations — instances when a video is wrongly marked as not appropriate for ads.
Twelve-year-old YouTube has steadily grown to more than 1.5 billion users, who collectively watch over a billion hours of video each day. That scale has made content moderation challenging for the company, even with Google’s technological resources. Facebook, Twitter and Google have faced similar challenges over the past two years as their platforms were manipulated to spread Russian propaganda meant to influence the 2016 presidential election.
The backlash on YouTube over inappropriate children’s content has been especially severe, with brands including Mondelez and Mars pulling advertising that ran alongside some of these videos.
Wojcicki, herself a mother of five, writes that YouTube has helped “enlighten” her children and given them “a bigger, broader understanding of our world and the billions who inhabit it.” But she acknowledges that there’s work to be done, noting that “as challenges to our platform evolve and change, our enforcement methods must and will evolve to respond to them.”