8:21am PT by Eriq Gardner
YouTube Alleged to Racially Profile Via Artificial Intelligence, Algorithms
A group of African American content creators has launched a putative class action against YouTube and its Alphabet parent company for allegedly violating various laws intended to prevent racial discrimination. The lawsuit represents the latest courtroom attack on the increasingly controversial immunities afforded digital services under Section 230 of the Communications Decency Act.
The complaint, filed Tuesday in California federal court, objects to the way YouTube employs artificial intelligence, algorithms and other filtering tools. The social video site uses these tools to help viewers screen out potentially mature content, but the plaintiffs say its "Restricted Mode" acts as an improper censor. What's more, the complaint adds, the system is rife with "digital racism," with users essentially profiled by race, identity and viewpoint. The plaintiffs assert that this interferes with their ability to monetize their content. They allege that YouTube's conduct is "intentional and systematic, regardless of whether Defendants are motivated by ideological animus towards black and members of other protected racial classifications under the law."
More specifically, the suit alleges that YouTube is applying "Restricted Mode" to videos titled or tagged with abbreviations like "BLM" or "KKK"; ones that use terms like "racial profiling," "police shooting" or "Black Lives Matter"; those that include names of individuals killed by law enforcement; and videos titled or tagged with the names "Bill Cosby" or "Louis Farrakhan." Even when these videos contain no profanity or drug use, don't depict sexual activity or violence, include no specific details about events that resulted in death, and feature no other content that is gratuitously incendiary or demeaning toward an individual or group, Restricted Mode is allegedly applied nevertheless. The suit also targets other alleged practices at YouTube, ranging from "shadow banning" to what's excluded from the "trending" and "up next" video recommendation lists.
According to YouTube, its automated systems aren't designed to identify race.
Digital racism may be the ostensible target of this suit, but from the relief requested in the complaint to the attorneys involved, there should be no doubt that the ultimate endeavor is to disarm Section 230 as a defensive weapon used by digital giants like Google, Facebook and Twitter.
The now-famous provision of the 1996 law provides immunity to digital services for third-party content they host. The law also states that these interactive services can't be held liable for actions taken in good faith to restrict access to, or the availability of, material deemed "objectionable."
The original intention of Section 230 was to encourage digital services to moderate, but in some political quarters — particularly conservative ones — moderation is now being equated with censorship as digital sites face fire for how they are doing so. (Meanwhile, some liberals believe that social media platforms aren't doing enough to target hate speech and blame Section 230 immunities for the inaction.) See, for example, the hullabaloo over Twitter's decision to fact-check some tweets by President Donald Trump. In fact, the Department of Justice today issued its proposal to change Section 230, including by striking the provision that allows platforms to delete content deemed objectionable.
The latest suit is being handled by attorneys at Browne George Ross, who notably were involved in two other recent court actions against YouTube.
The first case was brought by PragerU on claims that conservative videos weren't being treated the same as liberal ones. The suit claimed a First Amendment violation, but it was rejected, and in February, the 9th Circuit Court of Appeals agreed that YouTube is a private forum, not a public one subject to First Amendment scrutiny.
The second case, still ongoing, is quite similar to the latest racial discrimination case. It was filed last year by LGBTQ YouTubers and alleges (among other things) that videos and channels with "gay," "bisexual," or "transgender" in the title are being unfairly targeted. At the time of filing, a YouTube spokesperson denied that its systems restricted based on these criteria. This month, California Judge Virginia Demarchi held a hearing on a motion to dismiss and was hesitant to allow the case to proceed given the PragerU precedent. Then again, Demarchi posed a hypothetical in which racial discrimination occurs against a content creator. The judge asked whether it would be immunized under the portion of Section 230 allowing digital services to restrict access to objectionable material in good faith. "I can imagine some courts taking the position that a properly pleaded claim of the sort that you describe as sort of facial race discrimination claim may not be good faith," acknowledged Google's attorney Brian Willen. (A transcript is attached as Exhibit E to the latest complaint.)
What makes the LGBTQ YouTuber case all the more intriguing is that the Department of Justice intervened just last month to defend the constitutionality of Section 230. (The DOJ's memorandum is Exhibit C to the latest complaint.) In doing so, government lawyers told the court, "Section 230(c) does not regulate or limit Plaintiffs' primary conduct, such as their expressive activities. For example, Plaintiffs do not allege that Section 230(c) prevents them from creating videos or posting them on the Internet."
The DOJ's intervention happened before Twitter decided to fact-check Trump, who quickly thereafter signed an executive order targeting Section 230.
Now, in the suit alleging that YouTube is racially discriminating, the plaintiffs claim Trump's order precludes the government from stepping in to apply Section 230 to claims based on viewpoint discrimination.
This newest suit again challenges the constitutionality of Section 230 and questions whether its immunities apply to civil rights violations. Other claims, ranging from breach of contract (via YouTube's terms of service) to false advertising, are thrown in for good measure.
YouTube says it is reviewing the complaint. As for Restricted Mode, the company notes it is an optional feature used by less than 2 percent of its users.