There’s a fascinating juxtaposition in Shalini Kantayya’s cautionary documentary Coded Bias. The engaging principal guide in this journey through our 21st century algorithmic hellscape is Joy Buolamwini, a young African American MIT Media Lab researcher who discovered that most facial recognition software is wildly inaccurate when it comes to darker-skinned faces and those of women. Her evolution from robotics geek to scientist to activist, addressing Congress about the urgent need for regulation in this Wild West field, gives the film an inspiring throughline. She not only gets results — she also composes cool spoken-word poetry about coding.
Providing a stark contrast, a skateboarding hipster from the same age group in Hangzhou, China, enthusiastically endorses facial recognition technology even for something as basic as a vending machine soda purchase. She views the government’s social credit system, described by one American expert in the field as “algorithmic obedience training,” as a valuable tool for the betterment of society. To those of us not living under authoritarian regimes — or not quite — the notion of a government monitoring our every movement while awarding or subtracting points for Party loyalty, not just to individuals but also to their friends and family, is a monstrous violation of privacy.
RELEASE DATE Nov 11, 2020
One of the points made with sobering efficiency in Kantayya’s riveting study is that we have already surrendered more than enough personal data to enable a surveillance state arguably even more insidious. That’s because unlike China’s transparent version, ours is a stealth operation in which government and corporate information access are intertwined. “Everybody has a stake,” says one interviewee. “Everybody is impacted.”
It’s neither a secret nor a surprise at this point that every internet purchase, every bill paid online, every tracking app and social media interaction since the advent of the smartphone is being stored in a “weapon of math destruction.” We all know that the decision-making power of computers excludes the crucial element of human empathy. But absorbing the snowballing evidence amassed here of how data can be leveraged, especially against vulnerable minorities, is like the scariest episode of Black Mirror you’ve ever seen. Scary because it’s real and because, without resorting to dystopian alarmism, the film raises legitimate questions about whether it’s already too late for warnings.
Buolamwini’s discovery occurred by chance while she was working on an art project called an “Aspire Mirror,” designed to superimpose uplifting images over her own face, like a lion or Serena Williams. But the computer vision software literally refused to see her until she put on a white mask.
It’s a powerful metaphor for technology developed predominantly by white men, and therefore inadvertently programmed for maximum accuracy when scanning their faces. It also allows Kantayya to point up the gap between our sci-fi notions of the limitless imagination of futuristic technology and the far narrower reality of artificial intelligence, which all boils down to math. What we tend to think of as a vast pool of technology in reality was generated by a small and homogeneous group of people.
Those gatekeepers have allowed unconscious bias to creep into technology. The inherent risk of this in facial recognition software is illustrated during trials conducted by plainclothes U.K. police, when a 14-year-old Black student is detained while exiting a London tube station, hauled off down an alley to be searched, interrogated and fingerprinted, and then released in a shaken state after being told he was misidentified. The U.K. civil liberties group Big Brother Watch teamed with a member of Parliament to mount a legal challenge against the Home Office, invoking the General Data Protection Regulation, a bulwark against the misuse of personal information against individual rights.
The U.S., where six of the nine companies building the future of AI are based (the other three are in China), has no such protection in place. Lawmakers in June this year introduced legislation to ban federal use of facial recognition, and a number of major U.S. cities already have banned its municipal use. Also in June, Amazon announced a one-year pause on police use of its facial recognition technology.
These results, achieved since the film’s premiere in January at Sundance, show the power of activism. Buolamwini’s research led first to a New York Times feature that had considerable fallout and then to Congress, garnering the attention of Alexandria Ocasio-Cortez, among others. But this burgeoning civil rights battle is destined to continue; the idea of invasive technology being locked away out of harm’s reach seems a pipe dream. And with the faces of over 117 million Americans now stored in law enforcement facial recognition databases, often with inaccurate matches, the implications are staggering.
At a time when militarized federal agents are being deployed to break up protests in Portland, with other American cities likely to follow, the example of how Hong Kong protesters were targeted by the Chinese government hits home with a wallop.
Editors Alexandra Gilwit, Zachary Ludescher and Kantayya craft a propulsive narrative out of their findings that is dense with factual data but always lucid and compelling. In addition to Buolamwini, a number of authors and academics in the field help build a far-reaching case against the ways in which artificial intelligence and data collection can be used to govern our liberties.
The ramifications for credit scores, loan assessments, health care access, mortgage refinancing requests, welfare administration and job applications are eye-opening, particularly when ZIP codes, ethnicity, family income and police records factor into those processes, thereby perpetuating historic inequalities. Coded Bias argues persuasively that Big Data remains blindfolded about the discrimination embedded in our technology. The ordeal of an exemplary Houston elementary school teacher with a distinguished record of service who was fired based on the biased analysis of a value-added model is a shocking example of how such performance assessment algorithms can fail in real-world applications.
This is also just one instance of how AI is evolving without the benefit of fundamental Western democratic ideals baked into the system. “It’s about powerful people scoring powerless people,” says mathematician Cathy O’Neil. While AI is being developed in the U.S. mostly for commercial applications, Kantayya provides evidence of a less benign use through the fight of residents in a Brooklyn housing complex to prevent their landlord from installing a biometric security system that would basically turn their home into Rikers Island.
Author Virginia Eubanks cites the old sci-fi saw that the future is already here, just not evenly distributed, with the rich getting the fancy tools first. But she flips that assumption, pointing out that AI is being used in the most punitive, invasive ways, with surveillance tools being tested in impoverished social environments where there are low expectations concerning respect for human rights. The next step will be those technologies being ported out into the wider community.
The political implications of technology being used to swing elections are touched on only briefly, but enough to resonate, especially given that Facebook experiments already have shown success in that field. We’re now just relying on Mark Zuckerberg’s word that the company will stay out of that fray.
What Kantayya and a fiercely articulate group of female mathematicians and data scientists leading the charge for ethical use of technology are saying, basically, is don’t get too comfortable. It might seem harmless when you buy underwear online and your browser is suddenly deluged with ads for skivvies, or you watch a rom-com and your streaming platform lines up 15 sappy love stories for you. But this fast-moving, dynamically assembled film — which makes sharp use of classic sci-fi visual concepts and screen graphics to echo its themes — is here to remind us that the struggle between humans and machines over decision-making is only going to get messier.
Production company: 7th Empire Media
Distribution: Science on Screen (virtual theaters)
Director-producer: Shalini Kantayya
Director of photography: Steve Acevedo
Music: Katya Mihailova
Editors: Alexandra Gilwit, Zachary Ludescher, Shalini Kantayya
Visual effects: Zachary Ludescher, Alexandra Gilwit
Venue: Provincetown Film Festival Reimagined