In late May, Microsoft laid off approximately 77 editorial workers in a switch to an automated system that would rely on A.I. to choose news articles and content to feature on MSN.com and other Microsoft news apps. The move was a particularly dystopian one in a moment of mass layoffs at newsrooms around the country. Without human editors, the A.I. is now responsible for sifting through vast amounts of content, processing it for the website, and automatically attaching an appropriate header image to each article. But just one week after Microsoft let those workers go, the A.I. confused two mixed-race celebrities with each other at a time when racial inequalities are dominating the global conversation.
The story focused on a mixed-race member of the British girl group Little Mix, Jade Thirlwall, in which she shared her personal reflections on racism and the racist abuse she endured when she was young. As if to underscore the frustrations people of color experience, the A.I. paired her story with an image of a different mixed-race member of Little Mix, Leigh-Anne Pinnock.
Thirlwall posted an Instagram story calling out MSN's error.
"@MSN If you're going to copy and paste articles from other accurate media outlets, you might want to make sure you're using an image of the correct mixed race member of the group," she wrote. "This shit happens to @leighannepinnock and I ALL THE TIME that it's become a running joke [...] It offends me that you couldn't differentiate the two women of colour out of four members of a group … DO BETTER!"
A report from The Verge suggested that the error might have occurred because the source of the photos had mislabeled them. But it's a mistake that could have been caught by human editors if MSN hadn't, you know, fired all of them. In the end, it had to be corrected by the remaining human staff anyway.
Although this incident may have stemmed from an A.I. grabbing mislabeled photos, it still calls attention to the tech industry's persistent problem with identifying non-white faces. Facial recognition is another field where the technology fails miserably, and those failures can carry serious consequences if authorities keep relying on such flawed systems.
Megan Goulding, a lawyer with a civil liberties advocacy group in the U.K., told the BBC in 2019 that facial recognition technology still misidentifies women and minorities at a disproportionate rate. "If you are a woman or from an ethnic minority and you walk past the camera," she said, "you are more likely to be identified as someone on a watch list, even if you are not [on a watch list]. That means you are more likely to be stopped and interrogated by the police."
Civil rights and privacy advocates have been pushing back against facial recognition technology through lawsuits and public awareness campaigns for years, with the ACLU decrying its use as "persistent government surveillance on a massive scale." U.S. government researchers also found that these systems misidentify Black faces five to ten times more frequently than white faces. The bias is so glaring that, in April, Microsoft turned down a law enforcement agency's request to add facial recognition technology to officers' cars and body cameras, citing concerns that women and minorities would be unfairly targeted.
According to Microsoft president Brad Smith, the law enforcement agency wanted to run face scans of anyone its officers pulled over, whether under suspicion or not, to check for a match in a database of suspects. Smith didn't name the agency, but he said the company weighed the risks to minorities and women and ultimately told the agency that "this technology is not [the agency's] answer."
Misidentification and the harassment of minorities by law enforcement is a large enough concern that, on Monday, IBM announced it would abandon its facial recognition products altogether, citing both business considerations and serious human rights concerns.
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency," wrote CEO Arvind Krishna in a letter to Congress.
Krishna encouraged the members of Congress to use the momentum of the George Floyd protests to call for police reform and consider how biased facial recognition could play a part in law enforcement abuse if it isn't addressed quickly. "We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies."