Twitter says it doesn't know why its image-cropping algorithm prioritizes white faces

By Tebany Yune

Twitter's image-cropping algorithm came under fire for racism over the weekend because of an apparent bias against Black faces. The algorithm is part of the website's attempt to automatically determine the focus of a preview image by prioritizing facial features and text. However, users noticed that the algorithm was favoring white faces for its cropped previews in photos that contained both white and Black individuals.

Curious and horrified users quickly ran informal experiments of their own, and the algorithm consistently chose white faces (even white dog faces) as the focus of its previews.

The company has apologized for its algorithm, telling The Guardian that "it's clear from these examples that we've got more analysis to do" to eliminate any racial or gender bias within its tools. "We'll continue to share what we learn, what actions we take, and will open source our analysis so others can review and replicate."

It all started when PhD student Colin Madland made a thread on Twitter to address the racial bias of the videoconferencing app Zoom. Zoom uses an algorithm of its own to separate a user from their surroundings and replace the background with a digital one. Madland highlighted a Black colleague's struggle to stop the app from decapitating him in his own video.

As they attempted to find solutions, they came to a realization: the algorithm couldn't recognize his colleague's face as a face, so it was cropping him out along with the rest of the background.

While putting the thread together, Madland realized that Twitter was hiding his colleague in the cropped preview images as well.

Instead, the algorithm focused on Madland, and only Madland.

Other users quickly hopped into the replies to try their own experiments. One user posted images of a black Lab and a yellow Lab and found that Twitter's algorithm even discriminated against darker dogs. Another account backed up the findings with evidence of its own, showing that the algorithm regularly focused on a smaller white dog even when a larger, darker dog was closest to the camera.

Another user tried images of U.S. Senator Mitch McConnell and former President Barack Obama. The algorithm focused on McConnell in both pictures.

Interestingly enough, when someone in the replies added glasses to Obama's face, Twitter's image-cropping tool highlighted him instead, possibly because the glasses helped it identify his eyes.

This sort of racial bias and inability to note key features on Black faces is not uncommon in algorithms and facial recognition technology. Even big tech corporations have acknowledged that their services have much higher error rates when it comes to detecting and distinguishing Black faces. One Michigan man has already been falsely accused and arrested due to a mistaken match by facial recognition technology.

A study has also found that flawed algorithms used in U.S. hospitals have routinely discriminated against Black patients, assigning them lower risk scores than white patients with the same illnesses and thereby prioritizing the white patients for care.

As Madland pointed out, it's important to flag these weaknesses because organizations like law enforcement are already using facial recognition as a tool to make arrests. In the U.K., where police have been experimenting with the technology, civil rights groups have condemned its use, citing findings that women and ethnic minorities were disproportionately misidentified as criminal suspects.

One reason these algorithms display racial bias is the data they learn from. The tech industry has a big problem with diversity in the workplace, and it shows in products like these. If you feed an algorithm lots of data to teach it how to identify a face, and most of that data features white folks, then the tool is only going to be good at recognizing white faces.
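To make that concrete, here is a minimal, purely illustrative sketch of how an imbalanced training set skews a model. It is not Twitter's or Zoom's actual system: the data is synthetic, the "detector" is a stand-in logistic regression from scikit-learn, and the 95/5 group split and the `shift` parameter are assumptions chosen to exaggerate the effect.

```python
# Toy illustration (not Twitter's algorithm): train a stand-in "detector" on data
# where group B makes up only 5% of the training set, then compare accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic 'image features' and labels for one group.

    `shift` is a made-up stand-in for systematic differences between groups
    (lighting, skin tone, camera exposure); the correct decision boundary
    depends on it, so a model trained mostly on one group generalizes poorly
    to the other.
    """
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) - 5 * shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return X, y

# Heavily imbalanced training data: 1,900 samples from group A, 100 from group B.
Xa, ya = make_group(1900, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# Evaluate on balanced held-out sets: the under-represented group fares worse.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(1000, shift)
    print(f"{name} accuracy: {model.score(X_test, y_test):.2f}")
```

In this toy setup both groups are equally "detectable" in principle; the accuracy gap comes entirely from what the model was shown during training, which is the same dynamic researchers point to in real face detection and cropping tools.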

It's an example of "bad data in, bad data out," as programmers often put it. Something similar happened when Amazon tried to create an A.I. that could rate the potential of job applicants. After teaching the A.I. what kind of candidates the company prefers by feeding it resumes from past and current hires, the engineers found that it flagrantly discriminated against women.

It turned out that the pool of resumes mostly came from men, so the A.I. concluded that the company simply didn't want women as employees.

Although getting cropped out of Twitter and Zoom images isn't as significant as getting falsely accused of a crime, the discovery still highlights a big problem with using algorithms and A.I. to make important decisions about real people.

These tools are only as good as we humans can make them, and we're absolutely not ready to rely on them so heavily just yet — if ever.