Last year, Twitter users noticed something strange. When they uploaded photos to the platform, it automatically focused on white faces. After shutting off the tool, Twitter staged a contest in which they invited people to find bias within its photo cropping algorithm, and boy, did they ever. The company revealed on Monday that researchers found the system preferred younger-looking people, thinner faces, and lighter skin.
The contest, announced last month, was part of a competition at the DEF CON security conference in Las Vegas, with prizes handed out to those who showed just how poorly trained the company's photo-cropping tool is. The winner, Bogdan Kulynych, a graduate student at Switzerland's EPFL, netted $3,500 for demonstrating how the tool automatically favored faces that are "slim, young, of light or warm skin color and smooth skin texture, and with stereotypically feminine facial traits."
To show just how messed up Twitter's algorithm was, Kulynych used computer-generated faces. That let him create otherwise identical faces and then modify details like skin color, age, and gender presentation, while also tweaking things like facial symmetry. He then ran those photos through Twitter's photo cropper, which adjusts focus based on what it deems most important or interesting. It was the faces of younger, whiter people that attracted the algorithmic spotlight.
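The core of this approach is counterfactual probing: vary one attribute at a time on otherwise identical faces and compare the scores the model hands back. A minimal sketch, assuming a `saliency_score` function that stands in for Twitter's actual model (which isn't public); the toy scoring logic here deliberately bakes in a bias so the probe has something to detect:

```python
def saliency_score(face):
    """Toy stand-in for the cropper's saliency model (assumption:
    the real model returns a single importance score per face).
    The built-in skew is illustrative, not Twitter's actual logic."""
    score = 0.5
    score += 0.2 if face["skin_tone"] == "light" else 0.0
    score += 0.1 if face["age"] == "young" else 0.0
    return score

def probe_attribute(base_face, attribute, values):
    """Vary one attribute on otherwise identical faces and report
    the score for each value, exposing any systematic preference."""
    results = {}
    for value in values:
        face = dict(base_face)
        face[attribute] = value
        results[value] = saliency_score(face)
    return results

base = {"skin_tone": "dark", "age": "old"}
print(probe_attribute(base, "skin_tone", ["dark", "light"]))
print(probe_attribute(base, "age", ["old", "young"]))
```

Because everything except the probed attribute is held fixed, any score gap can be attributed to that attribute alone, which is what made Kulynych's synthetic-face demonstration so clean.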
This was not the only bit of bias that was discovered in the photo cropper, though it did confirm the findings that users pointed out last year when the discovery of algorithmic bias first went viral. Twitter awarded second place to researchers at Halt AI, an algorithm auditing organization that found Twitter's tool also tended to crop out people with white or grey hair, resulting in the elderly getting cropped out of photos. Third place went to Roya Pakzad, the founder of technology and human rights non-profit Taraaz Research, for demonstrating that Twitter's image processing technology shows favoritism toward English text that appears in pictures while cropping out Arabic script.
While it's great that Twitter opened up its black box and encouraged people to try to determine all the ways its photo-cropping algorithm sucks, it shouldn't come as a surprise that the system behaved in a biased manner. Algorithms like Twitter's are often built by feeding a system massive amounts of data and training it to identify certain things. This process can produce effective results, but an algorithm is only as good as the data it's given. Human biases often make their way into these systems and are then exacerbated as the system learns to reinforce those blind spots. In Twitter's case, highlighting young, white faces likely wasn't an intentional decision; the system was probably trained mostly on those faces, an oversight by the humans who assembled the training data.
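The amplification step is worth making concrete. A minimal illustration with hypothetical numbers: if 90% of the training examples favor one group, even the simplest model that learns the majority preference will favor that group 100% of the time, turning a skewed dataset into an absolute rule:

```python
from collections import Counter

# Hypothetical training labels marking which face the crop favored:
# a 90/10 skew toward group "A" (the skew is the bug, introduced
# by whoever assembled the data).
training_labels = ["A"] * 90 + ["B"] * 10

# A naive model that simply learns the majority preference...
majority = Counter(training_labels).most_common(1)[0][0]

# ...now picks group "A" every single time: a 90/10 skew in the
# data becomes a 100/0 skew in the model's behavior.
predictions = [majority for _ in range(100)]
print(predictions.count("A") / len(predictions))  # → 1.0
```

Real models are far more sophisticated than a majority vote, but the dynamic is the same: the model has no notion of which patterns in its data are legitimate signal and which are human blind spots, so it reproduces both.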
Admittedly, the issue is a bit technical and probably kinda boring, but it's important. These inherent and unchecked biases can produce legitimately bad outcomes. In the case of the Twitter algorithm, it cropped some people out of photos. But algorithms used to process standardized tests have been shown to create disadvantages for students of color. Algorithms used to pair patients in need of kidney transplants with donors often overlooked Black patients and caused them to wait much longer for care. Facial recognition systems have been shown to falsely identify people of color, resulting in wrongful arrests and discrimination.
Failing to account for our own biases and treating computer systems as if they are somehow infallible, rather than a reflection (or even an amplification) of our own shortcomings, can do real-world harm. The more we demystify these systems and understand where they fail, the better off we'll be.