Slacker’s Syllabus: The problem with facial recognition

Illustration by Lorenza Centi

Technology is amazing and the more we have, the better — or at least, that’s often what we’re told.

But what if some technology doesn’t make the world better for everyone?

What if it opens up some communities to increased policing and surveillance? Do we try to fix it, or do we cut our losses?

These are some of the questions defining conversations around facial recognition.

What is facial recognition?

Face recognition is a “method of identifying or verifying the identity of an individual using their face,” according to the Electronic Frontier Foundation.

It’s a form of biometric identification (like fingerprinting) because it uses body measurements and physical characteristics to match a scan to a person.
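At its core, that matching step comes down to comparing lists of measurements. The toy Python sketch below (all numbers, names, and the 0.6 threshold are made up for illustration; real systems derive these measurements with neural networks) shows the basic idea of declaring a “match” when two sets of facial measurements are close enough:

```python
import math

def euclidean(a, b):
    # Straight-line distance between two lists of facial measurements.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_match(scan, enrolled, threshold=0.6):
    # A new scan "matches" an enrolled face if the distance is small enough.
    # The threshold is arbitrary here; tuning it trades false matches
    # against missed matches.
    return euclidean(scan, enrolled) < threshold

enrolled_face = [0.11, 0.52, 0.33]  # stored when a face enters a database (toy values)
new_scan = [0.12, 0.50, 0.35]       # captured later, e.g. by a camera (toy values)
print(is_match(new_scan, enrolled_face))  # similar measurements -> True
```

The key point of the sketch: the system never “knows” a face, it just thresholds a distance — which is exactly where the errors discussed below creep in.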


Facial recognition is the technology Facebook previously used to tag people in photos, and the reason you can unlock your phone with just a look.

But its use is deeper than that.

In the United States, at least 42 federal agencies use facial recognition technology, and real-time facial recognition has been deployed in cities like Detroit and Chicago.

Why is facial recognition a problem?

Facial recognition isn’t foolproof.

If you’re not a cis white guy, chances are high that facial recognition won’t be able to do the one thing it’s meant to do: identify you.

This is a problem across companies and algorithms. In 2019, researchers testing popular services from Amazon, Clarifai, IBM, and Microsoft found they were unable to classify transgender or nonbinary people.


You could dedicate entire lectures to unpacking why facial recognition is so bad at “seeing” people. But to boil it down, remember:

Algorithms aren’t neutral.

The algorithms making up facial recognition have to be built by somebody. And those somebodies will build their own implicit biases into those algorithms.

Think of how rampant misogynoir is in the U.S. Now, consider that a 2019 study found Amazon's Rekognition often classified dark-skinned women as men.

And yet, facial recognition is still used by law enforcement.

As many as 1 in 4 police departments across the U.S. can access facial recognition — and its use is largely unregulated.

Facial recognition has already led to false accusations against Black men, and law enforcement officers have used it to target protesters.

About half of all American adults are in a facial recognition database.

Often, these databases are built without people’s knowledge. In 2019, IBM released its “diversity in faces” dataset to reduce bias in facial recognition. Nearly a million photos were pulled from Flickr — but most of the people photographed had no idea.

That same year, The Washington Post reported that state driver’s license photos are also a “gold mine” for facial recognition.

Can we fix it?

Short answer: Nope.

The companies making facial recognition technology will always argue that it should exist because, well, they’re making money.

Similarly, law enforcement will argue in favor of facial recognition because it gets them more money in their budgets.

But we don't need every form of technology that exists, regardless of what the tech bros say.

Making facial recognition better at “seeing” people isn’t helpful.

Facial recognition is ultimately a surveillance technology.

What happens when law enforcement is able to perfectly identify everybody at the next protest against police brutality?

Many advocates say there’s no need to try “improving” facial recognition when we could instead just acknowledge that it doesn’t need to exist.


In a country where crime prevention already associates Blackness with inherent criminality, why would we fight to make our faces more legible to a system designed to police us? … It is not social progress to make Black people equally visible to software that will inevitably be further weaponized against us.

Facial recognition bans are happening.

Several cities, including San Francisco, Boston, and Somerville, Massachusetts, banned the technology’s use in police investigations and municipal surveillance programs.

And multiple states, including Maine, have also banned most government use of facial recognition.

These bans have their limitations, though. Most of them focus on government use of the technology, while allowing private entities to continue playing with it largely unchecked.

Plus, city- or state-level bans don’t have any bearing on the federal government.


The next step is banning facial recognition at a federal level and tackling the surveillance industry.

Companies should not be able to profit off of surveilling people. And individuals shouldn’t have to constantly worry about being watched.

To learn more:

A number of organizations, individuals, and communities have taken up the fight against facial recognition — as well as an overall reimagining of how technology can better serve marginalized communities.

If you want to learn more, check out Detroit Community Tech, the Surveillance Technology Oversight Project, and Lucy Parsons Labs.


Thanks for reading!