Slacker’s Syllabus: Predictive Policing

The Washington Post/Getty Images
Algorithms make everything better.

Or at least, that’s what technochauvinism — the idea that tech is the solution to everything and computers are superior to people — wants you to believe.

The concept can be seen throughout society, including in one particularly disturbing area: predictive policing. Need to figure out where to deploy police and when? Want to know who is “likely” to commit crimes, or even prevent crimes before they happen?

Throw an algorithm at it.

Shutterstock

Proponents say making policing decisions via algorithms can solve crime faster, better, and without human error.

But predictive policing is technochauvinism at its worst.

As Artificial Unintelligence author and NYU professor Meredith Broussard said at Impact Labs’s 2018 Impact Summit, “We thought that [tech was the best solution] for a really long time, but we can look around now at the world we’ve created, and we can say it’s much more nuanced than that.”

So what exactly is predictive policing?

Predictive policing involves using algorithms to analyze massive amounts of information ... to predict and help prevent potential future crimes.

Place-based predictive policing draws on pre-existing crime data to determine which neighborhoods — and times — have high crime. Crimes can be weighted within algorithms to make one seem “worse” than another. For example, an algorithm can rate loitering as worse than jaywalking.

But there’s also person-based predictive policing, which tries to foretell which individuals or groups are more likely to commit a crime.
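To make the place-based idea concrete, here is a minimal sketch in Python. It is not any vendor’s or department’s actual algorithm; the crime weights, the incident records, and the scoring rule are all invented for illustration.

```python
# A toy illustration of place-based scoring. The crime weights, incident
# records, and scoring rule below are all invented; no real system works
# exactly this way.
from collections import defaultdict

# Hypothetical weights: this algorithm treats loitering as "worse" than jaywalking.
CRIME_WEIGHTS = {"jaywalking": 1.0, "loitering": 2.5, "robbery": 5.0}

# Hypothetical historical incidents: (neighborhood, hour_of_day, crime_type).
incidents = [
    ("Eastside", 22, "loitering"),
    ("Eastside", 23, "loitering"),
    ("Downtown", 9, "jaywalking"),
    ("Downtown", 17, "robbery"),
]

def hot_spots(records):
    """Rank (neighborhood, hour) buckets by their weighted incident totals."""
    scores = defaultdict(float)
    for neighborhood, hour, crime in records:
        scores[(neighborhood, hour)] += CRIME_WEIGHTS.get(crime, 1.0)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# The top-ranked buckets are where patrols get sent.
print(hot_spots(incidents))
```

Because the rankings come entirely from historical records, sending more patrols to the top-scoring buckets produces more recorded incidents there, which feeds the next round of predictions. That feedback loop is the “self-fulfilling prophecy” described later in this piece.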

Dave Collins/AP/Shutterstock
Predictive policing isn’t a new concept.

It dates back as early as 1973, when the Kansas City Police Department launched Operation Robbery Control, using computer data from its ALERT II system to map past robberies and “predict” future ones in the city.

Now, such tools are fairly widespread; per the Brennan Center, they’re mainly used by municipal police departments, but “private vendors and federal agencies play major roles in their implementation.”

Perhaps one of the most infamous predictive policing tools is COMPAS, developed by software company Equivant (formerly Northpointe), which uses an algorithm to create risk scores for recidivism.

Jamie Squire/Getty Images News/Getty Images

Alert II and the rise of criminal justice information systems more broadly began the long-standing process of turning Black people — people with bodies and experiences, hopes and interests, aspirations and legitimate grievances against their government — into abstract data.

COMPAS claims to predict recidivism.

In 2016, a ProPublica investigation analyzing Florida COMPAS data found that only “20% of the people predicted [by the algorithm] to commit violent crimes actually went on to do so.”

Even though 80% of those predictions — which generally rated Black people as more dangerous than white people — were wrong, COMPAS scores still impacted people’s sentences.

For example, ProPublica reported that in Wisconsin, Judge Scott Horne noted a defendant had been “identified, through the COMPAS assessment, as an individual who is at high risk to the community.” The judge issued a sentence of eight years and six months in prison.
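To make ProPublica’s 20% figure concrete, here is a back-of-the-envelope sketch. The group of 1,000 flagged people is an invented example size; only the 20% rate comes from the investigation above.

```python
# Back-of-the-envelope illustration of ProPublica's finding. The cohort size
# of 1,000 is invented; only the 20% accuracy rate comes from the investigation.
flagged = 1_000                                # people labeled "likely to commit violent crimes"
accuracy = 0.20                                # share of those predictions that came true
went_on_to_offend = int(flagged * accuracy)    # roughly 200 people
wrongly_flagged = flagged - went_on_to_offend  # roughly 800 people
print(went_on_to_offend, wrongly_flagged)      # 200 800
```

In other words, for every person the algorithm correctly flagged, roughly four others carried a “high risk” label that never came true.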

Shutterstock

77%

How much more likely Black defendants were than white defendants to be pegged as at higher risk of committing a future violent crime, per COMPAS’s algorithm.

ProPublica

The LAPD was one of the earliest adopters of predictive policing.

In 2008, the Los Angeles Police Department began working with federal agencies to explore its options. Several years later, it launched Operation Laser to target individuals and specific areas called “LASER [Los Angeles Strategic Extraction and Restoration] zones.”

The Stop LAPD Spying Coalition obtained a list of those targeted, which showed “nearly half ... are Black (even though Black people are 9% of the city’s population), some were as young as 16, and many are unhoused.”

In 2021, the LAPD ended its predictive policing programs.

Nick Ut/AP/Shutterstock
Predictive policing can’t be improved.

Like facial recognition, predictive policing is a form of technology that can never be made “better.” Calls to improve predictive policing algorithms are much like those to continuously reform the police:

Something built to serve an inherently unjust system will never be “fair.”

Predictive policing simply gives new excuses to cause harm by framing the technology as “neutral.”

Like policing as a whole, predictive policing must go.

Shutterstock

Predictive policing is a self-fulfilling prophecy. ... This system is tailor-made to further victimize communities that are already overpoliced — namely, communities of color, unhoused individuals, and immigrants — by using the cloak of scientific legitimacy and the supposed unbiased nature of data.

Read More

If you’re still wrestling with the idea of dismantling predictive policing versus fixing it, give this MIT Technology Review article a read.

You can also check out this community-led report by the Stop LAPD Spying Coalition on surveillance and policing in L.A.

Damian Dovarganes/AP/Shutterstock

Thanks for reading,
head home for more!