Since 2013, Chicago has been home to one of the most controversial crime-prevention experiments in the country: the Strategic Subjects List.
Spearheaded by the Chicago Police Department in collaboration with the Illinois Institute of Technology, the pilot project uses an algorithm to rank and identify people most likely to be perpetrators or victims of gun violence based on data points like prior narcotics arrests, gang affiliation and age at the time of last arrest. An experiment in what is known as "predictive policing," the algorithm initially identified 426 people whom police say they've targeted with preventative social services.
The American Civil Liberties Union has criticized the police department's lack of transparency about whose names are on the list and how the list is being used. Digital-rights group Electronic Frontier Foundation, meanwhile, has said that the project could lead to increased surveillance.
But the most damning revelation about the program only just emerged: It doesn't work.
A recently published study by the RAND Corporation, a think tank that focuses on defense, found that using the list didn't help the Chicago Police Department keep its subjects away from violent crime. Nor were list subjects more likely to receive social services. The only noticeable difference the list made was that people on it ended up arrested more often.
"The pilot effort does not appear to have been successful in reducing gun violence," the study reads.
The researchers couldn't determine why those on the list were more frequently arrested, but the dozens of interviews conducted during the study provided a clue.
"It sounded, at least in some cases, that when there was a shooting and investigators went out to understand it, they would look at list subjects in their area and start from there," Jessica Saunders, the lead author of the study, told Mic.
In other words, some officers had used the list to target subjects for investigation in nearby crimes.
The idea that a futuristic, pre-crime program would lead to a heavier-handed approach toward those who are already heavily policed aligns with the criticisms that rights-focused groups have been leveling at programs like the SSL since its inception.
"This new study is extremely concerning, because what it says is that we're concentrating enforcement activities onto the usual subjects," said David Robinson, principal of civil rights-focused tech firm Upturn.
A spokesperson for the Chicago Police Department told Mic in an email that the list is "not used as a heat list or arrest list," and said the department will spend some time evaluating the study's findings.
Chicago police emphasized that the study did not evaluate the predictive model itself, but rather focused on the intervention strategy.
"This study does not evaluate the validity or reliability of the model, but rather focuses on the impact of the predictions being used in practice," the provided statement says.
Jonathan Lewin, Chicago PD's chief technology officer, told Mic last year that the program was an effort to save lives, to find people vulnerable to violent crime so that they can get services like job training and drug treatment. It was not intended to identify targets for arrest.
"When someone calls 911, something's already happened," Lewin said while talking about Chicago's predictive policing. "We're trying to get ahead of that problem before it escalates into a tragic outcome. Chicago isn't the only place intervening with risky people in order to save lives. We're just doing it with science."
But a key problem with the program is how police followed up with those who made the list. In a "majority of districts," police were either told nothing about how to deal with list subjects, or only instructed to increase contact with them.
"It is not at all evident that contacting people at greater risk of being involved in violence — especially without further guidance on what to say to them or otherwise how to follow up — is the relevant strategy to reduce violence," the study says.
The Chicago Police Department spokesperson also noted that the list evaluated in the study is an old version; the department is now on its fourth iteration of the model, which it told Mic is "nothing like the one in current use."
Saunders told Mic she has seen newer versions of the list, and that they are more accurate at predicting who is at risk for violent crime. But the stakes are high when you're dealing with human subjects.
"If the intervention is that you're going to receive services, perhaps it doesn't matter if we have many false positives," she said. "But if we're doing something that curtails their civil liberties, then any false positives are going to be a big deal."
The field of predictive policing includes a portfolio of tools that use algorithms and statistical models to predict the people and places at highest risk for crime. But often, these new tools are used to reinforce old police habits. Pre-crime maps often refer police patrols to neighborhoods most cops would already consider problem areas. A recent ProPublica investigation showed that algorithms meant to make sentencing fairer have resulted in harsher sentences for black Americans.
Another study involving crime-mapping in Shreveport, Louisiana, found police were given insights on where crime might occur, but little useful guidance on what to do with them — the same problem with the Chicago Police Department pilot program.
"If we're going to go down this road of prediction, we need to spend time thinking about how we might use this kind of list," Saunders said. "I think these things are good ideas, but they're not ready to be put in the field, because no one knows what to do with them."
Aug. 17, 2016, 12:34 p.m.: This post has been updated with further comment from the Chicago Police Department.