Across industries, algorithms are increasingly used to automate complicated or cumbersome tasks. However, these algorithms are not benefiting everyone equally. A new study has found that a widely used health care algorithm was biased against Black patients, making them substantially less likely than white patients to be flagged for important treatment.
Published in Science, the study was conducted by researchers from the University of California, Berkeley; the University of Chicago Booth School of Business; and Partners HealthCare in Boston. Although the study didn't specifically name the algorithm's creator, researchers focused on one that's widely used in the industry to screen patients for "high-risk care management."
These programs provide extra care, usually on a one-to-one basis, to patients with complex health needs. However, the algorithm ended up prioritizing healthier white patients over sicker Black ones for access to that extra care.
The bias arose because, to determine who needed that access, the algorithm looked at how much it cost a provider to care for each patient, then prioritized patients with higher care costs.
Algorithms by themselves are not able to measure how sick somebody is, so looking at care costs is meant to be a substitute — the idea being that if it costs more to care for someone, they're probably sicker. However, structural racism in the healthcare industry leads to a lack of access to care for many Black patients.
"The bias arises because the algorithm predicts health care costs rather than illness, but unequal access to care means that we spend less money caring for Black patients than for White patients," the researchers wrote. "Thus, despite health care cost appearing to be an effective proxy for health by some measures of predictive accuracy, large racial biases arise."
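The mechanism the researchers describe can be sketched with a toy simulation. The data below is invented for illustration (it is not the study's data), and the 0.6 "access gap" factor is an assumption standing in for unequal access to care: the same level of illness generates less spending for Black patients, so at any given cost level, Black patients are sicker.

```python
import random

random.seed(0)

def patient(group):
    """Simulated patient: true illness burden plus observed care cost."""
    illness = random.uniform(0, 10)                # true health need
    spend_rate = 0.6 if group == "black" else 1.0  # assumed access gap
    cost = illness * spend_rate + random.uniform(0, 1)
    return {"group": group, "illness": illness, "cost": cost}

patients = [patient("black") for _ in range(2000)] + \
           [patient("white") for _ in range(2000)]

# Compare average true illness among patients with similar observed costs.
band = [p for p in patients if 4 <= p["cost"] <= 6]

def avg_illness(group):
    members = [p["illness"] for p in band if p["group"] == group]
    return sum(members) / len(members)

print(f"avg illness at equal cost — Black: {avg_illness('black'):.1f}, "
      f"white: {avg_illness('white'):.1f}")
```

In this sketch, a model trained to predict cost sees the two groups as equally "risky" at equal spending, even though the Black patients in that cost band carry a heavier illness burden.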
The bias is so significant that researchers said fixing it “would increase the percentage of Black patients receiving additional help from 17.7 to 46.5%.”
“That bias is fixable, not with new data, not with a new, fancier kind of neural network, but actually just by changing the thing that the algorithm is supposed to predict,” Ziad Obermeyer, an associate professor at the University of California, Berkeley and the study's lead author, told The Verge.
According to the outlet, all researchers had to do was point the algorithm at a different variable. For example, focusing on a subset of specific costs, like emergency room visits, helped cut down on the bias.
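That fix, keeping the screening pipeline but swapping the label it ranks on, can be illustrated with the same kind of toy simulation. Again, the data and the 0.6 access-gap factor are assumptions for illustration, not the study's own data or method.

```python
import random

random.seed(0)

def patient(group):
    """Simulated patient: true illness burden plus observed care cost."""
    illness = random.uniform(0, 10)          # true health need
    gap = 0.6 if group == "black" else 1.0   # assumed unequal access
    cost = illness * gap + random.uniform(0, 1)
    return {"group": group, "illness": illness, "cost": cost}

patients = [patient("black") for _ in range(1000)] + \
           [patient("white") for _ in range(1000)]

def black_share(label, top_k=400):
    """Share of Black patients among the top_k flagged under `label`."""
    flagged = sorted(patients, key=lambda p: p[label], reverse=True)[:top_k]
    return sum(p["group"] == "black" for p in flagged) / top_k

print(f"flagged under cost label:   {black_share('cost'):.0%}")
print(f"flagged under health label: {black_share('illness'):.0%}")
```

Ranking on observed cost under-selects Black patients, while ranking on the health-based label brings the flagged group back toward parity with actual need, the same pattern the researchers report, achieved purely by changing the prediction target.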
However, the study speaks to a much larger problem. Anti-Blackness, both current and historic, is well-documented within the medical industry. You can look at the Tuskegee Syphilis Experiment, in which the United States government withheld syphilis treatment from Black men without their knowledge or consent, the shockingly high maternal mortality rate among Black women, and more.
Although many people see algorithms as new solutions, you cannot drop something into an anti-Black system and expect it not to begin replicating that harm. The fact that no one questioned whether care cost was a fair variable to use, given historical inequalities in healthcare, shows how anti-Blackness can be quietly reproduced.
Researchers may have fixed this algorithm by adjusting its variables. However, if the larger structural issues at play are left unaddressed, it's unlikely that this will be the last algorithm to show bias against Black patients.