U.N. report warns that tech companies are targeting and mistreating poor people


Technology has always carried the potential to be an equalizer, a tool that shrinks the gap between the haves and have-nots by providing access to information and services that may once have been exclusive. Instead, it is widening that divide. A new report from the United Nations' Special Rapporteur on extreme poverty and human rights (an expert appointed by the organization) warns that as technology companies become more vital partners in setting up systems that provide and deliver essential services, they are creating a "digital welfare dystopia" in which the private sector is able to operate in a "human rights free zone." The result is that the poorest and most vulnerable populations are being targeted, monitored and monetized.

This is the third year in a row that Philip Alston, the expert tapped by the U.N. to report on this issue, has warned of the dangers of the emerging digital welfare state. In 2017 and 2018, he cautioned that applying technology to public services and the workforce could do "immense" harm to at-risk populations. At the time, Alston was focused solely on the United Kingdom and the United States. For his most recent report, he looked at submissions from 34 countries, including the U.K. and U.S. His findings suggest that the problems he foretold in 2018 have already started to come to fruition around the world.

One of the biggest violations currently befalling poor communities is the implementation of digital systems that rely heavily on data collection at the expense of privacy. Take, for example, a digital tool called System Risk Indication (SyRI) being used by the government of the Netherlands. The stated goal of the service is to detect welfare fraud, which it accomplishes by collecting massive amounts of data about citizens and algorithmically building "risk models" to determine who is most likely to commit benefit fraud. According to Alston, however, SyRI almost exclusively flags low-income residents, migrants and ethnic minorities living in the Netherlands as the people most likely to defraud the system, a result driven by all sorts of systemic factors that put those people at a disadvantage to begin with. None of that is accounted for by a system that only cares about who is a risk, not why they may be one.

Such a system could result in the citizens who need the most help being viewed with suspicion. They are assumed to be future fraudsters, despite never actually having committed fraud, simply because their risk profile suggests they will. That presumption of guilt, attached to people who have committed no crime, results in closer monitoring of their behavior, bypassing the due process that would be granted to anyone else receiving assistance. That level of scrutiny may become an obstacle that keeps those most in need from even seeking the help available to them. When they know they will be questioned every step of the way and that a presumption of criminality will be attached to their every action, they may simply choose not to participate in the system at all. On paper, that means SyRI worked: it prevented fraud from occurring. In practice, it means a vulnerable member of society in need of help has been pushed away and likely won't get access to the services they need.


Those human tolls are rarely accounted for in digitized systems that boil everything down to a number, and that is Alston's biggest concern. In just about every case where governments implement new digital systems and services, there are trade-offs with human rights. On the extreme end of the spectrum is India. The country's massive Aadhaar identity system requires citizens to surrender biometric data, including a photograph, ten fingerprints and two iris scans, that can be used to identify a person basically anywhere they go. Failure to participate in the program can result in limited access to necessary services, requiring people to surrender privacy and essential parts of their identity in exchange for basic needs.

There are plenty of smaller infractions occurring all over the world. In 2017, Alston focused on how the cities of Los Angeles and San Francisco were handling homelessness. He noted that many homeless citizens were subjected to the Vulnerability Index-Service Prioritization Decision Assistance Tool (VI-SPDAT), a digital service that requires those without homes to complete a survey of invasive questions in order to determine their risk level while living on the streets. After responding to questions like "Have you ever engaged in sex work?" and "Have you ever stolen medications?," respondents are assigned a number that is used to determine what housing situation may be best for them. The survey may feel like a small infraction, especially in return for a place to live, but Alston reported that many homeless people he interviewed saw it as an invasion of privacy that they had no real choice but to submit to if they wanted a roof over their heads. The idea of the system is to be sensitive to the needs of these populations and place them in homes best suited to them, but in doing so it diminishes them to little more than a number, stripping them of their actual, individualized needs and placing them in a category that can be easily sorted by an algorithm.

The idea behind introducing technology into these services is to simplify and streamline the process. Governments are often slow and ill-equipped to handle these tasks, so farming them out to tech companies can, in theory, improve the results. However, Alston warns that this shift has actually harmed the people who need access to these services the most. When services move to digital-only platforms, they leave behind people who don't have the technology needed to access them. When services turn to algorithms to supposedly improve fraud protection and distribution, they preemptively target the people who are most vulnerable. And because many governments assume these technologies operate in an unbiased, even altruistic manner, they are allowed to operate with minimal regulation and review.

Alston states in his report that it is time to change this approach to tech companies and the systems they create. "The starting point for efforts to ensure human rights-compatible digital welfare state outcomes is to ensure through governmental regulation that technology companies are legally required to respect applicable international human rights standards," he wrote. We can't trade human rights for the efficiency of an algorithm.