Campuses are embracing facial recognition technology. Students, however, are not.


The fear of school shootings, violence on campus, sexual assaults and other disturbing trends has schools from elementary to college scrambling for ways to keep their campuses safe. For some schools, that means installing cameras and embracing facial recognition technology in an attempt to verify the identities of people entering and exiting school buildings and dormitories. The problem: many of these technologies are invasive, infringing on students' privacy by constantly monitoring them and requiring the school to store and track their likenesses. On top of that, facial recognition has been shown to be largely inaccurate and discriminatory against women and people of color. Those facts have students campaigning against the adoption of this technology, arguing that they are not willing to surrender their rights for inadequate forms of protection.

Nonprofit digital rights advocacy group Fight for the Future announced this week that it is teaming up with Students for Sensible Drug Policy (SSDP) to launch a nationwide campaign to keep facial recognition and other biometric surveillance off school campuses. The effort, part of Fight for the Future's broader Ban Facial Recognition movement, aims to educate everyone from students to faculty members on the downsides of introducing surveillance technology in schools and includes a petition encouraging schools to reject these tools, even when they are pitched under the guise of protection. The organizations argue that whatever the intention behind the surveillance, it does more harm than good.

Facial recognition is wildly inaccurate

There is good reason for students and privacy advocates to question the benefits of living under constant surveillance, but foremost among them is that the technology does not work as advertised. Study after study has found that facial recognition systems produce a disturbing number of false positives. In the United Kingdom, the Metropolitan Police adopted facial recognition technology for use with surveillance cameras located around highly trafficked areas. A report from The Independent found that the systems produced a 98 percent false positive rate, meaning the vast majority of matches the cameras flagged were wrong. Even the most accurate systems, as tested by the United States' National Institute of Standards and Technology (NIST), produce about a 10 percent false positive rate, meaning roughly one in every 10 people is wrongly identified by the technology. Imagine if one out of every 10 people on a college campus was stopped and questioned under false pretenses because a camera misidentified them. For this reason, according to the Project On Government Oversight, facial recognition is still considered to carry an "unacceptable risk," particularly when used for security and law enforcement.
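To make the scale of that error rate concrete, here is a rough back-of-the-envelope sketch. The 10 percent false positive rate is the figure cited above from NIST testing; the number of daily scans is an assumed, illustrative figure for a mid-sized campus, not data from any study.

```python
# Illustrative arithmetic only: how quickly a 10 percent false positive
# rate adds up on a busy campus. The daily scan count is an assumption.

def expected_false_flags(daily_scans: int, false_positive_rate: float) -> float:
    """Expected number of people wrongly flagged per day."""
    return daily_scans * false_positive_rate

if __name__ == "__main__":
    daily_scans = 5_000          # assumed entries scanned per day (hypothetical campus)
    false_positive_rate = 0.10   # roughly the rate cited for even the most accurate systems

    per_day = expected_false_flags(daily_scans, false_positive_rate)
    print(f"Expected wrongful flags per day: {per_day:.0f}")        # 500
    print(f"Expected wrongful flags per 30-day month: {per_day * 30:.0f}")  # 15,000
```

Even under these assumed numbers, a system that is "90 percent accurate" would generate hundreds of wrongful flags every day, each one a potential stop or questioning of an innocent student.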


It's even worse for marginalized communities

While facial recognition technology often produces false positives no matter the subject of its surveillance, the issue is amplified for members of marginalized communities, including women, people of color and transgender or gender non-conforming people. "Current facial recognition algorithms are deeply flawed," Erica Darragh, a board member at SSDP, tells Mic. "Studies have repeatedly shown that this technology exhibits systemic racial bias." She compares facial recognition to nuclear and biological weapons, noting they pose "such a profound threat to humanity that any benefits are far outweighed by the inevitable harms."

Given that these communities are already at risk of over-policing and bias, automating and systematizing those failed processes into surveillance technology only serves to exacerbate the problem. A NIST study of facial recognition technology found that even the best systems produce false matches for Black women at 10 times the rate they do for white women. The agency's most recent testing found that most facial recognition systems still suffer from significant racial bias that results in the technology falsely identifying people of color, particularly Black and Asian people. This has been an ongoing issue for people of color, who have been misidentified by facial recognition technology for years, often in incredibly insulting, racist ways. When Google introduced an image tagging algorithm in 2015, it labeled Black people as "gorillas." HP designed face tracking technology for its webcams only to discover that it couldn't identify Black faces. Companies may present these failures as accidents and mistakes, but they reveal a bias built into the technology when it attempts to identify people of color by their facial features. When such tools are deployed as security measures, like those on school campuses, the outcome can be worse than insulting: it can mean misidentifying and incriminating a person of color simply because the technology suffers from bias coded into its programming.

This issue also affects transgender and non-binary people, who are often misgendered by these systems. A study conducted by the University of Colorado Boulder found that while facial recognition systems correctly identify cisgender women 98.3 percent of the time and cisgender men 97.6 percent of the time, they misgendered trans men 38 percent of the time and trans women at about the same rate. Troublingly, the systems misgendered non-binary people 100 percent of the time, because no system has attempted to account for gender identities outside the binary. Being misgendered can be a triggering experience and can invalidate the identities of trans and non-binary individuals. These experiences, particularly when they result in interactions with police or other security forces on the basis of a false identification made by facial recognition technology, can be insulting and strip people of their dignity at a time when they are most vulnerable and at risk.

"There is a lack of evidence that facial recognition technology will keep students safer, and this unproven technology must be evaluated against the risks it introduces," Elizabeth Laird, the senior fellow on student privacy at the Center for Democracy and Technology, tells Mic. "Increased surveillance will disproportionately affect students of color and other underrepresented or underserved groups like immigrant families, students with previous disciplinary issues or interactions with the criminal justice system, and students with disabilities." For these reasons, facial recognition should be viewed not through the scope of a potential security tool, as schools are often looking for, but rather as a potential risk for members of the community. The technology can introduce more challenges for marginalized people who already face discrimination and bias and could suffer from unjust and unfair enforcement at the hands of automated systems.

The invasiveness of facial recognition databases

Facial recognition technology does not work without faces to recognize. Operating these systems on school campuses means entering students' faces into such databases, adding their biometric information to a system they may never have volunteered for. Most schools keep some form of photo identification for students and would have images that could be used to train facial recognition systems to identify them. This kind of invasive data use is often involuntary, resulting in students, young adults and others having their information used without their clear consent.

Having one's face in these systems can also have a dehumanizing effect, according to a study published last year in the journal Learning, Media and Technology. Students are reduced to data points: where they are seen on campus, when they come and go, who they are with. They become the behaviors caught on camera rather than their work and interests in and out of the classroom. Take, for instance, a proposed facial recognition system intended for use in the Lockport School District in New York. That system would not only track students but also store individuals' activity for up to 60 days, allowing administrators or law enforcement to retrace every step of a student during that period. The University of California, San Diego has used facial recognition technology not just to monitor student behavior but also to predict engagement in classes based on facial expressions. Such information doesn't serve to keep students safe, nor is it an accurate way to determine someone's emotional state or performance in class.


A chilling effect on learning

These types of systems, armed with the ability to track individuals via their biometric identifiers, create a permanent state of surveillance. The sense of always being watched can not only make a student uncomfortable; Laird warns it can also have a chilling effect on expression and education. "In the context of a school with a mission to educate all students, deploying this kind of technology may chill expressive activities that are critical for young people's development as well as transform a school from a learning environment to one of surveillance that actually makes students feel less safe," she says. Likewise, Darragh says that surveillance of students hinders "creativity and expression, which is detrimental to learning."

Study after study has found that surveillance has a detrimental effect on free speech. A study of student behavior when cameras are present, published in the journal Frontiers in Psychology, found that many students modify their behavior when they know they are on camera. This has been found time and time again: humans change how they act when they know they are being watched, even when they aren't doing anything wrong. While it may seem like that would result in "better" behavior, that isn't always the case. Sometimes it means stifling what would be genuine expression. According to the Electronic Frontier Foundation, that kind of chilled expression has been observed in students who use school-issued technology that monitors their activity. Similarly, students under the watch of cameras, particularly ones that can identify them by their biometric data, may choose to self-censor or restrict their behavior for fear of how it may be viewed by an authority figure.

There's still a lot to be understood about how students behave and express themselves while under the watchful eye of surveillance systems, but schools shouldn't be the testing ground. "Using facial recognition in schools amounts to unethical experimentation on young people," Darragh argues. "We have no idea what kind of psychological impact this type of surveillance will have on students."

Surveillance doesn't work, so what does?

Despite the severe shortcomings of surveillance in schools and its still unknown effects on learning, it does not seem likely to go away any time soon. Fear of school shootings, sexual predators and other potentially dangerous presences on campus has led school systems to embrace security tools, including facial recognition, to the tune of about $3 billion per year.

The problem is, there is no indication that facial recognition actually makes places like campuses safer. According to the ACLU, surveillance cameras rarely do much to make people safer. Several studies of crime in the United Kingdom, one of the most surveilled countries in the world, have found no evidence that cameras have played a role in reducing crime. Bruce Schneier, a security technologist, told WNYC in 2010, "Cameras don't have a preventative effect on crime, they don't reduce crime rates, measurably. At best, they move crime around." That has been borne out in research finding that cameras have essentially no effect, particularly on violent crime.

Darragh believes the embrace of facial recognition and surveillance technologies on school campuses is driven largely by fear, which giant security firms are willing to exploit in order to sell flawed and insufficient technology. "One of the reasons Fight for the Future is coordinating this campaign now, before many campuses are using the technology, is related to highly problematic and aggressive marketing campaigns by surveillance corporations which exploit the fears of gun violence in schools in order to coerce administrative officials to implement their product," she says. Darragh and SSDP believe there are considerably more effective ways to address these issues, though they would require schools to get to the root causes rather than putting up cameras and allowing them to monitor and scare students into submission. Tackling problems like the "lack of effective and accessible mental healthcare, political extremism, and lack of effective regulation on weapons" would be more useful, according to Darragh. "More appropriate responses would approach this issue with a much wider and systemic lens."

Facial recognition and surveillance technology as a whole is a band-aid over a systemic problem. The technology is flawed, and these systems often fail at their primary goal of keeping campuses and the people on them safe. Schools may have the right impulse in wanting to protect their students and faculty, but it is worth considering that creating a surveillance state may achieve the opposite: putting more people at risk by extending undue suspicion and limiting people's behavior at a time when they should be exploring, creating and learning, unencumbered by the permanent eye of Big Brother watching over them.