San Francisco is the first U.S. city to ban facial recognition tech — but it probably won’t be the last

By Tebany Yune

San Francisco may have a reputation for being a hub for tech companies, but it's also a place where technological advancements swing between helping and harming the community. Although the city has sometimes struggled to find a balance between the two, it has recently taken a notable step toward establishing regulations to protect its citizens. On May 14, San Francisco banned facial recognition technology from being used by the city's police and local government agencies in an 8-1 vote by the San Francisco Board of Supervisors. The ordinance still needs to be formally approved before becoming law, but privacy advocates are already considering this a victory for civil liberties.

What exactly is facial recognition technology?

Facial recognition technology is an A.I.-driven system that identifies the face of a real person and quickly matches it to images in a database. Usually, this means the A.I. scans a face for distinguishing attributes, such as the size and spacing of the eyes and nose, and compares them against existing pictures. The technology is already in use in areas like social media and ID verification. Snapchat, for instance, uses a form of facial recognition to create filters for your selfies, and Apple's Face ID uses it to unlock your devices or confirm payments.

Facial recognition is non-invasive, meaning the subject doesn't have to interact with anything, and discreet, making it perfect for surveillance systems.
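
To make that matching step concrete, here is a minimal sketch in Python using the open-source face_recognition library. The filenames are hypothetical placeholders, and the sketch assumes the library is installed; it is an illustration of the general encode-and-compare approach, not the software any particular city or vendor uses.

    import face_recognition

    # Load a reference photo and compute its face encoding: a list of
    # 128 measurements describing attributes of the face.
    # (Assumes the image contains at least one detectable face.)
    known_image = face_recognition.load_image_file("reference_photo.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Load a new image (e.g., a camera frame) and encode every face in it.
    unknown_image = face_recognition.load_image_file("camera_frame.jpg")
    unknown_encodings = face_recognition.face_encodings(unknown_image)

    # Compare each detected face to the reference; a smaller distance
    # means a closer match. The 0.6 tolerance is the library's default.
    for encoding in unknown_encodings:
        is_match = face_recognition.compare_faces(
            [known_encoding], encoding, tolerance=0.6)[0]
        distance = face_recognition.face_distance(
            [known_encoding], encoding)[0]
        print(f"Match: {is_match} (distance: {distance:.2f})")

A real surveillance deployment would compare each face against a database of thousands or millions of images rather than a single reference photo, which is where the scale and the error rates discussed below become a civil liberties concern.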

The concerns surrounding facial recognition


Facial recognition systems are imperfect. Tests by the MIT Media Lab have shown that the A.I. has difficulty correctly identifying dark-skinned and female faces. Uber's facial recognition system has improperly deactivated transgender drivers who were in transition. Other A.I. experts expressed concern about the use of such technology to the New York Times in April, stating that the lack of regulation leaves it far too open to potential abuses of civil liberties. Additionally, in January, The Verge reported that an ACLU test erroneously matched members of Congress with mugshots. As humorous as that finding might be (those are some easy jokes I won't make), the error rate is alarming for human rights and privacy advocates, who warn that a mistake by law enforcement can be a dangerous or even life-threatening one. This has already become an issue in a lawsuit against Apple, in which the plaintiff claims facial recognition technology erroneously labeled him as a thief.

Some companies behind facial recognition technology, such as Microsoft and I.B.M., have taken the criticism to heart, improving their technology or backing legislation that would put the public on notice when facial recognition is in use in an area. Amazon, which also produces facial recognition tech, has instead repeatedly insisted that the MIT and ACLU tests showed errors because of improper calibration or misuse, and that, properly configured, its systems produce more accurate results.

Other opinions fall between the two stances. According to NPR, some activists believe the technology shouldn't be banned outright; rather than a final and total ban, they say, the nation should place restrictions on it until the failure rate can be reduced. Daniel Castro of the Information Technology and Innovation Foundation is one such voice, arguing that future improvements will make the system more beneficial than harmful.

"We want to use the technology to find missing elderly adults," he said to NPR. "We want to use it to fight sex trafficking. We want to use it to quickly identify a suspect in case of a terrorist attack. These are very reasonable uses of the technology, and so to ban it wholesale is a very extreme reaction to a technology that many people are just now beginning to understand."

The possibility of abuse and racial profiling


Unfortunately, fears of governments using facial recognition against certain races or to profile particular populations are not unfounded. In April, the Chinese government received widespread criticism for using its A.I. to target Muslim minorities living in the country. According to the New York Times, this A.I. purportedly not only tracks minorities like Uighurs and Tibetans, but also uses facial recognition to log individuals' schedules and activities. The publication decried the practice, saying it "makes China a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism."

It's not just China, either. On May 13, the BBC reported on several police forces within the U.K. that were trialing facial recognition systems. The trials have troubled human rights groups because the systems rely on a database of custody images (mugshots) that includes the faces of innocent individuals who were cleared of any offenses. "A 2012 court decision ruled that holding such images was unlawful," stated the BBC, but the images remain in the database to this day and are open to use by automated facial recognition technology.

Facial recognition technology may sound helpful on paper, but critics believe it is still too inaccurate to be used responsibly. San Francisco has taken the first step toward making sure it isn't available for the city to abuse. It remains to be seen whether the rest of California, and other cities and states across the U.S., will soon follow suit.