Could advancements in AI eventually lead to ‘Terminator’-style killer robots?

Tech companies are happy to tout their innovations and latest developments, but one organization is warning that not all advancements are good ones. PAX, a Dutch nonprofit that advocates for peace, recently looked into how the tech sector is handling the development of artificial intelligence and its potential to become an automated destructive force that could turn on humanity. It found that just seven of the 50 companies it investigated follow "best practices" to mitigate the risk of an eventual AI apocalypse. Twenty-one firms, including the likes of Amazon and Microsoft, were marked as "high concern."

PAX's research focused on three questions: Is the company developing technology relevant to lethal autonomous weapons? Is it working on military projects that could enable deadly force? And has it committed to not contributing to the development of autonomous weapons? Companies earned high marks for publicly committing not to help build potentially deadly machines, while those that freely work alongside the military without a clear plan to prevent their technology from being used for lethal purposes received demerits.

Given that, it's not surprising that Amazon and Microsoft sit atop the list of companies that may just push us toward a future filled with killer robots. The two have spent the better part of the last year locked in competition for a massive government contract to build the Pentagon a "war cloud" known as the Joint Enterprise Defense Infrastructure, or JEDI. The project would equip the United States Department of Defense with a cloud infrastructure that would allow branches of the military to freely share information, from sensitive documents to mission plans, across multiple theaters. The appeal for Amazon and Microsoft is clear: the $10 billion contract will be awarded to whichever company can provide the service the government is looking for. But in winning the contract and building the war-enabling technology the military wants, one of these companies will undoubtedly contribute to the deaths of humans. U.S. Department of Defense Chief Management Officer John H. Gibson II has made that abundantly clear, stating publicly that "This program is truly about increasing the lethality of our department."

The criticisms of the companies extend beyond their interest in the JEDI project. Microsoft has taken heat in the past for supplying U.S. Immigration and Customs Enforcement (ICE), the agency tasked with separating migrant children from their families, with "facial recognition and identification" tools. Last year, the company called the separation policy "abhorrent" and said its technology isn't being used to enable those practices, though it stopped short of canceling its ongoing work with ICE or ruling out future contracts with the agency. Microsoft has urged Congress to regulate facial recognition technology before it is put to use in overzealous and potentially harmful ways, so points for recognizing the risk, even if the company is profiting off the technology anyway.

While Microsoft has at least shown some caution in deploying its technology, Amazon has been more brazen in offering up facial recognition services. Earlier this year, Andy Jassy, the CEO of Amazon Web Services (AWS), said the company would offer its technology to "any government department that is following the law." That's a broad standard, and since the government has a powerful hand in deciding what exactly the law is, it reads as Amazon offering its facial recognition service, Rekognition, carte blanche to any agency that wants it. The company hasn't been shy about selling Rekognition to law enforcement agencies across the country despite concerns that it erodes the public's right to privacy. Nor has it been dissuaded from profiting off the technology even though it's actually pretty terrible at identifying people and displays a clear bias when attempting to identify women and people of color. Add the concern that it may one day contribute to the development of automated killing machines, as PAX suggests, and you have a real recipe for something awful.

While Amazon and Microsoft are the headliners on PAX's list of companies of "high concern," they certainly aren't alone. Controversial AI company Palantir was listed as a potential contributor to autonomous killing machines for accepting a U.S. military contract to build and deploy an AI system designed to "help soldiers analyze a combat zone in real time." Palantir has had ties to the intelligence community essentially since its founding and recently re-upped a data-mining contract with ICE despite objections from employees, so its inclusion on PAX's list shouldn't come as a surprise. PAX also called out Canadian company AerialX for creating a "kamikaze" drone called the DroneBullet, which uses machine vision to identify, track, and attack a target. The technology is designed to spot hostile drones and knock them out of the air, but PAX raised concerns that it could easily be adapted for other sorts of autonomous attacks. Finally, PAX warned that Anduril Industries, the AI defense startup of Oculus Rift founder Palmer Luckey, has created technology that could lead to the development of autonomous weapons, though the company denies any focus on such projects. According to PAX, Anduril has worked on technology that gives soldiers a view of the battlefield and could allow them to "direct unmanned military vehicles into combat."

While some companies, in PAX's eyes, are brazenly pushing us closer to the brink of killer robots while lining their own pockets, plenty of voices within the tech community are raising red flags. Despite his many faults, Elon Musk has been a leading advocate for limits on AI to prevent the machines from one day turning on us. He and the heads of Google's AI divisions have signed pledges not to contribute to lethal autonomous weapons, and more than 2,400 AI researchers and experts have likewise committed to avoiding any projects that could one day lead to a Terminator-like outcome for humanity. Others have begun delving into the ethics of AI, working to develop best practices and guidelines that would ideally serve as guardrails for future development and ensure AI projects never go too far toward automating the act of killing.

Unfortunately, for now it seems some companies are more dedicated to their bottom lines than to making sure our existence doesn't come to an end from robotic arms. Whether it's human-made AI that turns on us, human-driven climate change that produces unlivable conditions, or human-made weapons of mass destruction unleashed on massive populations, it seems one way or another we'll figure out a way to wipe ourselves out.