Less than a week after the launch of the international Campaign to Stop Killer Robots in London, the United Nations has released a report calling for a global moratorium on the "testing, production, assembly, transfer, acquisition, deployment and use" of lethal autonomous robotics (LARs) until an international framework on the use of LARs has been agreed upon.
Unlike drones, LARs are robots that can select and kill targets without a human being "in the loop," that is, without a person issuing the command. The report warns that "there is widespread concern that allowing LARs to kill people may denigrate the value of life itself."
Although such technology may sound like science fiction, experts increasingly believe that it could become a reality within 20-30 years. Proponents argue that the use of LARs could save the lives of human soldiers, that robots are not subject to human emotions such as anger, fear, and panic, and that they could be programmed to act only in accordance with the law. Opponents, however, contend that LARs cannot distinguish between civilians and combatants, lack human compassion, and raise serious issues concerning liability for crimes committed.
The video below, from Human Rights Watch (which is calling for an outright ban on LARs), highlights some of the potential issues with this technology:
It is not often that we get a call for public debate on the desirability of a hypothetical technology, and we should make use of this chance to think about whether we really want LARs. Once they are in use, it will be far harder to take them away.
As the UN report points out, "the onus is on those who wish to deploy LARs to demonstrate that specific uses should in particular circumstances be permitted. Given the far-reaching implications for protection of life, considerable proof will be required."