Imagine, if you would, a battlefield with no soldiers. In their place would be the latest robotic inventions. Women and men would sit in rooms, far from any immediate danger, controlling these robots with their minds. Various types of aerial and land-based drones would take over just about every job previously done by a person on the ground. There would be micro-drones to collect intelligence (e.g., audio and video surveillance, detection of chemical or biological agents, DNA sampling); unmanned ground vehicles for explosives detection, engineering, or target acquisition; and, of course, drones of both kinds for direct combat. Many might even function without any human control. If you think descriptions like this are relegated to the world of science fiction, think again.
There are some pretty clear benefits to transitioning to total drone warfare, chief among them that military personnel would not be in immediate danger. In addition, drones can often do things that are difficult or impossible for people (e.g., lifting very heavy objects, surviving explosions, working non-stop for days on end without food or water). But there are a number of problems with this as well.
First, what are the ethical implications of having machines conduct warfare? The more you take the human out of the equation, the more you seem to remove personal responsibility. If a machine kills the wrong person, who is at fault? This becomes especially true if the drone has any autonomy of action. Even on the surveillance and intelligence-gathering side alone, serious questions have been and should continue to be raised regarding civil liberties.
Second, even though these drones can do many things people cannot do, there are some things people do better than robots. In many situations, the decision-making abilities of a person will be superior to those of the drone. Any drone must either be programmed in advance or controlled by a person. If a person is controlling it, that person will have to examine the battlefield through the drone's sensors. Any visual display will necessarily be limited and distorted relative to what a person would experience firsthand. Judging size, distance, direction, and so on would be more difficult for the controller than if they were actually there. While engineers and human factors psychologists have come a long way in making machines more compatible with the cognitive processes of their human controllers, problems will always exist. As just one example, despite much progress over the past few decades in the field of artificial intelligence, no machine has ever been able to engage in metacognition (thinking about thinking). This is problematic in a warfare context, because it means no machine is able to evaluate its own thoughts and decisions. It has no way of knowing if it might be making a bad decision … nor would it care. If a drone is autonomous, even more problems arise. It might encounter a situation it was not programmed for, or it might be confronted with a moral dilemma, both of which would likely be better dealt with by a person.
Finally, what happens when we turn warfare over to the machines, only to see them hacked or disabled by an electromagnetic pulse (EMP)? Drones are at least potentially susceptible to hacking. This could leave a drone completely useless or, even worse, turn it against whoever sent it in the first place. Electromagnetic pulses are bursts of energy that have been shown to damage electrical equipment or disrupt its function. A large enough EMP could render all electronic equipment over a wide area completely useless. If an entire military were composed largely of drones and hit by an EMP, we would then get to see the world's most expensive paperweight collection. Unless and until scientists find a way to protect computers from hackers and electronics from the effects of an EMP, humans will have a large role to play on the battlefield.
Given the expanding use of drones in warfare and the new technologies for controlling them (e.g., with just our minds), it is important that we fully consider the ethics and consequences of their use. Technology has advanced to the point where millions of lives can either be saved or taken based upon the choices that we make regarding its use. At least humans are still good for something.