The phrase “autonomous weapon system” likely brings to mind images of unmanned aerial vehicles, or “drones”. That image, however, is neither sufficiently broad nor sufficiently specific – unmanned weapon systems are not confined to aircraft, and merely being unmanned does not signify a lack of human control. In contrast to a drone, which is typically operated remotely by a live human, an autonomous weapon system is one capable of operating without direct human input or control, particularly in terms of the actual use of force.
The term “artificial intelligence” is something of a misnomer. True AI, capable of thinking and reasoning for itself, has never been achieved; what we call AI today consists of elaborate preprogrammed decision trees, and unanticipated factors will inevitably throw off even the most careful calculations. Complex moral and ethical questions about the use of force cannot be reduced to mathematically “right” and “wrong” answers, yet they must be dealt with nonetheless. A machine is capable of doing only, and exactly, what it is told to do, whether under direct human control or simply running through a preset chain of logic (a point illustrated in the sketch below). In that sense there is no such thing as a truly “autonomous” weapon system, one that thinks for itself and answers for its actions: there is always a human actor who bears ultimate responsibility. The challenge lies in determining who that is.

That challenge is compounded by questions of access and attribution. Both state and nonstate entities have ready access to the necessary components, from firearms to semiconductors, which can be sourced anywhere; a simple robotic “suicide bomber” can be built in a suburban garage. Determining who wrote a given segment of code is rather like identifying the author of a typewritten note: style can give clues about identity, but it can also be imitated. “Proliferation” is not the right word for the problem we face, because this technology, simple and sophisticated alike, is already ubiquitous.
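To make the “preset chain of logic” concrete, the following minimal sketch shows a toy preprogrammed decision tree of the kind described above. It is illustrative only: the sensor names, thresholds, and labels are invented for this example, and no real system is this simple. The point it demonstrates is that such logic executes exactly as written, and an input its authors never anticipated is still forced through the same fixed branches.

```python
# Illustrative sketch only: a toy, preprogrammed decision tree for labelling a
# detected object from two hypothetical sensor readings. All names, thresholds,
# and labels here are invented for illustration.

def classify_contact(radar_signature: float, speed_kmh: float) -> str:
    """Return a label by walking a fixed chain of if/else rules."""
    if radar_signature > 0.8:           # strong return: assume a large vehicle
        if speed_kmh > 100:
            return "aircraft"
        return "ground vehicle"
    if radar_signature > 0.3:           # moderate return
        if speed_kmh > 15:
            return "small vehicle"
        return "person or animal"
    return "unknown"                    # everything else falls through here

# The tree does exactly, and only, what it was told. An unanticipated case, say
# a weather balloon drifting at 120 km/h with a strong radar return, is
# confidently mislabelled, because no rule for it was ever written.
print(classify_contact(radar_signature=0.9, speed_kmh=120))  # prints "aircraft"
```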
A further complication lies in classifying types of autonomy. A landmine or a spring gun operates without thought, based on mechanical input: the unwary victim touches the wrong thing and sets off the device. Drones of various types – land, sea, air, and space – may navigate themselves into position and then transmit images back to human operators, who then select the target and command that a particular action be taken. Point-defense systems, like the active countermeasures found on aircraft and armored vehicles or the rapid-fire close-in weapons systems mounted on capital warships, may trigger automatically upon detecting an incoming threat – but may also trigger unintentionally, causing friendly-fire incidents or civilian casualties. Penetration of electronic systems, and the safeguards against such penetration, can be automated or human-controlled, and distinguishing one from the other is difficult at best, especially when the result is the theft, corruption, destruction, or even hijacking of the computer systems in question. Autonomous weapon systems can potentially incorporate any or all of these features.
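The spectrum just described can be restated schematically. The sketch below is an illustrative paraphrase, not a formal taxonomy: the category names and the single yes/no question are invented for this example, and real systems often blend several of these modes.

```python
# Illustrative sketch only: a schematic restatement of the spectrum described
# above. Category names and the yes/no question are invented for illustration.

from enum import Enum, auto

class ControlMode(Enum):
    MECHANICAL_TRIGGER = auto()   # landmine or spring gun: set off by the victim
    HUMAN_IN_THE_LOOP = auto()    # drone: a human selects the target and commands the action
    AUTOMATIC_DEFENSE = auto()    # point defense: fires automatically on a detected incoming threat
    FULLY_AUTONOMOUS = auto()     # the system selects and engages without direct human input

def human_decides_at_engagement(mode: ControlMode) -> bool:
    """Does a human make the final use-of-force decision at the moment of engagement?"""
    return mode is ControlMode.HUMAN_IN_THE_LOOP

for mode in ControlMode:
    print(f"{mode.name}: human decision at engagement = {human_decides_at_engagement(mode)}")
```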
Bearing all of these factors in mind, member nations are called upon to address the growing problem of autonomous weapon systems. How should they be classified? Should they be banned? Regulated? How can such agreements be enforced between states, or upon nonstate actors? How does one determine who is responsible for the use of a given system in the absence of clear information about its origins or control? Underlying all of this is the question of what happens when a computer becomes truly autonomous. International peace and security depend upon the committee’s answers to these broad questions.