Paul Scharre is the Vice President and Director of Studies at the Center for a New American Security, a former advisor at the Department of Defense on unmanned and autonomous systems, and the author of Army of None: Autonomous Weapons and the Future of War.
The last time I really took a big look at the subject of autonomous and semi-autonomous weapons was back in 2019, and of course, Army of None came out in 2018. So my first question is, essentially, how has the landscape of military applications of artificial intelligence changed since then?
Increasing amounts of automation and autonomy are going into systems of all kinds. Certainly, the technology continues to march forward. I think the legal, ethical, and policy debates surrounding autonomous weapons have not changed dramatically. There has been some progress at the United Nations, through the Convention on Certain Conventional Weapons (CCW), towards states coming up with general principles and guidelines for how they might approach autonomous weapons. The consensus documents include some incremental steps on ideas like the need for human involvement in use of force decisions, which is encouraging from a policy standpoint.
There are some fundamental challenges that remain unresolved from an international diplomacy standpoint. For one, the technology continues to evolve. We've seen the deep learning revolution, which really kicked off in 2012, starting to come into the real world. Even though this is an area where militaries are really lagging behind the commercial sector, militaries have increasingly adopted deep learning and machine learning. However, machine learning has its own set of challenges beyond autonomy, in particular issues surrounding bias and brittleness in algorithms, explainability, predictability, reliability, and lack of robustness. Those concerns are very relevant in any kind of high-risk context, including in the military.
What benefits do you see to autonomous weapons, both from a military perspective and from a societal perspective?
Well, that depends on what we’re talking about. We could be talking about the benefits of just autonomy or automation, the benefits of removing a human from the loop, or the benefits of making fully autonomous weapons. They’re all slightly different. To make an analogy with cars, we could be talking about the benefits of software in automobiles, of adding features like intelligent cruise control, or of a fully autonomous car. Each offers different functionality, so when you say benefit, what do you envision?
There are lots of reasons why incorporating more automation into weapon systems could be beneficial from a military standpoint. In some cases, it will lead to more accurate and reliable weapon systems that are more likely to hit their targets and more likely to accurately discriminate between military and civilian targets. That would make militaries more effective but also make warfare more precise. Another advantage of automation is simply speeding up the targeting process; shortening the time to act offers a lot of military advantages.
There are also advantages to keeping a human involved in targeting decisions. The human brain remains the most advanced cognitive processing system on the planet by far, and even the most advanced AI systems still struggle with novelty. They tend to be brittle and do a poor job of adapting to new situations, whereas human intelligence is often more flexible, more robust to novel situations, and more adaptable. While humans may not have the reaction time that machines have, humans are often going to do a better job of responding to a new situation, particularly one that was not anticipated in the training data. In warfare, having humans involved in the decision-making process adds a layer of safety against accidents and a layer of contextual understanding of what's happening.
There are two compelling reasons to take a human entirely out of the loop. One is speed, in situations where adding a human introduces a delay in responding. There are some narrow examples of this already in warfare. So far, these have all been defensive systems with a human-on-the-loop functionality, where the human is in a supervisory mode and is able to intervene to stop the weapon system, but the system itself intercepts, for example, incoming rockets, missiles, or mortars coming into a base, a ground vehicle, or a ship. In some conditions, forcing the human to observe the threat, make a positive decision, and press a button would not be feasible; it would be too late [to stop the threat]. For example, with active protection systems on ground vehicles, where a rocket-propelled grenade (RPG) or guided missile is coming in, there’s no time for a human to be in the loop. The speed of engagement is just too high.
The other place where you might want to take the human out of the loop is if you had a robotic aircraft that didn’t have reliable communications with its controller. For example, you could have a combat aircraft operating inside enemy airspace, where the enemy could disrupt communications and a human would be unable to verify targets and authorize them for a strike. That would be a situation where autonomy would have some advantages.
What worries you the most about autonomous weapons, semi-autonomous weapons, and machine learning?
I think my primary concern is that, in a rush to deploy more automation ahead of competitors, we'll see militaries take needless risks, give up important and valuable roles for humans in use of force decisions, and take shortcuts on safety. That could lead to less human involvement, and in a wartime environment, there might be less reliability than would otherwise be the case if humans were more involved in the operation or if weapons were better designed and tested.
From a high level view, how do you think fully autonomous weapons and semi-autonomous weapons are playing into the great power competition between the US and China?
At the strategic level, they’re not really that significant. If you look at the way that defense planners in the United States or in China are thinking about their military arsenals, there are certainly a lot of interesting artificial intelligence, robotics, and autonomy projects on either side. There is an increasing emphasis in both the US and Chinese militaries on the “intelligentization of warfare,” and that's going to mean the incorporation of AI and automation into all sorts of functions, but fully autonomous weapons are just not a major feature of strategic planning or investment, especially when you look at actual outlays. To give an example, in the US, former Secretary of Defense Mark Esper said while in office that AI was his number one technology priority. Financially, in terms of investments, it's just not a priority. The F-35 is the number one technology priority of the Department of Defense in terms of what they’re actually spending money on.
There are plenty of examples where we see more automation or autonomy incorporated into weapon systems, into more advanced missiles and platforms, but I don’t see even developmental programs [for weapons] that are intended to be fully autonomous in either the US or China. I suppose it's possible that they exist and are secret, but the open-source evidence suggests that countries are investing in more autonomy in each generation of weapon systems but have perhaps not yet made a decision to go towards fully autonomous weapons. It is certainly not the case that autonomous weapons are driving great power competition. In the US-China case, geopolitical realities and differing visions for what the Asia-Pacific region ought to look like drive great power competition, and military competition flows from that.
You mentioned earlier that some progress had been made towards international agreement on these issues. What would you be looking for, in terms of concrete provisions that would be most effective in any international agreement?
What's most effective is what states adhere to. An overt focus on getting an agreement, a document that states then ignore, is not going to be helpful, especially if leading military powers ignore it. I think the salient question is whether there is any room for agreement between the United States, Russia, and China on constraints, rules of the road, and norms of behavior for how they incorporate AI and autonomy into military systems. The conversation in the CCW is focused on “to ban or not to ban.” That brings a lot of emotion into the picture, but I don't think a ban is a practical reality internationally at this point in time, at least not one that major military powers would get on board with.
I've been encouraged by some of the progress in the CCW’s forums, in discussions about the role of human involvement in the use of force, couched in terms like “meaningful human control” or “appropriate human judgment.” I think that's a potentially fruitful area for moving forward to better understand the role that humans ought to play in use of force decisions.
Senior US military leaders have stated publicly that they would always retain a human in the loop for nuclear launch decisions, even as they incorporate artificial intelligence, autonomy, or automation into a variety of military systems. That seems like a pretty low bar: “Hey, let’s all agree that decisions about launching nuclear weapons should be made by humans.” I don't think that's particularly contentious in the US; I've never heard any serious defense analyst or military professional suggest anything else. But neither China nor Russia has made those statements publicly. Now, perhaps internally they would agree, but that seems like a bare-minimum transparency measure to help states build confidence in how they approach this technology. I’m not talking about a legally or politically binding treaty, just a public statement by a senior Russian or Chinese official acknowledging that they, too, would always maintain human control over nuclear launch decisions. We haven't seen that, and that doesn’t suggest strong prospects for cooperation on more demanding measures.
My last question is about confidence-building measures, and the high-level question here is: How could those be integrated into any international agreement or rules of the road for AI?
There’s a variety of confidence-building measures that states could adopt. Certainly, a statement ensuring human control over nuclear launch decisions is a really easy one, but there are others surrounding the use of autonomous systems in contested areas. One model could be the 1972 US-Soviet agreement [creating a protocol for incidents at sea], for example. Another confidence-building measure could involve greater transparency around internal procedures for ensuring compliance with Article 36 weapons reviews [which requires states to legally review all new weapons to determine if they violate international law]. Many states are not particularly transparent about these reviews, or even about whether they have a formal process for doing them. There are also possible measures around testing and evaluation, or safety and reliability. In all, confidence-building measures could be very helpful if states choose to adopt them.
Cover image: Two U.S. Army officers test quadricopter technology. Photo credit Staff Sgt. John Bainter, public domain, accessed via Wikimedia Commons.