The Ethics and Implications of Modern Warfare: Robotic Systems and Human Optimization

Defense Secretary Chuck Hagel gets a look at the latest high-tech projects being developed for wounded soldiers by the Defense Advanced Research Projects Agency (DARPA), including the ATLAS Robot, demonstrated by engineer Brad Tousley, April 22, 2014, at the Pentagon. Photo credit: AP Images.

The Ethical Questions of Military Technology

In 1139, Pope Innocent II led the Second Lateran Council in banning the use of crossbows in war. At the time, the technology of the crossbow was unparalleled. Requiring only minimal training and little strength, the weapon had a range of up to four hundred yards and an unprecedented deadliness. A lowly and hastily trained peasant could penetrate the armor of a trained knight at the squeeze of a trigger, thereby challenging the traditional power structure of conflict. The Roman Catholic Church, the most powerful political and spiritual order of the time, perceived the technology as a moral abomination and a gross transformation of the nature of warfare. The Second Lateran Council therefore proclaimed in its 29th canon a “prohibit[ion] under anathema that murderous art of crossbowmen and archers, which is hateful to God, to be employed against Christians and Catholics from now on.” The Church’s reaction to the crossbow is an early but illustrative example of how ethical concerns prompted society to reconsider the potentially devastating effects of military technology.

In recent years, the rise of robotics and human enhancement has raised a new variety of ethical questions. The desired effects of progress in military technology include more effective combat, more durable defense, and more rapid resolution of conflict. At first glance, automated fighting and human enhancement appear to be effective ways to achieve these goals. The removal of human fighters from the battlefield, the vision offered by fully robotic warfare, would be the most direct way to reduce casualties. Moreover, human enhancement offers the possibility of yet more efficient and effective combat and defense.

However, robotics and human enhancement also pose alarming prospects of ethical blowback. The depersonalization of warfare lowers the stakes of declaring war in the first place; with regard to international law and the long-term goals of military programs, automated warfare may therefore be self-defeating and even counterproductive. Furthermore, biological and technological upgrades to the human body raise a host of concerns, including risks to health, veterans’ ability to reintegrate into civil society, and the use of enhancements outside of warfare.

Development of Robotics

At a basic level, robots are defined as automated machines with the ability to sense their environment, respond to stimuli, and imitate cognitive processes. A wide array of such machines has already been created for military programs. Currently, militaries employ robots for duties such as surveillance, inspecting high-risk zones, and assisting the wounded. During US counterinsurgency campaigns in Iraq and Afghanistan, the use of robotic systems such as Global Hawk surveillance drones and armed Predator and Reaper drones was widespread. These unmanned systems were successful in hunting down improvised explosive devices (IEDs) but faced challenges of survivability in conflict zones and of cost. By the later years of the wars in Iraq and Afghanistan, however, the drone inventory numbered in the tens of thousands, demonstrating the revolution that unmanned robotic systems have brought to military technology. Beyond the United States, over eighty other countries have used military robotics. Russia and China are two that have made enough progress with automated warfare to prompt concern among Pentagon officials. The Beijing 2015 World Robot Conference showcased three robots with remotely operated assault capabilities, a technology in which the Beijing police department has already begun investing. In this case, small defense contractors, rather than the traditional major manufacturers, led the development of cutting-edge technology. In addition, Russian Chief of the General Staff Valery Gerasimov has announced that the Russian military is preparing to fight on a robotized battlefield in the near future.

The alarmed response to such developments comes from those who envision a world where automated systems are so enmeshed in warfare that they entirely replace human soldiers. Most acknowledge that robotics represents the inevitable future of warfare. Despite these visions, there has been surprisingly limited progress toward truly autonomous robots that could replace human fighters; so far, fully automated robots have played a very limited role in warfare. The Defense Advanced Research Projects Agency (DARPA), the agency of the US Department of Defense responsible for developing military technologies, has funded projects that explore the possibility of robot soldiers. Lacking the artificial intelligence and operational functionality necessary for combat roles, however, these humanoid prototypes remain far from the vision of robotic soldiers that many have in mind.

While initial prototypes may be limited, current technology represents only the beginning of a “promising” future. It seems clear that military technology is moving toward fully automated combat robots. As more research goes into developing the intelligence and autonomy of drones, debate follows on what to do about the natural progression of robotic systems. In December 2014, the US Congress debated the US Navy’s Unmanned Carrier-Launched Airborne Surveillance and Strike (UCLASS) program. These carrier drones would be planes without pilots, the next step in the future of robotic aviation; such unmanned aviation is predicted to emerge around 2025. The capacity for unmanned takeoff and landing, traditionally among the most difficult tasks for a human pilot, has many considering how the natural evolution of automated technology will take shape. Namely, it seems possible that drone technology will shift from automated surveillance missions to bombing and strike missions. For example, the British unmanned system Taranis is exploring options in target-selection software. This push to further automate warfare is driven by the prospect of further removing humans from combat.

In this sense, the development of robotic systems is undoubtedly challenging the human role in warfare. Despite the currently marginal use of autonomous software in actual combat, there is an undeniable movement toward increased autonomy. With it comes an increasing need for humans to direct robotic operations and remain in the loop.

A visitor takes a picture of a weaponized police robot at an exhibit on police equipment. The Beijing police department is currently investing in this remote-controlled technology. Photo credit: AP Images.

As with most nascent technologies, military robotics has its fair share of critics. The depersonalized nature of these innovations has heightened concern among skeptics, who envision a world where robots have the artificial intelligence and software algorithms necessary to make their own military decisions. The ensuing fear that humans may one day be fully removed from wartime decision-making has prompted protests against research into autonomous armed robots. Protesters in Austin, Texas recently staged an anti-robot rally outside a technology and entertainment festival, holding signs reading “Stop the Robots” and “Humans are the Future” that reflected fears over the dangers artificial intelligence could pose to humanity. Elon Musk was reported in October 2015 to have donated US$10 million to the Future of Life Institute, a research and outreach organization that works to reduce the existential risks artificial intelligence poses to humanity. Furthermore, the United Nations recently held its first meeting on Lethal Autonomous Weapons Systems (LAWS). Otherwise known as “killer robots,” such weapons are bringing new questions of ethics and compliance with international law to the disarmament agenda.

Robots in warfare exemplify the tension between the potential benefits of technological progress and the destabilization it can cause. Like Pope Innocent II’s fears of the threats posed by the crossbow, modern anxieties are rooted in the fear of destabilization brought on by a change in the nature of conflict.

Human Enhancement

Many technologies integral to everyday life, from the internet to wireless connectivity to computers, have origins in military research. It is therefore plausible that military research will be the testing ground for how humans expand their capacities beyond what nature endows. Militaries have exhibited interest in human augmentation, including both biological and technological changes, but such modification of humans for military purposes is certain to be ethically controversial.

The US military has spearheaded a variety of research projects that seek to optimize human fighting capacity. DARPA’s Accelerated Learning program, for example, seeks to apply best practices in learning as identified by neuroscience and statistical modeling; evidence shows that specific placement of electrodes on the body can significantly shorten fighters’ learning curves in different contexts. Programs such as Z-Man seek to give humans the ability to scale walls like lizards, which would improve soldiers’ ability to operate in urban areas. Other proposed avenues of human enhancement include abilities as seemingly far-fetched as telepathic communication and eating grass. The possibilities appear endless.

The prospect of humans enhancing their senses, dexterity, stamina, and ability to learn has exciting consequences far beyond the realm of military technology. Nevertheless, resistance exists from those who fear the proliferation of enhanced soldiers, since biologically enhanced fighters challenge the fundamental norms of war. In the context of war, established standards of biomedical ethics become blurred. What military necessity would justify the enhancement of a fighter? If such enhancements were permanent, how would a veteran’s ability to integrate into civilian life be affected? What are the long-term risks of such enhancements? The rationales behind many aspects of international humanitarian law and the laws of war are also upended. For example, much of the rationale behind the prohibition of torture in the Geneva and Hague Conventions may not apply to a combatant who does not feel human levels of pain, stress, hunger, thirst, or tiredness.

Biomedical engineering of the human body poses an interesting possibility that the engineering of robots does not: a fighter with both human reasoning and superhuman ability. It is this vision that shocks and excites the world.

An Inevitable Future?

Technological progress will bring more autonomous warfare into reality. As mechanical and biomedical engineering fundamentally change the norms of warfare, backlash has emerged from those fearful of such change. The evolving nature of modern warfare demands not resistance to the inevitable but careful consideration of technological autonomy and its ethical consequences. Thoughtful regulation and thorough long-term planning for robotics and human enhancement are critical to averting future instability and disaster.

The pace of technological progress in warfare only accelerates, and with each breakthrough, nations grow more willing to embrace robotics and enhancement. Just as Pope Innocent II’s decree failed to stop soldiers from using the crossbow, simple rejection of autonomy and enhancement will do nothing to ensure a stable future of warfare.

About the Author

Veronica Ma

Veronica Ma is a staff writer who mainly contributes to the Features section of the magazine.