Lethal Autonomous Weapon Systems: Issues for Congress

April 14, 2016 (R44466)

Summary

The current research and future deployment of lethal autonomous weapon systems (LAWS) is actively under discussion throughout the military, nongovernmental, and international communities. This discussion is focused, to various degrees, on the military advantage to be gained from current and future systems, the risks and potential benefits inherent in the research and deployment of autonomous weapon systems, and the ethics of their use. Restrictions, if any, in treaty and domestic law, as well as the specific rules governing procurement and use of LAWS by the military, will all rely to varying degrees on congressional action, and likely face future legislative debate.

Although autonomous weapons have historically been an artifact of fiction, recent commercial and military developments are driving widespread consideration of autonomous weapon systems. Military experience and success with semi-autonomous systems make fully autonomous weapon systems increasingly conceivable for military professionals. Moreover, the commercial development of robotics and expert systems (software that models relatively nuanced decision-making by humans during performance of specific skills) potentially applicable to military purposes makes lethal autonomy more attainable. The Department of Defense (DOD) "third offset" strategy (a plan for incorporating advanced technology into U.S. warfighting), with its focus on technological innovation and "outside the box" solutions to manpower and monetary limitations, includes these systems among other elements. Finally, the development of LAWS is perceived as occurring or likely to occur among many potential peer and asymmetric adversaries.

Congress is, or may be, involved in the development of LAWS in many ways. First, because no statute currently governs research, development, or deployment of LAWS, the DOD regulation issued on the subject has become the de facto national policy on military autonomous weapons. Congressional action could clarify DOD priorities in these weapon systems' development. Also, congressional involvement in LAWS may include specific budgetary decisions, as well as overall appropriations. Key nongovernmental organizations (NGOs) such as Human Rights Watch, among others, are urging international action, and—partially in response—the United Nations has been considering lethal autonomous weapons for a number of years as part of its responsibility to consider new protocols under the Convention on Certain Conventional Weapons, the treaty that serves to restrict or ban internationally the use of certain weapons that are indiscriminate or that cause unnecessary suffering.

This report seeks to familiarize congressional readers with some existing semi-autonomous weapon systems and outline the current debate and discussion involving the research, development, and use of fully autonomous systems.


Lethal Autonomous Weapon Systems: Issues for Congress

Introduction1

Many analysts and officials have indicated that this is a critical time in the research, development, and deployment of lethal autonomous weapon systems (LAWS), both in the United States and throughout the world.2 As discussed below, autonomous weaponry may play an increasingly important role in Department of Defense (DOD) plans for continued U.S. asymmetric advantage in combat. Such autonomy, however, also raises numerous concerns and some vocal opposition. These concerns are of three general types: (1) the belief that risks associated with such new weapons outweigh benefits, (2) concerns about whether lethal autonomy violates the international law of war, and (3) doubts regarding the moral propriety of machines making apparently "discretionary" decisions to take a human life. Congress has an important role to play, either as part of the public discourse regarding the future of such capabilities and appropriate policy to address them, or "behind the scenes" via its funding authority and oversight responsibilities.

Role of Congress

Why the "Third" Offset Strategy?

"Offset strategies" are a way of conceptualizing DOD plans and actions taken to establish and maintain asymmetric advantage over enemies—particularly with regard to technology development and weapons employment.

The "first offset" refers to President Eisenhower's New Look Strategy in the 1950s. It sought to counter Soviet conventional superiority with nuclear weapon technology, while allowing continuing cutbacks in the armed forces.

The "second offset" refers to development of precision weapons combined with sophisticated reconnaissance in the 1970s. The disproportionate advantage of the United States due to these battle networks continues today but is viewed as declining in light of enemy responsive technologies and tactics.

See footnote 4, below.

Questions related to the research on, and development and deployment of, lethal autonomous weapon systems (LAWS) have been controversial for many years.3 However, several factors may call for congressional attention and potential action on LAWS at this time.

A nuanced understanding of LAWS and related issues may assist Congress in its role regulating the manning, funding, and equipping of U.S. military forces.4 LAWS are a component of the DOD's ongoing "third offset" strategy. This strategy is one way the DOD conceptualizes and integrates plans for ensuring continued asymmetric combat advantage for the United States, with particular focus on the incorporation of future technologies not easily replicated by competitor states or non-state entities.5 Robotics and autonomous systems have been highlighted by the DOD as a component of this overall future effort of the U.S. military.6

Congress also sets the legal standards for the conduct of United States forces during armed conflict through the Uniform Code of Military Justice, as well as other statutory regulation.7 The use of LAWS involves many moral, ethical, and strategic issues beyond considerations of military advantage. For example, as discussed below, opponents of the development of LAWS argue, variously, that such weapon systems entail unrecognized long-term risks: strategic, such as undesirable escalation or difficulty maintaining control of the technology; legal, such as the inability of LAWS to discriminate between civilian and military targets; and ethical, because they place a machine in position to make a "discretionary" decision about human lives.8 The DOD currently internally regulates the research, development, and deployment of autonomous weapon systems via DOD Directive (DODD) 3000.09, Autonomy in Weapon Systems (2012). In the absence of congressional or executive action, some analysts consider this DOD directive as the de facto policy of the United States on this controversial topic.9

Finally, Congress is an instrumental part of U.S. participation in internationally binding bodies and agreements, both via funding and treaty approval.10 There has been some recent consideration of LAWS at the United Nations via the Convention on Certain Conventional Weapons (CCW), the treaty that serves to restrict or ban internationally the use of certain weapons that are indiscriminate or that cause unnecessary suffering, which the United States ratified in 1995.11 States parties to the CCW and its various protocols agreed in 2013 to a mandate to review issues associated with LAWS, and convened meetings of experts in 2014 and 2015 to discuss these issues. Numerous international nongovernmental organizations (NGOs) view the CCW as the vehicle for advocating for multinational regulation or prohibition of LAWS.12 Senate approval would be required for any potential international treaty or additional protocol to the CCW, and congressional implementation may be required for any less formal agreement on the subject.

Options for Action

A variety of options have been proposed in response to the near-term development and possible appearance of LAWS in the battlespace. The option most often discussed is the proposal to enact a complete ban on research, development, and deployment of "fully" autonomous weapon systems.13 Proponents argue that such bans have been effective in the past—in areas such as biological and chemical weapons, bans have restricted use among major nation-states and thereby slowed development—and could significantly curtail development and deployment, even among nations that do not voluntarily participate in the ban.14

Opponents of the ban argue, in contrast, that a ban would be both undesirable and ineffective.15 They argue that it would be undesirable because of the substantial possibility that research in this area could eventually lead to the development of autonomous weapon systems that are more compliant with the law of armed conflict (LOAC) and other international law than current systems, as discussed below.16 It is also argued that such a ban would be ineffective because of two factors: (1) rapid development of civilian dual-use technologies, such as drone guidance systems and unmanned vehicles,17 and (2) non-U.S. peer development of these technologies.18 It is argued that peers will not agree to a ban, that such a ban will be unenforceable because of the ambiguity of such terms as "fully autonomous," and that such a ban, even if states publicly agreed to it, would be unenforceable without a comprehensive and unlikely enforcement regime.19 Proponents of a ban have noted, though, that similar arguments have been raised with respect to bans on other technologies, such as blinding lasers, antipersonnel landmines, or cluster munitions, that have been negotiated and enforced.20

Another option for action, advocated by both proponents and opponents of a ban, is regulation of the technology—both its development and deployment.21 Proponents of this idea suggest that autonomous weapon development should continue, but international bodies should develop regulatory guidelines, embodied in a binding agreement like a protocol to the CCW, to describe the appropriate contours for the use of autonomous systems.22 In the United States, the only regulatory document currently applying to autonomous weapon systems is the Department of Defense Directive 3000.09—lauded by some in the international community for providing a model framework for testing and basic guiding principles.23 Others note that regulation at the international level can be coupled with transparency, particularly regarding LOAC compliance-testing methods and systems even if the autonomous source code remains secret, to ensure that developed systems are tested as vigorously and broadly as possible, minimizing the likelihood of unexpected decisions.24 In sum, both opponents and supporters of a complete ban on lethal autonomous weapon systems agree that, in the absence of such a ban, LAWS should be regulated and managed.25

Congress would likely play a central role in any such ban or regulation of the technology, in a variety of ways. Of course, if a ban or control regime were developed via international treaty, then ratification of the treaty would require Senate approval.26 However, even in the absence of international action, Congress could set the legal bounds for the process of researching, developing, and deploying such systems within the DOD. Although the DOD Directive discussed above provides current standards for review and regulation of autonomous weapons, Congress could provide additional or alternate standards of review and employment.

Even if Congress does not seek to supplant the specific standards developed by the DOD in its directive, there are opportunities for congressional regulation and oversight. For example, although DODD 5000.01, Defense Acquisition Systems, requires legal review of weapon acquisitions,27 and there are pre-existing procedures for these reviews within each of the individual services,28 the weapons review process may not be adequate to handle the complexity, nuance, and transparency needed for autonomous weapon review. For example, as discussed below, an understanding and review of the nature and reliability of an autonomous system's behavior is required for adequate legal analysis. However, unlike traditional weapon reviews, lawyers making judgments about autonomous systems may require technical insight or even simulation capability29 currently unnecessary (and therefore unavailable) when evaluating more conventionally understood effects (such as explosive radius, etc.). To meet these challenges, weapons review in this and perhaps other areas of emerging technology may benefit from more detailed standardization and centralization, or the provision of additional resources. Furthermore, the complex issues and high international profile of these weapon systems might make it appropriate to require congressional reporting and thereby oversight at intermediate stages in the acquisition and legal review process not always necessary for other weapon systems.

Finally, Congress will almost certainly be presented with regulatory issues that relate to the development and employment of these technologies, but which do not directly relate to the standards of development and use within the United States. For example, Congress may be asked to consider statutory action to assist in LAWS development or prevent proliferation, perhaps by carefully regulating the export of dual use technologies in this area.30 Even in the absence of statutory regulatory action, congressional budgetary action on weapon system funding, as well as areas of research and development, will provide a direction for military development.31

Defining Autonomous Weapon Systems

There are various ways to discuss autonomy in weapon systems. The definitions of the terms, and even the taxonomy of existing systems, are not always consistent among authors on the subject.32 As discussed in the text box, "What is Autonomy?" the synthesized view of the many definitions acknowledges a continuum of "autonomy" in weapon systems based primarily on two factors: (1) the target specificity (the geographic, temporal, and descriptive guidance designating the target of lethal force) provided by human operators when the weapon system is set into motion, and (2) the execution flexibility (scope of potential self-initiated action) in service to assigned goals.33

Both the target specificity and execution flexibility of an autonomous system may vary by conflict, mission, or even individual objective. Therefore, a particular weapon system occupies a range rather than a point within the continuum of autonomy determined by its potential uses, and has a specific degree of autonomy only upon being set into motion with these parameters assigned. Discussions of the "autonomy" of a weapon system as a whole frequently refer to circumstances under which the system acts in a maximally autonomous manner.34

What Is "Autonomy"?

Autonomy, outside of the technical literature, operates primarily as a general term for a variety of concerns involving decision-making and predictability of increasingly computerized weapon systems. The definitions that appear in the non-technical literature generally define autonomy in terms of ethically relevant sub-processes of the system as a whole, such as targeting, goal-seeking, and/or initiation of lethality. On the other hand, those that appear in the design and engineering literature tend to be more specific, technical, and less useful for public discussion of risk/benefit, legal, and ethical issues. Several definitions commonly used are in the Appendix.

Lethal autonomy is frequently defined in the literature solely by whether or not a human makes the targeting decision. However, using human targeting alone as the definition for lethal autonomy may fail on several levels. First, it is over-inclusive, since many weapons considered non-autonomous lack specific selection of persons for death by other persons. For example, consider the firing of a cruise missile at a location identified as a terrorist base or other lawful military target based on previous intelligence reports. If individuals are coming and going from the location targeted prior to the launch or arrival of the missile, in what sense are the specific individuals present in the base designated as "targets" by a person? Similarly, aerial bombardment with "dumb" bombs frequently kills unknown or undetermined persons. Thus, many non-autonomous weapons lack a strong "human targeted" characteristic—one person does not designate another specific person or persons to be killed.

Defining lethal autonomy in terms of human targeting is at the same time also under-inclusive. The presence of specific human-targeting does not seem to completely eliminate the intuition that a weapon system is behaving autonomously. If every person at a designated geographic location is targeted, it does not seem to change the "human targeted" continuum whether an explosive device (such as a cruise missile) or a robot with a gun is used to kill the personnel selected, but the latter clearly seems to be autonomous lethality in the sense identified by many authors.

As an additional example, consider an air-to-air missile fired at an identified group of hostiles. If it selects one hostile from the group as a target, this is generally considered to be part of the initial "targeting" of the group as a whole by the human operator. What if there were also potential civilian targets in the same geographic area, such as airliners, which were evaluated and then rejected as potential targets due to software resident in the missile upon initiation of its selection process? Is this meaningfully different from a robot soldier told to enter a village and identify and kill enemy soldiers, while avoiding civilian casualties—other than in the presumed reliability of the aerial friend/foe determination?

Reviewing the literature, it seems that the perception of relevant "autonomy" is related, on the one hand, to the specificity of the target designation given to the system in geographic, temporal, and descriptive characteristics, or target specificity. Thus, systems that are given a very specific target designation, in time, geography, or other factors, by a person (e.g., air-to-air missiles that pick targets from a specifically designated group or most defensive systems) are not considered "autonomous" in an ethically relevant fashion.

The second element (or intuition) of autonomy present in the literature is execution flexibility, in the sense that systems that have tightly constrained available actions are considered non-autonomous. Consider, as an example of systems with tightly constrained operation, a landmine, trip wire explosive, or defensive gun emplacement versus a robotic tank ordered to guard a perimeter. On the other hand, those devices with limited targeting but broad execution flexibility, such as a robot programmed to hunt down a particular individual in a geographic region, seem to encounter the same risk/benefit analysis and ethical intuitions as the notional "fully autonomous system" or "robot soldier."

Therefore, broad targeting specificity and expansive execution flexibility both tend to result in characterization of autonomous behavior by a system. This is likely because these characteristics raise concerns about the locus of decision-making and predictability of the system, either in reality or in perception.

This convention may be misleading and lead lawmakers and regulators to evaluate autonomy on a per-platform basis, rather than defining permissible and impermissible conditions of employment that apply across devices.35
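To illustrate how these two factors might jointly place a system on the continuum, consider the following minimal sketch. It is purely notional: the numeric scales, scores, and system placements are hypothetical assumptions for illustration, not measurements drawn from this report or from DOD doctrine.

from dataclasses import dataclass

# Notional model of the two-factor continuum of autonomy. All scores are
# hypothetical; a real placement would depend on the employment conditions
# assigned when the system is set into motion.
@dataclass
class WeaponEmployment:
    name: str
    target_specificity: float     # 1.0 = specific human-designated target; 0.0 = broad target set
    execution_flexibility: float  # 0.0 = tightly constrained actions; 1.0 = broad self-initiated action

    def autonomy_score(self) -> float:
        # Broad targeting (low specificity) and broad execution flexibility
        # both push a system toward the "autonomous" end of the continuum.
        return ((1.0 - self.target_specificity) + self.execution_flexibility) / 2.0

examples = [
    WeaponEmployment("Phalanx (ship defense)", 0.9, 0.1),
    WeaponEmployment("Air-to-air missile (designated formation)", 0.8, 0.4),
    WeaponEmployment("Encapsulated torpedo (guarded area)", 0.5, 0.5),
    WeaponEmployment("Harpy (loitering anti-radar drone)", 0.2, 0.8),
]

for e in sorted(examples, key=lambda w: w.autonomy_score()):
    print(f"{e.name}: notional autonomy score {e.autonomy_score():.2f}")

Because the same platform can be employed with different parameters, the sketch scores a particular employment of a system rather than the platform itself, consistent with evaluating conditions of employment that apply across devices.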

The variety of military systems in use that automate some processes, or that include some degree of autonomy, is large,36 and a short survey may help to understand both their ubiquity and the scope of the systems' autonomy as perceived by various parties.

Force Multiplication

Military automation extends well beyond lethal autonomy to force-multiplication technologies, which are not explicitly considered in this report. This includes such disparate capabilities as automated drone flight (including takeoff and landing), auto-loitering capabilities of human-targeted weapons, and automated selection of high-interest imagery for intelligence analysis.37

Even the new Joint Light Tactical Vehicle, a replacement for the Humvee, was planned to be manufactured by Oshkosh—a firm that offers software ("TerraMax") giving its vehicles (including the model sold to the Army) self-driving capabilities.38 Autonomous systems of these types, which do not incorporate independent selection of targets or initiation of lethality, are not themselves controversial but nonetheless create both the technological and doctrinal basis for more hotly debated LAWS.39

Defensive Systems

Another set of systems that incorporate some degree of autonomy along with lethality, but with less controversy, are autonomous defenses. The U.S. Navy, for example, has used the Phalanx system to defend ships against missile attack since the 1970s, with little comment from the civilian community.40 In cases where the ship defense systems recognize an incoming threat that requires a response faster than a human operator is capable of providing, the defense system is empowered to initiate a lethal response without human involvement.41 Likewise, a similar land-based system (C-RAM) has been deployed by the United States at forward-operating bases in Iraq. The C-RAM system, like the Israeli Iron Dome that performs a similar counter-rocket, artillery, and mortar function, can perform its defensive function only by detecting, targeting, and firing in a decision-cycle too fast for human operators to be involved.42

In these defensive systems, the human operator does not designate a specific target and initiate the use of lethal force. While the absence of human control over targeting is often expressed as the break point for autonomous warfare,43 these systems are nonetheless frequently granted either a carve-out from otherwise restrictive regulations (as in the DODD),44 or treated as non-autonomous precursors to genuinely autonomous systems.45 This is likely related to both the high target specificity provided by the "defensive" nature of the weapons46 (targeting predetermined based on a specific set of geographic, temporal, and evaluative characteristics) and the relative lack of execution flexibility47 (these weapons simply shoot down objects that meet strictly defined criteria). These dual factors have led some observers, otherwise highly critical of autonomous weapon systems, to decline even to label them as autonomous—calling them "automated" instead.48

Targeted Lethality

Another area of ubiquitous incorporation of some degree of autonomy is in weapon systems that exercise some degree of execution flexibility but have very high target specificity (see Figure 1). For these weapons, the specific individual target or group of targets (e.g., a specific plane or formation of planes, a specific structure) is designated by a person at the time of weapon initiation. This category includes, for example, cruise missiles, as well as the many air-to-air missiles that choose a target from among those available once launched into position by the human operator.49 Like defensive systems, these types of flexibly executing but very specifically targeted systems are generally considered to raise limited if any risk-based, legal, or moral/ethical issues associated with autonomy.50

Figure 1. Continuum of Autonomy

Source: CRS derived from multiple sources.

Other systems in this category, such as encapsulated torpedoes, have less specific targeting, and thereby have the potential to generate some controversy. An encapsulated torpedo is a stationary "mine" prepositioned in a guarded area that, when activated, targets and fires a torpedo at a hostile ship that enters the guarded area.51 In this case, the encapsulated torpedo shares some of the targeting specificity of defensive systems, but that specificity is reduced by its temporal separation from the human operator (it is prepositioned) and the potentially more nuanced and complex judgments required if the protected waterways are also used for civilian shipping. At the same time, these systems also incorporate the execution flexibility—via the torpedo's action—normally associated with systems featuring very specifically targeted lethality.

Autonomous Systems

A number of existing or proposed systems may already exhibit behavior that might be considered autonomous under generally prevailing standards. The Israeli Harpy system is an aerial drone that loiters in a target area, generally over enemy territory. Upon detecting a hostile radar source, the Harpy drone targets and initiates a lethal strike against that source.52 Because this system initiates lethal force against a target that has not been specifically designated by a human operator, it is plausibly considered an autonomous system by most definitions. Likewise, South Korea has deployed to the DMZ emplaced gun towers with autonomous lethal capacity, although their current operational assignment requires human consent before lethal response can be initiated.53

Key Issues

A wide variety of topics are subject to debate in the policy and academic literature regarding the consideration and development of lethal autonomous weapon systems. Although an exact taxonomy does not exist, the numerous issues under debate can be usefully divided into those regarding (1) risks and potential benefits; (2) legal issues; and (3) moral/ethical concerns (see Figure 2). Although authors' positions vary in terms of nuance, much of the primary discussion centers on whether a ban (international or unilateral by the United States) on the research, development, and deployment of LAWS is appropriate.

Potential Benefits and Risks

Benefits

Capabilities in Military Context

That autonomous lethality provides tremendous potential value in the context of armed conflict is uncontroversial.54 With non-lethal military systems, traditional automation provides an immediate force-multiplier by taking repetitive or analytically arduous tasks and removing the need to hire, train, and support personnel to perform them.55 Autonomous action is more valuable, as complex systems that incorporate tools such as learning algorithms and contextual awareness allow for the "automation" of far more numerous and difficult (in terms of both training and incentive) tasks that require judgment and situational awareness.56 As a simple example, automation of some or all flight requirements of remote-controlled drones, if reliable, would allow for significant savings and multiplication of efforts by allowing remote-control pilots to assume direct control only during the actual operational use of the weapon system—automating the flight to and from the depot.57

In addition, autonomous systems are generally capable of reacting substantially faster than humans. One way to conceptualize the critical element of initiative, as well as overall command and control competence, is the "OODA loop."58 The OODA loop consists of the key steps of (O)bserve, (O)rient, (D)ecide, and (A)ct.59 Under this concept, when considering two opposing forces, whether on the individual, tactical, or strategic level, whichever force has the ability to cycle through these steps the most quickly will control the initiative of the conflict—thereby forcing the opponent to react rather than initiate.60 In practice, this effect snowballs, as the faster force is able to counter-react before the opponent's initial reaction cycle completes, and each cycle of reaction delay drives the opponent more out of synch with appropriate response to the current situation. Some observers assert that the initial reaction advantage of autonomous systems will snowball into a potentially insurmountable advantage in warfare.61
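A toy timing model may make the claimed snowball effect concrete. The cycle times below are hypothetical, and the model is a deliberate simplification: it shows only that the force with the shorter observe-orient-decide-act cycle accumulates a growing lead in completed actions, leaving the slower force reacting to an ever-staler picture of the situation.

# Toy OODA-loop timing model with hypothetical cycle times. Each force acts at
# the end of every completed cycle; the faster force's lead grows over time.
def act_times(cycle_seconds: float, horizon: float) -> list:
    t, times = cycle_seconds, []
    while t <= horizon:
        times.append(t)
        t += cycle_seconds
    return times

fast = act_times(cycle_seconds=2.0, horizon=30.0)  # notional autonomous system
slow = act_times(cycle_seconds=5.0, horizon=30.0)  # notional human-in-the-loop force

for i, t in enumerate(slow, start=1):
    fast_actions = sum(1 for x in fast if x <= t)
    print(f"t={t:4.1f}s: slow force has acted {i} times, fast force {fast_actions} times "
          f"(gap: {fast_actions - i})")

Run over a 30-second horizon, the gap between the two forces' completed actions widens with every slow-side cycle, which is the sense in which the initial reaction advantage is said to snowball.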

Finally, one of the primary concerns with today's non-autonomous remote controlled weapon systems is the problem of both unreliable connections to the remote pilot62 and the possibility of enemy interference.63 Like the presence of an on-board pilot, autonomous action by the weapon system itself minimizes the requirement for continuous communication and the possibility for enemy interference with control signals during deployment.64 Although the presence of software-driven decision-making raises the possibility of the enemy "hacking" autonomous control systems,65 it is unclear to what degree this risk is substantially greater than that already posed with modern non-autonomous weapon systems, almost all of which rely on sophisticated computer controls and, frequently, network communication.66

However, these potential advantages are counterbalanced, even for many of those who do not support an outright ban, with concern for operational risks involved in LAWS development and deployment.67 While these risks are discussed in more detail below, they include the possibility that programming error, novel situations, or adversary activity could lead to a loss of control or predictability.68 Unlike idiosyncratic human decision-making, software control systems may be replicated across the fleet of LAWS, and so the damage potential of a simultaneous failure by all similar LAWS in the inventory must be considered, not only the consequences of a single system failure.69 This could result in disproportionately high damage versus human-controlled or only partially autonomous systems, with consequences including mass fratricide or undesired escalation of conflict.70

Leverage Civilian Technology

Focus on lethal autonomous weapon systems may also benefit the United States because it capitalizes on current advances in civilian autonomous technology.71 The United States is a global leader in this area,72 and one of the imperatives of military technology is to maximize areas where an asymmetric advantage is available that is difficult for opponents to replicate.73 Synergistic technologies of stealth, reconnaissance, and precision weapons developed by the United States gave substantial and persistent advantage to the military precisely because these technologies represented areas of U.S. leadership and were difficult for opponents to replicate.74 Furthermore, investment by the United States in these areas of research and development will likely drive development of industrial capacity and commercial development in a virtuous cycle.75 Military and civilian developments in autonomous capability could therefore have a symbiotic relationship.

Translation of civilian developments in autonomy into weapon systems, however, may also result in an "arms race" dynamic, where competitor states are forced to invest in LAWS to retain military competitiveness; it could allow for the proliferation of lethal autonomy to entities, such as sub-state actors, who lack the organic R&D to otherwise develop such systems.76

Potential Improvements in Ethical Warfare

Many authors, both opponents and supporters of a ban on LAWS, have highlighted the potential benefits of autonomous technology for ethical warfare in the sense that they could facilitate compliance with the law of armed conflict—at least in some areas.77 LAWS as currently conceived are not susceptible to emotional effects, such as shock or anger, that may result in abuses by human soldiers.78 Finally, the presence of LAWS in mixed teams with human soldiers, particularly if LAWS have independent capacity to judge ethical conduct, may restrict the willingness and ability of those soldiers to engage in inappropriate or unlawful conduct.79

In addition, introducing autonomous weapon systems into an environment where all or almost all of the potential targets are lawful, or have already been vetted, seems to potentially provide humanitarian benefits.80 For example, if the alternative is between introducing a lethal explosive device or a lethal autonomous system with some capability to avoid accidental or collateral casualties, the LAWS would likely be clearly legally and ethically desirable—even if the system's ability to distinguish non-combatants is unreliable.81 In this sense, autonomous decision-making at the moment of lethal action may be an improvement on the precision of weapon systems, eliminating some of the error created by imperfect intelligence and distance in time between the initiator and target.82

However, these proposed benefits are questioned by many, including supporters of a ban, arguing that such "better ethical decision-making" technology does not exist83 and is unlikely to ever exist.84 There are also concerns that ethical decision-making would not be employed by potential state and non-state opponents of the United States in a prospective arms race, even if the United States reliably employed it.85 The extensive legal and ethical critique of autonomous weapon systems arising from these questions is discussed in more detail below (under the "Legal Issues" and "Moral/Ethical Issues" sections).

Risks

Likelihood of War/Jus Ad Bellum

A common concern regarding the development of LAWS is that it will encourage inappropriate aggression.86 The justification for initiating armed conflict is generally described by the concept of jus ad bellum, or Just War theory.87 However, although sometimes couched as such, the concern that LAWS will lead to more warfare is not actually a legal one, since use of LAWS does not affect the legal evaluation of the propriety of war initiation.88 Rather, the argument is that LAWS would create a moral hazard for national leadership. This presupposes that current or future leaders desire to engage in unlawful war-making but are inhibited by the likelihood that it will result in military casualties, either for moral reasons or because of spin-off effects of those casualties.89 If these suppositions are accurate, then LAWS would appear to increase the likelihood that leaders would engage in unlawful aggression, since it would minimize these casualties.

Some argue, however, that this objection seems excessively generic.90 They contend that any weapon system that minimizes casualties, or gives a substantial advantage to one side in armed conflict, would trigger this same moral hazard.91

Uncontrolled Arms Race

Another potential risk of the development of LAWS that has been noted is that it will trigger wider arms races.92 This argument takes two forms. First, that because of the tremendous tactical advantage associated with the development of LAWS, peer and near-peer competitors will be forced to develop autonomous capabilities for their own weapon systems.93 Second, asymmetric competitors, such as international terrorist organizations, would have access to the technology once it becomes widely used in warfare.94 For both of these versions of an "arms race," one harm contemplated, in addition to the inherent instability associated with arms race dynamics,95 is that competitors will have either less incentive or less capacity to control the behavior of LAWS, resulting in development or fielding of LAWS that fail to comply with the laws of war (generally, this is conceived as competitors developing indiscriminate LAWS, since automation is far easier to accomplish than discrimination or ethical decision-making).96

A number of counterpoints have been presented to this risk. First, many contend that an arms race is already in progress, with peer and near-peer competitors currently developing autonomous weapon systems—regardless of U.S. development of these systems.97 It is argued these nations would refuse to adopt, or successfully evade enforcement of, any potential multilateral ban.98 Second, it is argued that asymmetric competitors may be capable of taking advantage of technological development, particularly civilian sector advancements, even if not actively developed for military purposes by nation-states.99 Under this argument, once the basics of autonomy in machines are developed for civilian purposes, weaponization of these autonomous systems is relatively trivial.100

Asymmetric Warfare

Another risk associated by some with the development of LAWS is an increased likelihood of attacks on civilian targets, particularly in the United States itself.101 The argument is that the development of LAWS will result in the absence of U.S. soldiers from the war zone. Enemies of the United States, it is argued, will see no political/strategic benefit in attempting to fight, or carry out attacks on, autonomous weapon systems if the United States is not suffering human casualties. The opponent, under this argument, is therefore incentivized to carry out attacks on civilian rather than military targets.102

Counter-arguments presented by others include at least one also raised in the "Likelihood of War/Jus Ad Bellum" section above: any generic technological advantage that makes U.S. service-members less susceptible to enemy attack appears to create the same risk.103 In the same vein, a DOD analyst has noted that this argument essentially "blames the victim," by discouraging protection of soldiers because of the enemy's presumed willingness to violate the laws of war by assaulting civilians.104 Finally, it has been pointed out, considering the history of nuclear strategy as well as terrorist targeting, that both peers and asymmetric opponents are not generally reluctant to place civilians in jeopardy if it serves strategic ends, and therefore the presence or absence of U.S. casualties away from the battlefield is irrelevant.105

Hacking/Subversion

Another perceived risk with the use of autonomous weapon systems is that reliance on autonomous systems increases the military's vulnerability to hacking or subversion of software and hardware.106 The replication of software, as well as the complexity and interdependence involved with widespread use of autonomous weapon systems could also significantly magnify the harmful impact if a security vulnerability or exploitable system malfunction were discovered by an adversary.107 Potential consequences could include mass fratricide, civilian targeting, or unintended escalation (as discussed under "Loss of Command/Control" below).108 One response to that argument, however, is that "on-board" autonomous capability may counter subversion or hacking of current and future remote systems.109 Also, even weapon systems that do not include autonomous capabilities rely on computer hardware and software. This automation is no less susceptible to hacking and subversion, and the presence of autonomy may make a system more resilient than an equally computerized but less internally controlled non-autonomous weapon system.110

Loss of Command/Control

Another risk discussed in the literature is the possibility that large-scale adoption of autonomous weapon systems may result in "run-away" escalation that results in warfare that otherwise would not have occurred.111 When considering this possibility, some of the military advantages of autonomous systems become disadvantages. First, the complexity, interdependence, and flexibility that allow a system to perform complex mission sets may result in unpredictable and unintended lethality.112 In addition, some have maintained that the danger of uncontrolled escalation is significantly greater precisely because the speed with which LAWS are capable of decision-making and action—one of their primary military advantages—creates a significant time delay between failure and corrective action.113 Some analysts of LAWS argue that in an environment with multiple autonomous systems—likely on both sides of a tense, armed confrontation—armed conflict may begin without either party intending it because of an initial error snowballing into a full-scale response, triggering an automated response in a vicious cycle.114

The counter-argument is that there is nothing inherently more destructive about autonomous weaponry; it is simply conventional weaponry directed by an autonomous system. Because of this it is not clear why autonomous systems are more susceptible to inadvertent escalation than humans under the same circumstances.115 Some also question the plausibility of a scenario in which numerous free-ranging autonomous weapon systems come into contact with one another while empowered to engage in lethality independent of human tasking or authorization.116

Judgment Errors/Accuracy

The final, and frequently primary, risk perceived by many is in the area of reliability and predictability. For various reasons, almost all involved in LAWS analysis recognize difficulties inherent in ensuring reliable decision-making.117

Approaches to Artificial Intelligence (AI) that May Affect LAWS

In both military and civilian development of AI, the way that designers and developers model the human brain and its decision-making process has implications for the legality and morality of LAWS. One approach is "top down": treat systems like digital computers, programming all the rules of intelligence from the very beginning. In this, the parameters of decision-making would be part of the programming of the system. Another approach, seeking to mirror the processes of the human brain, is using neural networks in a "bottom up" manner: instead of being programmed with the rules of intelligence, these networks learn the way a human baby learns, by trial and error. The use of neural networks may find its application in military weapons through swarming: large numbers of relatively small weapons, with synchronized actions, such that the swarm reacts faster than its opponent and defeats it.118

If LAWS are developed in a "top down" manner, designers may be able to program strict command and control limitations to ensure compliance with law, or to at least provide humans a "kill switch" to abort a potentially illegal or immoral mission. "Bottom up" developed weapons, while smaller and simpler, may potentially be more difficult to design to incorporate a human override function.
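As an illustration of why a "top down" design lends itself to hard limits and a human override, consider the following sketch. It is entirely hypothetical: the class names, the particular rule checks, and the kill-switch mechanism are illustrative assumptions, not an actual DOD or vendor design.

from dataclasses import dataclass

# Hypothetical "top down" command-and-control gate: every rule of engagement is
# programmed in from the beginning and checked before any lethal action, and a
# human operator retains a kill switch. A "bottom up" learned policy would not
# necessarily expose comparable, inspectable checkpoints.
@dataclass
class TargetAssessment:
    inside_assigned_area: bool        # geographic limit set at launch
    within_mission_time_window: bool  # temporal limit set at launch
    identified_as_military: bool      # distinction criterion
    collateral_estimate_ok: bool      # proportionality threshold

class AbortSignal(Exception):
    """Raised when the human operator uses the kill switch."""

class EngagementGate:
    def __init__(self):
        self.human_abort = False  # a human operator may set this at any time

    def authorize(self, target: TargetAssessment) -> bool:
        if self.human_abort:
            raise AbortSignal("human operator aborted the mission")
        # All hard-coded conditions must hold before lethal action is permitted.
        return (target.inside_assigned_area
                and target.within_mission_time_window
                and target.identified_as_military
                and target.collateral_estimate_ok)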

Proponents of a ban generally take the position that the decision-making of an autonomous weapon system is fundamentally or irreducibly unpredictable, thereby obviating the need for research to determine future reliability.119 For example, some argue that because no software can include an exhaustive description of all possible circumstances, it is impossible for an autonomous system to behave predictably outside highly controlled circumstances.120 Others argue that the technology required for flexible autonomous operations will, by necessity, be based on learning or self-altering algorithms, which may develop unpredictable behavior patterns invisible to the original designers.121

Some experts, however, believe that an autonomous decision-making system may plausibly reach a level of reliability and predictability comparable to a human soldier.122 The proponents of the technology, at least in theory, tend to argue that requiring absolute or logically certain predictability from LAWS holds it to a higher standard than that applied to humans and risks failing to use a potentially more reliable system because it is not perfectly reliable.123

The question of decision-making performance is, however, inextricably linked to a large number of disputes regarding the legality of LAWS. The nature and performance of the autonomous system in making critical decisions about the propriety of the use of lethal force are the central issues of the next section.

Legal Issues124

The areas of legal contention regarding autonomous weapon systems are twofold. The first is the weapon system's ability to comply with U.S. obligations under international humanitarian law (IHL) and rules of engagement.125 This is essentially an operational concern: "Will the functioning of the weapon systems comply with the appropriate requirements?" The second concern is less focused on function and more focused on accountability. This concern centers on whether the use of LAWS will make it more difficult to hold parties responsible for misconduct in the course of armed conflict.126

Operational/Functional Laws

Various authors have pointed to three primary areas of operational law that may affect consideration of LAWS. First, there is the set of legal norms covered by the concept of jus ad bellum, which is the law governing the appropriate justification for the initiation of armed conflict.127 Second, there is the body of law classifying weapons as lawful or unlawful. Finally, all parties discuss the laws governing conduct during war, or jus in bello.128 Jus ad bellum is addressed in the "Likelihood of War/Jus Ad Bellum" section above, because the relevant debate with respect to autonomous weapons has more to do with the perceived risk of moral hazard than legal justification for the use of force.

Weapons Law

A fundamental tenet of the international law of armed conflict is that "the right of the parties to an armed conflict to choose methods or means of warfare is not unlimited."129 Specifically, it is prohibited to use weapons or projectiles in such a manner as to cause superfluous injury or unnecessary suffering, or to use means or methods of warfare that are "intended to or may be expected to cause widespread, long-term, and severe damage to the natural environment."130 Under Article 36 of Additional Protocol I to the Geneva Conventions, states parties are also obligated to undertake legal reviews of new weapons systems under study, development, or acquisition, "to determine whether [their] employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party."131 While the United States is not a party to Additional Protocol I, it is one of the few states to have adopted a formal program to review weapons and weapon systems for compliance with international legal obligations.132

A weapons evaluation for compliance with the laws of armed conflict considers first whether a weapon is prohibited per se, or prohibited under all circumstances, under the law of war.133 This status attaches to weapons that are banned pursuant to treaty as well as to weapons that cannot comply with legal requirements under any circumstance or method of use.134 The two principal legal requirements are, first, that the weapon does not cause suffering or injury beyond that required for a military purpose.135 For example, the use of glass ammunition is prohibited, without further evaluating the specific circumstances of use, because its use is considered to inflict unnecessary suffering.136 Second, weapons must be capable of being employed in a fashion to distinguish between military and civilian targets (which might be impossible because of an incapacity to target accurately or control effects).137 For example, a cyber-weapon that, when deployed, could not be prevented from doing uncontrollable collateral damage to civilian infrastructure would likely be illegal per se.138 Weapons are evaluated considering their normal or expected use rather than any conceivable use (or misuse).139

Although some proponents of a ban on LAWS argue that such systems are per se illegal on the basis that they can never adequately distinguish between lawful and unlawful targets,140 opponents argue that this assertion ignores many lawful use scenarios.141 They point out that even "dumb" bombs are not per se illegal, since they can be used under circumstances in which civilians are not present; for example, to target a group of tanks in a desert area.142 Likewise, even autonomous weapons without any capability to distinguish between combatants and civilians might be used under limited circumstances in combat zones without noncombatants.143 The resolution of this disagreement seems to turn on the likelihood of any scenario in which LAWS can perform at least equal to a human,144 with opponents of a ban pointing to the uncontroversial current use of "over-the-horizon," or sensor-based, targeting as an analogy,145 and proponents of a ban arguing that these scenarios are extremely limited or unlikely.

The second aspect of a weapon evaluation is based on the specific proposed uses of the weapon. In this case, each of the proposed uses of the weapon must be evaluated for the weapon system's compliance—under those sets of circumstances—with the law of war.146 This contextual evaluation primarily relies on the weapon system's ability to comply with the principles of distinction and proportionality during actual operational use.147

Law of Armed Conflict/Jus In Bello

Although a variety of "principles" form the basis of the law of armed conflict (the DOD identifies five),148 most authors considering autonomous weapon systems have centered their consideration on the foundational principle of distinction and its related principle of proportionality.149 The requirement to take feasible precautions is also frequently mentioned, but this issue seems to have generated little meaningful debate.150

Distinction is the requirement that warring parties distinguish between military and civilian objects and personnel during the course of conflict, and is considered customary international law.151 As Article 48 of Additional Protocol I to the Geneva Conventions puts it, "[i]n order to ensure respect for and protection of the civilian population and civilian objects, the Parties to the conflict shall at all times distinguish between the civilian population and combatants and between civilian objects and military objectives and accordingly shall direct their operations only against military objectives."152

The primary concern, as discussed in the "Judgment Errors/Accuracy" section above, is that LAWS will simply be unable to distinguish between combatants and civilians.153 This inability is considered, by all sides of the debate, to be a particularly acute concern in the context of irregular warfare.154 In these conflicts, combatants may be embedded within the larger civilian environment, which creates extremely complex decision-making scenarios.155 As an example, one author offers the case of an autonomous robot that performs a house-to-house search for combatants and encounters an individual running toward the robot, screaming, holding something metallic in his hand.156 One can certainly imagine circumstances in which entry into a civilian home would result in an agitated reaction from residents, and there are many objects that even humans are unable to quickly and effectively distinguish from weapons.157

In addition, because LAWS lack empathy or human emotion, some authors argue that LAWS are currently, and will remain, unable to effectively determine the intentions of individuals on the battlefield. As a result, LAWS will be unable to effectively distinguish between combatants and noncombatants,158 particularly, it is argued, in complex situations involving non-civilian noncombatants, such as surrendering, wounded, or otherwise incapacitated fighters.159

Defenders of the technology, at least in terms of its potential, point out that future autonomous weapon systems may be more capable of distinguishing between combatants and civilians than human soldiers.160 LAWS' capabilities are not degraded by the same stress and emotional intensity that may affect the judgment of soldiers in combat. Moreover, because LAWS have no need for self-defense, they can respond more tolerantly to ambiguous circumstances than similarly situated soldiers, for example by delaying their response to "threatening" actions until the initiation of active hostility.161 In addition, governments interested in improving the accuracy of distinctions made by such systems could employ shared standards of testing, as well as leverage the benefit of evaluation by ethicists of complex or difficult distinction decisions.162

Others argue that LAWS will still be useful in high intensity conflicts, even if they never perform to a level permitting operation in combat zones that contain a significant number of noncombatants.163 For example, in a combat zone without noncombatants, a rule of engagement might allow any vehicle identified moving in an area of enemy encampment to be struck by a barrage of indirect fire from ship-based guns or "dumb" bombs dropped from the air. LAWS activity to target vehicles within this zone would have relatively low requirements to match human decision-making in similar circumstances.164 As long as LAWS are limited to these circumstances, their ability to perform extremely nuanced judgment tasks seems less relevant.165 Opponents counter that LAWS will inevitably be used outside these circumstances once available for operations because of the military advantages they provide.166 Whether or not this is true for U.S. military activities may turn on the criticality of the interest that the U.S. military force is protecting. It is clear that U.S. political and military leaders are willing to impose restrictions on military operations in many cases (e.g., Syria, Afghanistan); however, they may be less likely to maintain such restrictions if they believe the United States faces an existential threat. Analysts on both sides find the inappropriate use of LAWS by near-peer or non-state actors to be likely.167

Proportionality

Proportionality is the requirement that military action not cause excessive damage to civilian lives or property in relation to the military advantage to be gained from the action.168 Articles 51 and 57 of Additional Protocol I to the Geneva Conventions prohibit attacks that "may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated."

Many argue that the proportionality judgment required by this rule is fundamentally beyond the capabilities of an autonomous system.169 "Military advantage" is perceived to be an inherently complex and flexible value, not susceptible to simulation by an autonomous system.170 When considering the allowable collateral impact of a single action (e.g., the dropping of a bomb), proportionality requires an understanding and integration of the surrounding circumstances of the immediate battlefield, as well as an overall strategic understanding of the goals of the military action in question.171 The balance required in determining whether the collateral impact is "excessive" is argued to embed an inherently human judgment, as it relies upon the "reasonableness" of the determination. This "reasonableness" test, which forms so much of the basis for judging the legal propriety of human behavior, is a rough-and-ready appeal to the human faculty of common sense and shared human values that is argued to be fundamentally inaccessible to LAWS.172

Others who oppose a ban envision an autonomous weapon system in which the commander who sets the LAWS in motion makes an initial judgment about whether accomplishing the mission goals programmed into the system is worth the collateral impact expected to result from its activation.173 This judgment would account for the established likelihood of unexpected action by the LAWS.174 The operational evaluation of military advantage, and the allowable level of collateral impact, could thus be determined in advance; at the time of the attack, the system would need only a sensor-based judgment estimating the likely collateral impact to compare against the preset cutoff for aborting the action, rather than making the proportionality determination itself.175 While some critics have pointed out that such judgments are time-sensitive and cannot simply be preprogrammed,176 others have responded that ensuring the reliability of these judgments simply requires setting time limitations as part of the mission framework for LAWS employment, so as to avoid the "aging" of the military advantage evaluation.177
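
A minimal sketch of this pre-set logic, under the assumptions just described, might look as follows. Every name and threshold here is hypothetical, not taken from DODD 3000.09 or any fielded system; the point is only that the value judgments (the cutoff and the time limit) are the commander's, fixed in advance, while the machine supplies a sensor-based estimate at the moment of attack.

    from datetime import datetime, timedelta

    # Purely illustrative sketch of the pre-set proportionality logic
    # described above. The commander fixes the collateral cutoff and the
    # validity window before activation; the LAWS contributes only a
    # sensor-based estimate of likely collateral impact at attack time.

    def strike_authorized(estimated_collateral: float,
                          collateral_cutoff: float,
                          activated_at: datetime,
                          validity_window: timedelta,
                          now: datetime) -> bool:
        """Abort unless the commander's pre-set cutoff and time limit hold."""
        if now > activated_at + validity_window:
            return False  # the military-advantage judgment has "aged out"
        return estimated_collateral <= collateral_cutoff

    # Example: the strike is aborted once the two-hour validity window lapses.
    t0 = datetime(2016, 4, 14, 4, 0)
    assert not strike_authorized(0.0, 1.0, t0, timedelta(hours=2),
                                 now=datetime(2016, 4, 14, 7, 0))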

Opponents of a ban on LAWS have also pointed out that collateral damage estimates for current weapon systems are regularly made using objective data and scientific algorithms.178 It is also argued that many circumstances in modern warfare involve individuals executing the action (e.g., dropping the bomb, firing the missile) with little or no capability to assess the specific conditions of the target immediately before its destruction, and thus with no opportunity for an instantaneous proportionality assessment.179

As noted above, the commander who sets the LAWS in motion plays a critical role in the legal responsibility for its resulting action. However, questions have been raised about whether that commander, or any other individual, could be held appropriately accountable for "war crimes" committed by such a weapon system.180 These concerns are further discussed below.

Accountability and Liability

Proponents of a ban on LAWS have raised a number of legal objections relating to the chain of accountability for the actions of these systems. Because machines are not ethical actors, proponents of a ban argue that LAWS cannot meaningfully be "held responsible" for decision-making.181 As a result, if an autonomous system decided to carry out an action illegal under the laws of war (a "war crime"), holding someone responsible for that decision would be difficult or impossible.182

Opponents of a ban counter that there is a long tradition of command responsibility for the actions taken by subordinates.183 They also point out that if the LAWS were intentionally designed or manufactured with the purpose of being used to commit war crimes, or with reasonable knowledge that they would be so employed, then the designers or manufacturers would have criminal liability.184 Likewise, if LAWS were used by a commander with the intention to commit a war crime, then the commander could likely be held responsible for that crime.185

What Is a "Decision to Kill"?

There is a moral argument that derives from the notion that autonomous weapon systems should not be making a "decision to kill" a human being. However, some authors have raised questions about whether autonomous weapons would change the status quo, arguing that the "decision to kill" is not made autonomously by any actor—human or machine—but is already a complex human-machine decision-making process with diffuse responsibility. Under this argument, each individual in a conflict provides only a component of the overall decision-making about the use of lethal force—no soldier or device is "fully autonomous." Current decision-making about lethal force, in the absence of autonomous weapons, frequently employs automated "friend/foe" determinations and targeting beyond visual range. Selection of mission targets arises from multi-person analytic processes, computer-assisted evaluations of collateral damage, and more or less restrictive rules of engagement (ROE) set by leaders. Does a bomber pilot assigned to destroy a target, or a sailor launching a cruise missile, make a morally meaningful "decision to kill"? Even a soldier assaulting a position is acting in compliance with orders, as well as standing rules of engagement, that may demand a more or less lethal set of actions by the soldier.

Proponents of the ban argue that war crimes are most likely to occur as a result of an unintended action by the autonomous system, not as an element of deliberate design.186 Although commanders are responsible for reasonably foreseeable actions of subordinates, these authors argue that commanders, designers, and manufacturers will be excused from such responsibility because of the fundamentally complex and unpredictable nature of autonomous decision-making.187 In this view, victims of war crimes committed by LAWS will lack redress, creating a fundamental lack of justice and responsibility associated with the weapons.188 For this reason alone, some argue, LAWS should be banned.189

Opponents of the ban note that soldiers ordered to perform an otherwise lawful mission could commit war crimes as well.190 Ban proponents note that this still leaves someone criminally responsible for the misconduct,191 but opponents counter that this analysis places an excessive focus on individual criminal liability.192 They point out that the law has effectively managed responsibility in a variety of circumstances involving less than fully predictable outcomes, such as the law governing pet behavior or negligence.193 Moreover, the law of state responsibility would seem to allocate legal responsibility, and an obligation to provide appropriate redress, to the belligerent state employing the LAWS, arguably making the establishment of individual culpability less urgent.194

The question of whether noncombatant victims of LAWS-related violence—whether collateral or accidental—can receive justice leads to a larger question about the moral propriety of LAWS.

Moral/Ethical Issues

The potential for autonomous weapon systems to make decisions about whether to take human life has generated discussion of risks and benefits, as well as legal concerns, but it has also raised more fundamental questions. Some, including Christof Heyns (the United Nations Human Rights Council Special Rapporteur on extrajudicial, summary, or arbitrary executions), have indicated that the very notion of machines making the decision to take a human life is morally problematic.195 As some describe, human dignity is at the core of the international law of human rights.196 They assert that allowing a machine to make an independent judgment to take a life negates that dignity.197 Others argue that allowing machines to make the decision to kill treats human beings as objects and denies their fundamental moral status.198

Opponents of a ban argue that this moral intuition rests on excessive anthropomorphism of the autonomous weapon system, an analogy to human reasoning very unlikely to accurately reflect military technology within the foreseeable future.199 In their opinion, even a non-deterministic LAWS (e.g., one using a flexible learning algorithm) is not making a "decision" in an ethically meaningful sense any more than is an air-to-air missile or a Patriot battery.200 Under this view, the relevant decision to kill is made by the commander who assigns the LAWS its mission, sets limits in time and space, establishes the rules of engagement, and sets the LAWS into motion.201 As discussed above, still other authors accept the LAWS as a decision-maker in a morally relevant sense but argue that, when deployed, it will make better ethical decisions than a human soldier.202

Figure 2. Taxonomy of the Debate

Source: CRS derived from multiple sources.

Appendix. Definitions of Autonomy

From HRW & IHRC, Shaking the Foundations, p. 1, "Fully autonomous weapons ... would identify and fire on targets without meaningful human intervention."

From Wallach and Allen, "Framing Robot Arms Control," p. 126, "Autonomous action by a robot includes any unsupervised activity."

From Anthony and Holland, "Governance of Autonomous Weapons," p. 424, "Contention issues centre on the weapon's adaptive capacity to make contingent discretionary decision and – in relation to those decisions – if, and at what point, a weapon is under human supervision."

From Heyns, Report of the Special Rapporteur, paragraph 38, "... robotic weapon systems that, once activated, can select and engage targets without further intervention by a human operator. The important element is that the robot has an autonomous 'choice' regarding selection of a target and the use of lethal force."

From Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 5, "What makes understanding autonomy so difficult is that autonomy can refer to at least three completely different concepts: * The human-machine command-and-control relationship * The complexity of the machine * The type of decision being automated."

From ICRC, Report of the ICRC Expert Meeting, p. 1, "There is no internationally agreed definition of autonomous weapon systems. For the purposes of this meeting, 'autonomous weapon systems' were defined as weapons that can independently select and attack targets, i.e. with autonomy in the 'critical functions' of acquiring, tracking, selecting and attacking targets."

From DODD 3000.09, Autonomy in Weapon Systems, p. 13, "A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation."

Author Contact Information

[author name scrubbed], Air Force Fellow ([phone number scrubbed])
[author name scrubbed], Section Research Manager ([email address scrubbed], [phone number scrubbed])

Footnotes

1.

This report was written by Thomas B. Payne, U.S. Air Force Fellow. For questions or follow-up, contact [author name scrubbed], head of Defense Policy and Arms Control Section, [phone number scrubbed].

2.

Deputy Secretary of Defense Bob Work, "Reagan Defense Forum: The Third Offset Strategy," delivered at Reagan Presidential Library, Simi Valley, CA, November 7, 2015, http://www.defense.gov/News/Speeches/Speech-View/Article/628246/reagan-defense-forum-the-third-offset-strategy.

3.

Consider the first autopilot, developed in 1912, as a sort of militarily relevant autonomous system (see Laurence R. Newcome, Unmanned Aviation: A Brief History of Unmanned Aerial Vehicles [Reston, VA: AIAA, 2004], p. 16). Controversy and concern about autonomous weapons can be traced back far longer, well before the existence of any such system. For example, Frankenstein, Or The Modern Prometheus, by Mary Shelley, largely reflects many of the current concerns with the risks and unpredictable results of autonomous weapons development; see also United Nations Office at Geneva (UNOG), Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 9, http://www.genf.diplo.de/contentblob/4567632/Daten/5648986/201504berichtexpertentreffenlaws.pdf.

4.

Article I, Section 8, United States Constitution.

5.

Deputy Secretary of Defense Bob Work, "The Third Offset Strategy and its Implications for Partners and Allies," delivered at Willard Hotel, Washington, DC, January 28, 2015, http://www.defense.gov/News/Speeches/Speech-View/Article/606641/the-third-us-offset-strategy-and-its-implications-for-partners-and-allies; Secretary of Defense Chuck Hagel, "'Defense Innovation Days' Opening Keynote," delivered at Newport, RI, September 3, 2014, http://www.defense.gov/News/Speeches/Speech-View/Article/605602; see also Robert O. Work and Shawn Brimley, 20YY: Preparing for War in the Robotics Age, Center for a New American Security, January 2014, pp. 10-16, http://www.cnas.org/sites/default/files/publications-pdf/CNAS_20YY_WorkBrimley.pdf; Deputy Secretary of Defense Bob Work, "Reagan Defense Forum."

6.

Sydney J. Freedberg, Jr., "Hagel Lists Key Technologies for US Military; Launches 'Offset Strategy'," Breaking Defense, November 16, 2014; Zachary Keck, "A Tale of Two Offset Strategies," The Diplomat, November 18, 2014.

7.

Article I, Section 8, United States Constitution.

8.

Wendell Wallach, Terminating the Terminator: What to Do About Autonomous Weapons, Institute for Ethics and Emerging Technologies, January 29, 2013, http://ieet.org/index.php/IEET/more/wallach20130129; Human Rights Watch (HRW) and Harvard Law School's International Human Rights Clinic (IHRC), Losing Humanity: The Case Against Killer Robots, November 2012, http://www.hrw.org/reports/2012/11/19/losing-humanity-o; HRW and IHRC, Shaking the Foundations: The Human Rights Implications of Killer Robots, May 2014, http://hrw.org/node/125251; HRW and IHRC, Mind the Gap: The Lack of Accountability for Killer Robots, April 2015, https://www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots; HRW and IHRC, "Advancing the Debate on Killer Robots: 12 Key Arguments for a Preemptive Ban on Fully Autonomous Weapons," May 2014, https://www.hrw.org/news/2014/05/13/advancing-debate-killer-robots.

9.

Michael N. Schmitt and Jeffrey S. Thurnher, "'Out of the Loop': Autonomous Weapon Systems and the Law of Armed Conflict," Harvard National Security Journal, vol. 4 (2013), p. 269; United Nations, General Assembly, Human Rights Council, Report of the Special Rapporteur on extrajudicial, summary, or arbitrary executions, A/HRC/23/47, Christof Heyns, April 9, 2013, paragraph 108; International Committee of the Red Cross (ICRC), Report of the ICRC Expert Meeting on 'Autonomous weapon systems: technical, military, legal and humanitarian aspects', Geneva, March 26, 2014, p. 11, https://www.icrc.org/eng/assets/files/2014/expert-meeting-autonomous-weapons-icrc-report-2014-05-09.pdf.

10.

Article I, Section 8 & Article II, Section 2, United States Constitution.

11.

Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, opened for signature April 10, 1981, 1342 U.N.T.S. 137 (usually referred to as the Convention on Certain Conventional Weapons (CCW)).

12.

This includes the International Committee of the Red Cross (see ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems') and Human Rights Watch (see HRW & IHRC, "Advancing the Debate on Killer Robots," p. 24; HRW & IHRC, Losing Humanity, pp. 9-10), among others.

13.

HRW & IHRC, "Advancing the Debate on Killer Robots," pp.24-5; HRW & IHRC, Mind the Gap, pp. 11-12.

14.

Ibid.

15.

Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 2.

16.

Schmitt and Thurnher, "Out of the Loop," Harvard National Security Journal, p. 234.

17.

HRW & IHRC, "Advancing the Debate on Killer Robots," p. 26 (providing that a ban would not prevent development of civilian autonomy); Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 14.

18.

Defense Science Board, The Role of Autonomy in DOD Systems, pp. 68-76; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 6.

19.

UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 17; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 15.

20.

HRW and IHRC, "Advancing the Debate on Killer Robots," pp. 19, 24-26

21.

Anderson and Waxman, "Law and Ethics for Robot Soldiers," pp. 2, 7; Heyns, Report of the Special Rapporteur, paragraph 32.

22.

UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, pp. 24-6; Heyns, Report of the Special Rapporteur, paragraph 32; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 11.

23.

Heyns, Report of the Special Rapporteur, paragraph 108.

24.

UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, pp. 7, 10, 24-6; Anderson and Waxman, "Law and Ethics for Robot Soldiers," pp. 2, 16; Heyns, Report of the Special Rapporteur, paragraph 111; Gary E. Marchant, Braden Allenby, and Ronald Arkin, et al., "International Governance of Autonomous Military Robots," The Columbia Science and Technology Law Review, vol. 12 (2011), p. 313, http://www.stlr.org/cite.cgi?volume=12&article=7 (proposing transparency and information sharing based on the model of Confidence Building Measures).

25.

Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 2.

26.

Ibid. at 14.

27.

Department of Defense, Directive 5000.01, The Defense Acquisition System, May 12, 2003, certified current November 20, 2011, enclosure 1, paragraph E1.1.15.

28.

For example, see U.S. Air Force Instruction 51-402, Legal Reviews of Weapons and Cyber Capabilities, July 27, 2011, or U.S. Army Regulation 27-53, Review of Legality of Weapons under International Law, February 1, 1979.

29.

Access to simulation capability would give lawyers the opportunity to determine the behavior of autonomous weapon systems under legally relevant scenarios, a capability both relevant and likely unavailable if the proprietary technology used to develop the autonomous weapon's software is held by the manufacturer and the legal review team is simply provided a reliability number or some other highly simplified synthesis of the manufacturer's own internal testing.

30.

International organizations are currently calling for national action, in the absence of international consensus. UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 15; Heyns, Report of the Special Rapporteur, paragraph 113.

31.

CRS Report R42688, Science, Technology, and Innovation Policy: CRS Experts, by [author name scrubbed].

32.

For example, consider the definition of defensive systems that are empowered to employ lethality in the absence of human action – the DOD considers them a variety of fully autonomous systems, while others distinguish them by their temporal or geographic targeting constraints from more fully autonomous systems (see Department of Defense Directive [DODD] 3000.09, Autonomy in Weapon Systems, pp. 3, 13; Paul Scharre and Michael C. Horowitz, An Introduction to Autonomy in Weapon Systems, Center for a New American Security, Working Paper, February 2015, p. 13; HRW & IHRC, Losing Humanity, p. 12); see also Defense Science Board, Task Force Report: The Role of Autonomy in DOD Systems, July 2012, pp. 3-8, https://fas.org/irp/agency/DOD/dsb/autonomy.pdf; William Marra and Sonia McNeil, "Understanding 'The Loop': Regulating the Next Generation of War Machines," Harvard Journal of Law and Public Policy, vol. 36, no. 3 (May 1, 2012), pp. 6-7; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, pp. 11-12; Eric Sholes, "Evolution of a UAV Autonomy Classification Taxonomy," Aerospace Conference, 2007 IEEE, March 3, 2010, p. 1; Wendell Wallach and Colin Allen, "Framing Robot Arms Control," Ethics and Information Technology, vol. 15, no. 2 (June 2013), pp. 125, 132; Ian Anthony and Chris Holland, The Governance of Autonomous Weapons, Stockholm International Peace Research Institute (SIPRI), SIPRI Yearbook 2014: Armaments, Disarmament and International Security, Chapter 9, Section II, 2014, pp. 424-5.

33.

See "What Is Autonomy?" text box below.

34.

Defense Science Board, The Role of Autonomy in DOD Systems, p. 24; Marra and McNeil, "Understanding 'The Loop,'" pp. 23-8; Eric Sholes, "Evolution of a UAV Autonomy Classification Taxonomy," Aerospace Conference, 2007 IEEE, March 3, 2010, pp. 11-6; Giles Coppin and Francois Legras, "Autonomy Spectrum and Performance Perception Issues in Swarm Supervisory Control," Proceedings of the IEEE, vol. 100, no. 3 (March 2012), pp. 593-4; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 7; United Nations Institute for Disarmament Research (UNIDIR), Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, No. 1, 2014, p. 5, http://www.unidir.org/files/publications/pdfs/framing-discussions-on-the-weaponization-of-increasingly-autonomous-technologies-en-606.pdf.

35.

As an analogy, imagine regulating bombs based on "expected collateral casualties." The lethality of the bomb itself may represent an effective, lawful means of combat or a method of commission of a war crime, depending on its employment; see also UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 17; Coppin and Legras, "Autonomy Spectrum and Performance Perception Issues," pp. 593-4.

36.

Patrick Lin, "Introduction to Robot Ethics," in Robot Ethics: The Ethical and Social Implications of Robotics, ed. Patrick Lin, Keith Abney, and George A. Bekey (Cambridge: The MIT Press, 2012), p. 8; Jeffrey S. Thurnher, "No One At the Controls: Legal Implications of Autonomous Targeting," Joint Forces Quarterly, no. 67 (4th Quarter 2007), p. 78-9; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 3.

37.

Freedberg, "Hagel Lists Key Technologies"; United States Air Force, Strategic Master Plan, March 2015, p. 42, http://www.af.mil/Portals/1/documents/Force%20Management/Strategic_Master_Plan.pdf?timestamp=1434024300378; Simon Parkin, "Killer Robots: The Soldiers That Never Sleep," BBC.com, July 16, 2015, http://www.bbc.com/future/story/20150715-killer-robots-the-soldiers-that-never-sleep; James Kadtke and Linton Wells II, Policy Challenges of Accelerating Technological Change: Security Policy and Strategic Implications of Parallel Scientific Revolutions, Center for Technology and National Security Policy, National Defense University, September 2014, p. 43-8, http://ctnsp.dodlive.mil/files/2014/09/DTP106.pdf; Work and Brimley, 20YY, p. 24; Defense Science Board, The Role of Autonomy in DOD Systems, pp. 56-8; Schmitt and Thurnher, "Out of the Loop," p. 237; Gordon Johnson, Tom Meyers, Russell Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, U.S. Joint Forces Command, Rapid Assessment Process Report #3-10, September 2003, p. 7, https://www.hsdl.org/?view&did=705224; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 3; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 5.

38.

Marcus Weisgerber and Patrick Tucker, "Oshkosh Wins $30 Billion Army Contract Battle to Replace Humvee," National Journal, August 26, 2014.

39.

HRW and IHRC, Mind the Gap, p. 6.

40.

Marcello Guarini and Paul Bello, "Robotic Warfare: Some Challenges in Moving from Noncivilian to Civilian Theaters," in Robot Ethics, p. 130; HRW & IHRC, Losing Humanity, pp. 9-10.

41.

Kenneth Anderson and Matthew C. Waxman, "Law and Ethics for Robot Soldiers," Policy Review, April 5, 2012, p. 4; Heyns, Report of the Special Rapporteur, paragraph 45; HRW & IHRC, Losing Humanity, pp. 9-10.

42.

Schmitt and Thurnher, "Out of the Loop," p. 235; Marra and McNeil, "Understanding 'The Loop'," pp. 44-5; Heyns, Report of the Special Rapporteur, paragraph 45; HRW & IHRC, Losing Humanity, p. 9-10.

43.

HRW & IHRC, Shaking the Foundations, pp. 1, 5; HRW & IHRC, Losing Humanity, p. 19.

44.

Department of Defense Directive (DODD) 3000.09, Autonomy in Weapon Systems, November 21, 2012, paragraph 4.c(2).

45.

Michael N. Schmitt, "Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics," Harvard National Security Journal, December 4, 2012, p. 5; UNIDIR, Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, p. 5; HRW & IHRC, Losing Humanity, p. 12; Noel Sharkey, "Towards a Principle for the Human Supervisory Control of Robot Weapons," Politica & Societa, vol. 2 (May-August 2014), p. 13.

46.

Schmitt and Thurnher, "Out of the Loop," p. 236; UNIDIR, Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, p. 5; HRW & IHRC, Losing Humanity, p. 12.

47.

Marra and McNeil, "Understanding 'The Loop,'" pp. 14-6; UNIDIR, Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, p. 5; HRW & IHRC, Losing Humanity, p. 12.

48.

HRW & IHRC, Losing Humanity, p. 12.

49.

Parkin, "Killer Robots"; Schmitt and Thurnher, "Out of the Loop," p. 2376; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 4; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, pp. 9-10.

50.

Schmitt, "Autonomous Weapon Systems and International Humanitarian Law," p. 5; but see Nicholas Marsh, Defining the Scope of Autonomy: Issues for the Campaign to Stop Killer Robots, Peave Research Institute Orlo, Policy Brief 02-2014, pp. 1-3, https://www.prio.org/Publications/Publication/?x=7390 (arguing Brimstone missiles, among others, currently in use exhibit "full autonomy" based on many definitions put forward by ban proponents).

51.

Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 15.

52.

Heyns, Report of the Special Rapporteur, paragraph 45; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 13.

53.

Parkin, "Killer Robots."

54.

Kadtke and Wells II, Policy Challenges of Accelerating Technological Change; Work and Brimley, 20YY, p. 9; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 19; Johnson, Meyers, Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, p. iii; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 5; Heyns, Report of the Special Rapporteur, paragraphs 50-51.

55.

Work and Brimley, 20YY, pp. 9, 21-2; Freedberg, "Hagel Lists Key Technologies"; USAF, Strategic Master Plan; Johnson, Meyers, Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, pp. 4-5.

56.

Defense Science Board, The Role of Autonomy in DOD Systems, pp. 41-2.

57.

Defense Science Board, The Role of Autonomy in DOD Systems, pp. 56-8.

58.

Johnson, Meyers, Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, p. 5.

59.

Marra and McNeil, "Understanding 'The Loop,'" pp. 9-14; for early history see also John R. Boyd, Patterns of Conflict, December 1986, p. 5.

60.

Ibid.

61.

Noel Sharkey, "Killing Made Easy: From Joysticks to Politics," in Robot Ethics, p. 116; Schmitt and Thurnher, "Out of the Loop," p. 238; Johnson, Meyers, Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, p. 5.

62.

Freedberg, "Hagel Lists Key Technologies"; Schmitt and Thurnher, "Out of the Loop," p. 238; Marra and McNeil, "Understanding 'The Loop,'" p. 44-6; Thurnher, "No One At the Controls," p. 80.

63.

Ibid.

64.

Freedberg, "Hagel Lists Key Technologies"; Schmitt and Thurnher, "Out of the Loop," p. 238; Marra and McNeil, "Understanding 'The Loop,'" p.44-6; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 7; Thurnher, "No One At the Controls," p. 80; Anthony and Holland, The Governance of Autonomous Weapons, p. 423; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 5.

65.

Schmitt and Thurnher, "Out of the Loop," p. 242-3;

66.

Ian Duncan, "As More Devices Go Online, Hackers Hunt for Vulnerabilities," Baltimore Sun, October 24, 2015, http://www.baltimoresun.com/news/maryland/bs-md-hacking-internet-things-20151024-story.html.

67.

Paul Scharre, Autonomous Weapons and Operational Risk, Center for a New American Security, Ethical Autonomy Project, February 2016, p. 1, http://www.cnas.org/sites/default/files/publications-pdf/CNAS_Autonomous-weapons-operational-risk.pdf.

68.

Ibid. at 8-17.

69.

Ibid. at 23.

70.

Scharre, Autonomous Weapons and Operational Risk, pp. 18-19; Wallach, Ensuring Human Control Over Military Robotics; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 20; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 8.

71.

Work and Brimley, 20YY, p. 31; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 13.

72.

Defense Science Board, The Role of Autonomy in DOD Systems, p. 69; Work, "Reagan Defense Forum: The Third Offset Strategy"; Wallach and Allen, "Framing Robot Arms Control," pp. 125-6; but see Kadtke and Wells II, Policy Challenges of Accelerating Technological Change, p. 26.

73.

Ibid.

74.

Deputy Secretary of Defense Work, "The Third Offset Strategy and its Implications for Partners and Allies"; Secretary of Defense Hagel, "'Defense Innovation Days' Opening Keynote"; Work and Brimley, 20YY: Preparing for War in the Robotics Age; Deputy Secretary of Defense Work, "Reagan Defense Forum."

75.

Ibid.

76.

Work and Brimley, 20YY, pp. 7-9; Wallach, Terminating the Terminator; Heyns, Report of the Special Rapporteur, paragraph 88; Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 5.

77.

Wendell Wallach, Ensuring Human Control Over Military Robotics, Institute for Ethics and Emerging Technologies, August 29, 2015, http://ieet.org/index.php/IEET/more/wallach20150829; Schmitt and Thurnher, "Out of the Loop," pp. 240, 262; Brendon Mills, "Rosa's Dystopia: The Moral Downside of Coming Autonomous Weapon Systems," Foreign Policy, June 18, 2013, p. 1; Heyns, Report of the Special Rapporteur, paragraph 52.

78.

Schmitt and Thurnher, "Out of the Loop," pp. 248-9; Mills, "Rosa's Dystopia," p. 1; Heyns, Report of the Special Rapporteur, paragraph 54; Ronald C. Arkin, Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture, Georgia Institute of Technology, Technical Report GIT-GVU-07-11, pp. 6-7, http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf.

79.

Arkin, Governing Lethal Behavior, p. 6.

80.

UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 7; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, pp. 11-12.

81.

Ibid.

82.

Ibid.

83.

Wallach, Terminating the Terminator; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 5; HRW & IHRC, Shaking the Foundations, p. 6; Marra and McNeil, "Understanding 'The Loop,'" pp. 60-2; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 10; Wallach and Allen, "Framing Robot Arms Control," p. 131; Mills, "Rosa's Dystopia," p. 2; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 8.

84.

Ibid.

85.

Wallach, Terminating the Terminator; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 5; HRW & IHRC, Shaking the Foundations, p. 6; Marra and McNeil, "Understanding 'The Loop,'" pp. 60-2; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 5; Wallach and Allen, "Framing Robot Arms Control," p. 127; Mills, "Rosa's Dystopia," p. 2; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 8.

86.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; Wallach, Ensuring Human Control Over Military Robotics; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 5; Wallach and Allen, "Framing Robot Arms Control," p. 125; Heyns, Report of the Special Rapporteur, paragraph 57-8; HRW & IHRC, Losing Humanity, p. 39-41.

87.

Department of Defense Office of the General Counsel, Department of Defense Law of War Manual, June 2015, p. 39.

88.

The legal basis for Just War analysis derives from a variety of sources, including international agreements and unwritten customary international law; see Department of Defense Law of War Manual, pp. 39-49.

89.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; Wallach, Ensuring Human Control Over Military Robotics; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 5; Wallach and Allen, "Framing Robot Arms Control," p. 125; Heyns, Report of the Special Rapporteur, paragraph 57-8; HRW & IHRC, Losing Humanity, p. 39-41.

90.

Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 13.

91.

Schmitt and Thurnher, "Out of the Loop," p. 232; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 13. Arguably, using human lives as a calculated method to impose decision-making costs on politicians represents an actualization of the same moral problems posed by opponents of LAWS - in potential - when considering machine-determined lethal fires (see section Moral/Ethical Issues below). Human lives used as "means", without individuation. see Kenneth Anderson and Matthew C. Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can, American University Washington College of Law, Research Paper No. 2013-11, p. 18, http://ssrn.com/abstract=2250126 (arguing moral equivalence to hostage taking to influence political decisions).

92.

Wallach, Ensuring Human Control Over Military Robotics; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 18; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 5; Heyns, Report of the Special Rapporteur, paragraph 88.

93.

Work and Brimley, 20YY, pp. 7-9; Wallach, Terminating the Terminator; Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; Heyns, Report of the Special Rapporteur, paragraph 88.

94.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 5; Heyns, Report of the Special Rapporteur, paragraph 88.

95.

Such relations are unstable if they drain participants' financial capacity and thereby incentivize initiation of conflict in order to prevent further economic impact; see Theresa Clair Smith, "Arms Race Instability and War," Journal of Conflict Resolution, vol. 24, no. 2 (June 1980), pp. 253-284.

96.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; HRW & IHRC, "Advancing the Debate on Killer Robots," p.23; Heyns, Report of the Special Rapporteur, paragraph 88.

97.

Kadtke and Wells II, Policy Challenges of Accelerating Technological Change, p. 26; Schmitt and Thurnher, "Out of the Loop," Harvard National Security Journal, p. 238; Thurnher, "No One At the Controls," p. 80; Anderson and Waxman, "Law and Ethics for Robot Soldiers," pp. 5, 13-17.

98.

Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 5, 13-17.

99.

Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 13-14; Interview with Mr. Shawn Steene, Office of the Secretary of Defense, Strategy and Force Development (office responsible for DODD 3000.09, Autonomy in Weapon Systems); ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 6.

100.

Ibid.

101.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 5; Mills, "Rosa's Dystopia," p. 2; Heyns, Report of the Special Rapporteur, paragraph 87.

102.

Ibid.

103.

Interview with Mr. Shawn Steene, Office of the Secretary of Defense, Strategy and Force Development.

104.

Ibid.

105.

Ibid.

106.

Scharre, Autonomous Weapons and Operational Risk, pp. 14-15; Schmitt and Thurnher, "Out of the Loop," pp. 242-3; Heyns, Report of the Special Rapporteur, paragraph 98.

107.

Scharre, Autonomous Weapons and Operational Risk, p. 23.

108.

Scharre, Autonomous Weapons and Operational Risk, pp. 11, 19-20; Schmitt and Thurnher, "Out of the Loop," pp. 242-3; Heyns, Report of the Special Rapporteur, paragraph 98.

109.

Schmitt and Thurnher, "Out of the Loop," p. 238; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 5.

110.

Kadtke and Wells II, Policy Challenges of Accelerating Technological Change, p. 46; Schmitt and Thurnher, "Out of the Loop," pp. 242-3; Duncan, "As More Devices Go Online, Hackers Hunt for Vulnerabilities."

111.

Wallach, Ensuring Human Control Over Military Robotics; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 20; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 8.

112.

Work and Brimley, 20YY, p. 31; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 20; Wallach and Allen, "Framing Robot Arms Control," p. 125; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', pp. 4, 8; Scharre, Autonomous Weapons and Operational Risk, pp. 25-33.

113.

Scharre, Autonomous Weapons and Operational Risk, p. 18.

114.

Ibid.

115.

Schmitt and Thurnher, "Out of the Loop," p. 241;

116.

Schmitt and Thurnher, "Out of the Loop," p. 241; UNIDIR, Framing Discussions on the Weaponization of Increasingly Autonomous Technologies, p. 6.

117.

Parkins, "Killer Robots"; Wallach, Terminating the Terminator; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 10; Wallach and Allen, "Framing Robot Arms Control," p. 132; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 4; Scharre, Autonomous Weapons and Operational Risk, 11-17; Schmitt and Thurnher, "Out of the Loop," pp. 239-40, 247; Johnson, Meyers, and Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, p. 10, 5; Thurnher, "No One At the Controls," p. 80; Mill, "Rosa's Dystopia," p. 1.

118.

This is taken from Michio Kaku, The Physics of the Future: How Science Will Shape Human Destiny and Our Daily Lives by the Year 2100 (New York: Anchor Books, 2011), pp. 82-84. The book contains a useful layman's description of current development in AI and projected development for the rest of the century.

119.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 120; Parkins, "Killer Robots"; Wallach, Terminating the Terminator; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 4

120.

HRW & IHRC, Shaking the Foundations, pp. 2, 12; Parkin, "Killer Robots"; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 6.

121.

Parkins, "Killer Robots"; Wallach, Terminating the Terminator; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 10; Wallach and Allen, "Framing Robot Arms Control," p. 132; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 4.

122.

Schmitt and Thurnher, "Out of the Loop," pp. 239-40, 247; Johnson, Meyers, and Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, p. 10, 5; Thurnher, "No One At the Controls," p. 80; Mill, "Rosa's Dystopia," p. 1.

123.

Ibid.

124.

[author name scrubbed], Legislative Attorney, [phone number scrubbed], [email address scrubbed] contributed to this section of the report. 

125.

Jeffrey S. Thurnher, The Law That Applies to Autonomous Weapon Systems, American Society of International Law, vol. 17, issue 4, January 13, 2013, https://www.asil.org/insights/volume/17/issue/4/law-applies-autonomous-weapon-systems; Schmitt and Thurnher, "Out of the Loop," pp. 243-81; Anderson and Waxman, "Law and Ethics for Robot Soldiers," pp. 8-9.

126.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 116; Wallach, Terminating the Terminator; HRW & IHRC, "Advancing the Debate on Killer Robots," p.13; HRW & IHRC, Shaking the Foundations, p. 19-22,

127.

Department of Defense Law of War Manual, June 2015, pp. 39-49.

128.

Schmitt and Thurnher, "Out of the Loop," p. 251.

129.

See CCW preamble; Additional Protocol I, art. 35(1).

130.

Additional Protocol I, art. 35(2-3).

131.

Additional Protocol I, art. 36.

132.

W. Hays Parks, Conventional Weapons and Weapons Reviews, 8 Yearbook of International Humanitarian Law 55, 57-58 (2005) (noting that the obligation predated the 1977 Additional Protocols, and that only nine parties to it had adopted a formal legal review program for weapons).

133.

Kenneth Anderson, Daniel Reisner, and Matthew Waxman, "Adapting the Law of Armed Conflict to Autonomous Weapon Systems," International Law Studies, vol. 90 (2014), p. 395.

134.

Ibid.

135.

Thurnher, The Law That Applies to Autonomous Weapon Systems; Schmitt and Thurnher, "Out of the Loop," p. 244.

136.

Thurnher, The Law That Applies to Autonomous Weapon Systems.

137.

Thurnher, The Law That Applies to Autonomous Weapon Systems; Schmitt and Thurnher, "Out of the Loop," p. 245; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 8.

138.

Schmitt and Thurnher, "Out of the Loop," p. 250; Thurnher, "No One at the Controls," p. 83.

139.

Parks, Conventional Weapons and Weapons Reviews, p. 59 (noting that virtually any weapon is capable of being used in an indiscriminate fashion, but that this capacity does not render a weapon unlawful).

140.

Guarini and Bello, "Robotic Warfare," in Robotic Ethics, p.131; Thurnher, The Law That Applies to Autonomous Weapon Systems.

141.

Guarini and Bello, "Robotic Warfare," in Robotic Ethics, p.131; Schmitt and Thurnher, "Out of the Loop," p. 246; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 9.

142.

Guarini and Bello, "Robotic Warfare," in Robotic Ethics, p.131; Schmitt and Thurnher, "Out of the Loop," p. 246; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 7.

143.

Guarini and Bello, "Robotic Warfare," in Robotic Ethics, p.131; Thurnher, The Law That Applies to Autonomous Weapon Systems; Schmitt and Thurnher, "Out of the Loop," p. 246.

144.

Schmitt and Thurnher, "Out of the Loop," p. 247.

145.

Schmitt and Thurnher, "Out of the Loop," p. 248; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 10.

146.

Guarini and Bello, "Robotic Warfare," in Robotic Ethics, p.147; Thurnher, The Law That Applies to Autonomous Weapon Systems; Schmitt and Thurnher, "Out of the Loop," pp. 249-51; Kanwar, "Post-Human Humanitarian Law," p. 8.

147.

Ibid.

148.

Department of Defense Law of War Manual, pp. 50-66.

149.

HRW & IHRC, Shaking the Foundations, p. 15; Thurnher, The Law That Applies to Autonomous Weapon Systems; Vik Kanwar, "Post-Human Humanitarian Law: The Law of War in the Age of Robotic Weapons," Harvard National Security Journal, vol. 2 (June 3, 2010), p. 5; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 8; Schmitt and Thurnher, "Out of the Loop," pp. 250-5; Anthony and Holland, The Governance of Autonomous Weapons, p. 428; Heyns, Report of the Special Rapporteur, paragraph 66.

150.

With the exception of discussing whether use of ethical autonomous weapons might be required under some circumstances, which is addressed in the "Potential Improvements in Ethical Warfare" section, above.

151.

Thurnher, The Law That Applies to Autonomous Weapon Systems; Schmitt and Thurnher, "Out of the Loop," p. 251; Anthony and Holland, The Governance of Autonomous Weapons, p. 428.

152.

Additional Protocol I to the Geneva Conventions, Article 48, 1125 U.N.T.S. 3, No. I-17512.

153.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 118; Wallach, Terminating the Terminator; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 5; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 10; Heyns, Report of the Special Rapporteur, paragraph 67; HRW & IHRC, Losing Humanity, p. 31.

154.

Guarini and Bello, "Robotic Warfare," in Robotic Ethics, p. 130; Sharkey, "Killing Made Easy," in Robot Ethics, p. 118; Thurnher, The Law That Applies to Autonomous Weapon Systems; Heyns, Report of the Special Rapporteur, paragraph 68; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems, p. 2.

155.

Ibid.

156.

Guarini and Bello, "Robotic Warfare," in Robotic Ethics, p. 130.

157.

Ibid.

158.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, pp. 131-2; Sharkey, "Killing Made Easy," in Robot Ethics, pp. 116-8; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 5; HRW & IHRC, Shaking the Foundations, p. 13.

159.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, pp. 131-2; Sharkey, "Killing Made Easy," in Robot Ethics, pp. 116-8; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 11; Heyns, Report of the Special Rapporteur, paragraphs 67-8.

160.

Wallach, Ensuring Human Control Over Military Robotics; Schmitt and Thurnher, "Out of the Loop," pp. 240, 262; Johnson, Meyers, Richards, et al., Unmanned Effects (UFX): Taking the Human Out of the Loop, p. 10; Thurnher, "No One At the Controls," p. 80; Heyns, Report of the Special Rapporteur, paragraph 69.

161.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, p. 148; Schmitt and Thurnher, "Out of the Loop," Harvard National Security Journal, pp. 264-5; Thurnher, "No One At the Controls," pp. 80-1; Heyns, Report of the Special Rapporteur, paragraphs 54, 69.

162.

Defense Science Board, The Role of Autonomy in DOD Systems, pp. 62-4.

163.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, p. 131; Schmitt and Thurnher, "Out of the Loop," p. 246; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 9.

164.

Ibid.

165.

Ibid.

166.

HRW & IHRC, "Advancing the Debate on Killer Robots," p.18, 22-3; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 7; Wallach and Allen, "Framing Robot Arms Control," p. 127; Heyns, Report of the Special Rapporteur, paragraph 29; ICRC, Report of the ICRC Expert Meeting on 'Autonomous weapon systems', p. 8.

167.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 122; HRW & IHRC, "Advancing the Debate on Killer Robots," p.23; Heyns, Report of the Special Rapporteur, paragraph 88; and also Anderson and Waxman, "Law and Ethics for Robot Soldiers," pp. 7-8.

168.

Anthony and Holland, The Governance of Autonomous Weapons, p. 428; Thurnher, The Law That Applies to Autonomous Weapon Systems; Heyns, Report of the Special Rapporteur, paragraph 70.

169.

Sharkey, "Killing Made Easy," in Robot Ethics, pp. 123-4; Wallach, Terminating the Terminator; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 6; HRW & IHRC, Shaking the Foundations, p. 16; Marra and McNeil, "Understanding 'The Loop,'" pp. 60-2; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 10; HRW & IHRC, Losing Humanity, p. 31.

170.

Sharkey, "Killing Made Easy," in Robot Ethics, pp. 123-4; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 6; Schmitt and Thurnher, "Out of the Loop," p. 257.

171.

Sharkey, "Killing Made Easy," in Robot Ethics, pp. 123-4; Schmitt and Thurnher, "Out of the Loop," p. 257.

172.

Sharkey, "Killing Made Easy," in Robot Ethics, pp. 123-4; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 6; Thurnher, The Law That Applies to Autonomous Weapon System; Schmitt and Thurnher, "Out of the Loop," Harvard National Security Journal, pp. 254, 6.

173.

Schmitt and Thurnher, "Out of the Loop," p. 256; Thurnher, "No One At the Controls," p. 82-3.

174.

Ibid.

175.

Ibid.

176.

HRW & IHRC, "Advancing the Debate on Killer Robots," p.6; Thurnher, The Law That Applies to Autonomous Weapon System.

177.

Schmitt and Thurnher, "Out of the Loop," p. 256; Thurnher, "No One At the Controls," p. 82-3.

178.

Schmitt and Thurnher, "Out of the Loop," p. 255; Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 11.

179.

Consider, for example, a cruise missile, which may take several hours to strike with no recall capability; also see Scharre and Horowitz, An Introduction to Autonomy in Weapon Systems, p. 10.

180.

Peter M. Asaro, "A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics," in Robot Ethics, p. 171; Sharkey, "Killing Made Easy," in Robot Ethics, p. 124; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 13; HRW & IHRC, Shaking the Foundations, p. 19; Heyns, Report of the Special Rapporteur, paragraph 78; HRW & IHRC, Mind the Gap, pp. 19-20.

181.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 116; Wallach, Terminating the Terminator; HRW & IHRC, "Advancing the Debate on Killer Robots," p.13; HRW & IHRC, Shaking the Foundations, pp.19-22; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 14; Heyns, Report of the Special Rapporteur, paragraph 76; HRW and IHRC, Mind the Gap, pp. 18-9.

182.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 117; HRW & IHRC, Shaking the Foundations, p. 19; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 14; HRW and IHRC, Mind the Gap, pp. 18-37.

183.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, pp. 151-2; Schmitt and Thurnher, "Out of the Loop," pp. 252, 277; Heyns, Report of the Special Rapporteur, paragraph 78.

184.

Ibid.

185.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, pp. 151-2; HRW & IHRC, Shaking the Foundations, pp. 19-20; Schmitt and Thurnher, "Out of the Loop," Harvard National Security Journal, p. 278; Patrick Lin, George Bekey, and Keith Abney, Autonomous Military Robotics: Risk, Ethics, and Design, U.S. Department of the Navy, award #N00014-09-1-1152, N00014-08-1-1209, 2008, p. 66, http://ethics.calpoly.edu/ONR_report.pdf (arguing responsibility adheres to the initiator of the autonomous systems' actions).

186.

HRW & IHRC, "Advancing the Debate on Killer Robots," pp.12-3; HRW & IHRC, Shaking the Foundations, p. 19; HRW & IHRC, Mind the Gap, pp. 20-5.

187.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 117; HRW & IHRC, Shaking the Foundations, p. 19; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 14; HRW and IHRC, Mind the Gap, pp. 18-37.

188.

Ibid.

189.

Ibid.

190.

Schmitt, "Autonomous Weapon Systems and International Humanitarian Law," p. 13.

191.

HRW & IHRC, Mind the Gap, p. 13.

192.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, pp. 149-50; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 12.

193.

Asaro, "A Body to Kick," p. 177.

194.

Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 17 (advocating the adaptation of "mechanisms of collective responsibility borne by a 'side' in war, through its operational planning and law, including legal reviews of weapon systems and justification of their use in particular operational conditions").

195.

Sharkey, "Killing Made Easy," in Robot Ethics, p. 116; Wallach, Terminating the Terminator; HRW & IHRC, "Advancing the Debate on Killer Robots," p. 21; HRW & IHRC, Shaking the Foundations, pp. 23-4; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 17; Anderson and Waxman, "Law and Ethics for Robot Soldiers," p. 11; Ray Acheson, The Unbearable Meaninglessness of Autonomous Violence, Campaign to Stop Killer Robots, CCW Report, April 16, 2015; Heyns, Report of the Special Rapporteur, paragraph 89-97.

196.

Universal Declaration of Human Rights, preamble, paragraph 1; HRW & IHRC, Shaking the Foundations, pp. 23-4; Heyns, Report of the Special Rapporteur, paragraphs 89-97.

197.

HRW & IHRC, Shaking the Foundations, pp. 23-4; Heyns, Report of the Special Rapporteur, paragraphs 89-97.

198.

Rob Sparrow, "Can Machines Be People? Reflections on the Turing Triage Test," in Robot Ethics, p. 306.

199.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, p. 152; Kanwar, "Post-Human Humanitarian Law," p. 5; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 20.

200.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, p. 152; Defense Science Board, The Role of Autonomy in DOD Systems, p. 48; Kanwar, "Post-Human Humanitarian Law," p. 5.

201.

Guarini and Bello, "Robotic Warfare," in Robot Ethics, p. 152; Defense Science Board, The Role of Autonomy in DOD Systems, p. 48; Kanwar, "Post-Human Humanitarian Law," p. 5; UNOG, Advance Copy of the Report of the 2015 Informal Meeting of Experts on LAWS, p. 9.

202.

See "Potential Improvements in Ethical Warfare" section above.