LETHAL AUTONOMOUS WEAPON SYSTEMS: SLAVES NOT MASTERS!
Conflict Scenarios, Autonomy and Unpredictability
Introduction
Increasing levels of autonomy are being incorporated into AI-powered weapon systems on the modern battlefield, and these systems are soon expected to acquire the capability to “select and kill” targets without human intervention. Such systems are widely referred to as Lethal Autonomous Weapon Systems (LAWS), sensationally dubbed “killer robots”. These developments have triggered a raging global debate, particularly at the UN, over the ethical, moral and legal aspects of deploying fully autonomous weapon systems in future wars. Human rights groups are advocating a pre-emptive ban on their development on the grounds that employment of LAWS would violate International Humanitarian Law (IHL).
This is the second of three articles in a series discussing issues at the heart of this ongoing debate. The previous piece gave a brief tour of the relevant literature, described the unique characteristics of LAWS, and explained why these are viewed as violating the IHL principles of Distinction and Proportionality as well as the Martens Clause. It also highlighted that full autonomy in the “select and engage” functions of weapon systems is the one critical feature of LAWS motivating activists to advocate a pre-emptive ban on their development and employment. Finally, it noted that discussing LAWS against the backdrop of actual military conflict scenarios, rather than in general terms, would lead to an earlier convergence of the opposing views in this debate.
This article begins with an analysis of whether or not LAWS violate IHL principles in three typical warfighting scenarios which represent different facets of the spectrum of conflict. It goes on to discuss some noteworthy nuances of Autonomy in LAWS, as well as the intriguing feature of Unpredictability in AI-powered systems. It also emphasizes the need for caution while attempting to make the composite “select and engage” function autonomous.
LAWS, IHL and the Spectrum of Conflict
Pro-ban advocacy groups claim that employment of LAWS on the battlefield would violate the IHL principles of Distinction and Proportionality as well as the Martens Clause. In order to assess the veracity of these claims, let us look at a few not-so-futuristic scenarios of autonomous drone attacks in military operations, and check whether or not the deployment of LAWS in these settings violates these IHL principles.
Scenario 1: Mechanized Warfare
After declaration of hostilities, a commander tasks a swarm of armed drones to destroy the maximum combat capability of an adversary tank formation known to be operating in a well-defined 100 square kilometre area of desert terrain. The autonomous drone swarm is launched, carries out the mission with a degree of success, and returns to base. Relevant aspects to consider are as under:-
- With current (pre-LAWS) capabilities, such a mission would be carried out by own forces using tank formations in conjunction with artillery and air support, including attack helicopters.
- Principle of Distinction. The “select” function in the drone swarm is restricted to identifying tank signatures in the designated area, picking them one at a time and targeting them with on-board weapons. No civilians or combatants hors de combat are expected to be present in such a combat zone, and even if some are, casualties amongst them would clearly be acceptable as collateral damage. The capability to make a ‘distinction’ is thus not relevant for the autonomous drone swarm since, given the tactical setting, it would not be exercised even by human soldiers (in tanks, artillery gun positions and attack helicopters) in such circumstances. It is also pertinent to mention that the selection of the group target (which we may term group select), in this case the group of enemy tanks, is carried out by a human commander before the autonomous drones are tasked to attack it.
- Principle of Proportionality. Value judgement on proportionality and military necessity would be exercised by the commander at the time of group select, before launching the drone attack.
- Martens Clause. The decision to destroy the group of tanks (with humans inside) is taken by the human commander. The task of the autonomous drones is merely to pick up tank signatures and destroy the tanks one by one. This is broadly similar to the procedure that would be followed if the attack were carried out by manned aerial platforms, and there appears to be no room for human empathy to play out in such scenarios.
Scenario 2: Attack on Logistics Infrastructure
A similar attack by autonomous armed drones can be visualised on logistics infrastructure in the adversary’s hinterland during hot war, eg, an ammunition dump, an airfield, a bridge or a group of such targets. The difference here is that civilians are expected to be present in and around the target area. Civilians working within the ammunition dump would clearly be valid targets (civilian combatants), while any civilian casualties on an attacked bridge would be acceptable as collateral damage. The assessment of military necessity would have been carried out by a human commander before launching the attack, just like in a similar strike carried out by manned aircraft.
Other Conventional Warfare Scenarios
Several other battle scenarios belonging to the conventional warfare sub-class of the spectrum of conflict may be envisaged, eg, attack on a battalion defended area, attack on naval fleets, etc, all by autonomous armed weapon systems. The distinguishing features of all such attacks are that they are carried out on declaration of hostilities between adversaries, are restricted to designated combat zones or on well-defined military targets, and the selection of targets, individual or group, is carried out by a human commander before activating the LAWS.
Scenario 3: Counter-Terrorist Operations
In a politically turbulent peace-time situation, an autonomous armed drone attack is launched to destroy a group of terrorists during a meeting known to be scheduled at a particular venue (an “Eye in the Sky” type of setting [1]). The turnaround time from drone take-off until return to base is several hours. The drone does not have the requisite sensors and intelligence to identify the terrorists from amongst the civilian population in the area. The chances of the situation changing between the time the attack is launched and the time it is carried out are therefore high.
Analysis
It may be noted that in all the above scenarios, it is assumed that the drones employed are fully autonomous, ie, they carry out the critical “select and engage” composite function without human intervention.
Use of LAWS in Scenarios 1 and 2, and in other conventional conflict scenarios as characterised above, does not appear to violate IHL principles, for the reasons given in the respective paragraphs. As per HRW, however, the conventional scenarios painted here are “narrowly defined constructs,” framed merely to justify the use of LAWS [2]. In the opinion of the author, although in conventional warfare too there would be tactical situations where the use of fully autonomous weapon systems may not be warranted, the scenarios described here are the norm rather than the exception.
On the other hand, until LAWS evolve to a stage where they possess a distinction capability as good as or better than that of humans, an attack such as the one described in Scenario 3 would violate existing provisions of IHL.
It seems that the military contexts implicitly assumed by most ban proponents are William Lind’s Fourth Generation Warfare (4GW) scenarios (eg, post-Second Gulf War operations in Iraq, Operation Enduring Freedom in Afghanistan, operations against ISIS, etc) which the world has been witnessing over the last decade and a half [3]. Although such scenarios clearly need to be considered in the LAWS debate, it must be kept in mind that the capabilities of major world armies (US, Russia, China, India, etc) are primarily meant for fighting conventional wars, notwithstanding the fact that the frequency of such wars has reduced significantly. The acceptability or otherwise of LAWS for deployment in conventional war settings should, therefore, be the principal guiding factor while deliberating upon this issue at the UN.
Going by the above discussion it may be concluded that, in typical conventional war tactical settings, employment of LAWS should not be a cause for raising humanitarian concerns. On the other hand, in most 4GW scenarios, use of LAWS may not be acceptable at least in the foreseeable future.
Autonomy in LAWS
Symbiotic Relationship with Human Control
Autonomy and Human Control are two facets of control which lie in a close relationship on opposite sides of the human-machine interface; where autonomy ends, human control begins. In a weapon system, this is a complex multi-functional relationship which, with the right balance, can achieve a powerful synergy. Further, while one can visualise a fully manual weapon system (eg, a spear), a fully autonomous system may not be easy to conceptualise, as some level of human control over weapon systems is always likely to remain. Also, with a progressive increase in autonomy in one or more of its functions, the human-machine interface in a weapon system would shift in incremental steps towards full autonomy.
Human Control, together with the notion of Meaningful Human Control, is dealt with in the final article in this series. Some aspects of autonomy in LAWS are discussed below.
Autonomy in Weapon Systems is a Continuum
Going by the literature on the subject, and as evidenced by the ongoing discussions at the UN, it is clear that the term autonomy is not well defined and that there are degrees of autonomy. An attempt was made to define different categories of LAWS based on their degree of autonomy during the UN Informal Meeting of Experts on LAWS in 2016. The Meeting Report submitted by the Chairperson states that “clear distinctions were made between tele-operated, automated and autonomous systems” [4]. However, making such distinctions with clarity does not in fact appear to be feasible. Marra and McNeil have given an excellent exposition on autonomy in weapon systems, stating that “there is no bright line between automation and autonomy” and that “autonomy should be measured on a continuous scale” [5].
A popular way to classify different degrees of autonomy in LAWS is as follows: in “human-in-the-loop” systems, the decision to kill is taken by humans; “human-on-the-loop” systems are those in which machines may make the decision to kill, but humans can override the decision; finally, in “human-out-of-the-loop” systems, humans have no role to play in the critical “select and engage” functions [6]. In the context of the ongoing debate, “human-out-of-the-loop” LAWS would be considered as fully autonomous, and ban advocacy groups are demanding a pre-emptive ban on such systems.
Noel Sharkey has proposed a five-level classification for human supervisory control of weapons [7]. The US Department of Defence (DoD) in its Directive 3000.09 of 2012 uses the terms “semi-autonomous”, “supervised autonomy” and “fully autonomous” [8]. Several other classifications defining different degrees and facets of autonomy exist in the literature, some of them at a more granular level [9, 10].
Given that the requirement of human involvement in targeting decisions is at the core of the ongoing LAWS debate, none of the above classifications adequately captures the different aspects of human involvement which are relevant to the targeting process. This issue is discussed more extensively in the concluding article of this series, in the context of Human Control.
A deliberate examination reveals that the degree of autonomy in weapon systems is essentially a continuum. This is because any such system would have multiple functions (eg, in the case of aerial drones: activate (take off/ land), navigate, identify, select, engage, assess), and the degree of autonomy designed into each function would, in general, be independent of the autonomy in the other functions. For instance, while the “navigate” function may be fully autonomous, the “select” function may have zero autonomy. Therefore, with multiple functions each having a different degree of autonomy, it may not be possible to separate weapon systems into clear sub-classes based on a single “autonomy” parameter. It may be noted, however, that in the context of the ongoing debate, the composite “select and engage” function is being taken as the sole basis for determining the autonomy level of LAWS.
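To make this multi-functional character of autonomy concrete, a minimal illustrative sketch in Python is given below. The function names and autonomy levels are the author's assumptions for the purpose of illustration and are not drawn from any fielded system; the point is simply that no single “autonomy” value describes such a profile.

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    """Coarse points on what is really a continuous scale of autonomy."""
    MANUAL = 0       # the function is performed by a human
    SUPERVISED = 1   # the machine performs it, but a human can override
    AUTONOMOUS = 2   # the machine performs it without human intervention

@dataclass
class WeaponAutonomyProfile:
    """Hypothetical per-function autonomy profile of an armed aerial drone."""
    activate: Autonomy
    navigate: Autonomy
    identify: Autonomy
    select: Autonomy
    engage: Autonomy
    assess: Autonomy

# The example from the text: "navigate" is fully autonomous while "select"
# has zero autonomy. No single field captures "the" autonomy of the system.
profile = WeaponAutonomyProfile(
    activate=Autonomy.MANUAL,
    navigate=Autonomy.AUTONOMOUS,
    identify=Autonomy.SUPERVISED,
    select=Autonomy.MANUAL,
    engage=Autonomy.MANUAL,
    assess=Autonomy.AUTONOMOUS,
)
print(profile)
```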
It is also interesting to note that it may not be possible to rigorously define weapons as “fully autonomous” in the true sense. This is because human control over weapon systems will always exist in any targeting process (unless the human race is taken over by machines!).
Autonomous Systems are Not Necessarily AI Powered
“Autonomy implies AI, which in turn implies unpredictability leading to loss of human control, and hence there is a case for a prohibitory ban on LAWS!” This seems to sum up the tacit line of thinking of a good number of pro-ban participants in the LAWS debate. Here we examine the first part, ie, whether the implementation of autonomy in LAWS, particularly in its “select and engage” functions, necessarily needs to be AI-powered.
Let us take the example of the Harpy Suppression of Enemy Air Defences (SEAD) weapon system, which appears to meet all the parameters used today to define LAWS. Once launched, the Harpy “navigates” to a designated area and looks for radar emissions, “identifies” enemy radars by matching these against an on-board database of radar signatures, “selects” a radar (there may be more than one), and then dives down to “destroy” it using its explosive warhead [11]. It is also “lethal” and not purely anti-materiel (unlike the Phalanx close-in weapon system [12]), since a radar station is generally manned. Apparently it does not employ AI technology, or at least does not need to, given the nature of its operational capabilities. There does not appear to be much of a hue and cry against the operational deployment of the Harpy, developed more than two decades ago, or for that matter against its more sophisticated successor, the Harop [13]!
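The essential point, that an autonomous “identify and select” capability need not involve any machine learning at all, can be illustrated with the hypothetical, purely rule-based sketch below. The signature values, tolerances and function names are invented for illustration and do not describe the actual Harpy implementation.

```python
# Hypothetical rule-based identification and selection: no machine learning.
# All signature values and tolerances are invented for illustration only.
KNOWN_RADAR_SIGNATURES = {
    "radar_type_A": {"freq_mhz": 3000.0, "pulse_width_us": 1.2},
    "radar_type_B": {"freq_mhz": 5600.0, "pulse_width_us": 0.8},
}

def identify(emission, freq_tolerance=0.05, pw_tolerance=0.2):
    """Match a detected emission against the on-board signature database."""
    for name, sig in KNOWN_RADAR_SIGNATURES.items():
        freq_ok = abs(emission["freq_mhz"] - sig["freq_mhz"]) / sig["freq_mhz"] <= freq_tolerance
        pw_ok = abs(emission["pulse_width_us"] - sig["pulse_width_us"]) <= pw_tolerance
        if freq_ok and pw_ok:
            return name
    return None

def select_target(detected_emissions):
    """Select the first detected emission that matches a known signature."""
    for emission in detected_emissions:
        if identify(emission) is not None:
            return emission
    return None

# Example: of two detections, only the second matches a known signature.
detections = [
    {"freq_mhz": 9400.0, "pulse_width_us": 0.1},
    {"freq_mhz": 5605.0, "pulse_width_us": 0.85},
]
print(select_target(detections))
```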
Close-in weapon systems such as the Phalanx, too, need not be AI-powered, as machine learning or deep learning may not be essential for meeting their operational specifications. Of course, the Phalanx, being anti-materiel and not lethal by design, would not fall under the classification of LAWS.
Thus, autonomy does not necessarily imply an underlying AI technology.
Are AI-Powered Systems Inherently Unpredictable?
One of the lines of argument often put forth by ban proponents is that LAWS, being AI-based with self-learning abilities, would be inherently unpredictable. Since learning would depend on the external environment, every time the system learns and adapts it would metamorphose into a “new system”. This would have two implications: first, since its behaviour would keep changing, it may not be feasible to keep it within defined parameters, and there is thus a distinct possibility of its going out of human control; and second, it would not be feasible to hold anyone accountable for its behaviour, in particular its “decision” to kill, on the grounds that designers and military commanders cannot be held responsible for something which is beyond their control.
Is the assumption justified that machine learning, and especially deep learning, would necessarily lead to unpredictable behaviour? Here, we are primarily concerned with the “select” function, in order to ensure that only the intended military targets are selected and engaged. The “navigate” function is also relevant, since it needs to be ensured that the LAWS does not operate outside its designated target area.
In this context, it may be useful to take a cue from the current state of development of self-driving cars. Cars with Level 4 autonomy (full automation, with driver intervention in difficult environments) are already making their presence felt, and should be on offer from leading companies (Ford, GM, Renault-Nissan, Daimler, Tesla, etc) within the next few years. Cars with Level 5 autonomy (no driver cockpit) are predicted to become a reality in about a decade from now. Although the relevant technologies are still under development, it is already clear that AI is a core technology powering these self-driving cars. It is equally evident that “unpredictable” self-driving cars are not going to be commercialised and put on the roads. There is, therefore, a reasonable degree of confidence even today that supervised learning and deep learning algorithms can yield controlled behaviour, well within design parameters, even in an environment as unpredictable as urban roads.
Interestingly, Elon Musk, a pioneer in self-driving cars, has also endorsed the Campaign to Stop Killer Robots, and has flagged AI as an existential threat to humanity if left unregulated [14]. In the context of LAWS, deep learning is likely to be utilised in both the “navigate” and “select” functions, and the end result can be expected to be as reliable and predictable as is being demonstrated in self-driving cars. Thus, in the typical conventional war scenarios discussed above, LAWS may be relied upon to identify and engage well-defined military targets in a combat zone bounded in area and time. However, they may not be able to accomplish the complex task of identifying a terrorist amongst a civilian group, or even of distinguishing a civilian from a combatant, at least in the foreseeable future. In supporting the pro-ban advocacy groups, perhaps Elon Musk is merely trying to caution against the use of AI-enabled military applications at the operational and strategic levels of warfare, or for that matter in 4GW scenarios, where human capabilities will continue to surpass LAWS for a long time to come.
The remarkable success of the AlphaGo and AlphaZero programs developed by Google’s DeepMind in beating professional Go players [15, 16] also demonstrates that machine learning systems can be designed to achieve desired goals, even though the path to the goal may not be transparent to the developers in every case.
While there is undeniably an element of unpredictability associated with self-learning systems, this does not necessarily translate into their “going out of human control”. It was also brought out in the previous section that autonomous systems need not necessarily be AI-powered. In summary, it does seem perfectly feasible to design autonomous systems which function within the constraints laid down in their design specifications. In the context of LAWS, the key constraints would be those related to target selection and area of operation, as sketched below.
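A minimal illustration of how such constraints might be enforced in software is given below. The area bounds, permitted target class and function names are assumptions made purely for illustration; the idea is that however the underlying “select” function has been learned, engagement is gated by simple, fixed checks on target class and area of operation.

```python
# Illustrative hard constraints applied after target selection: the learned
# "select" function may be opaque internally, but engagement is permitted
# only when these fixed checks pass. All values are invented for illustration.
PERMITTED_TARGET_CLASSES = {"main_battle_tank"}
OPERATING_AREA = {"lat_min": 27.0, "lat_max": 27.3, "lon_min": 70.0, "lon_max": 70.4}

def within_operating_area(lat, lon):
    """Geofence check: is the candidate target inside the designated combat zone?"""
    return (OPERATING_AREA["lat_min"] <= lat <= OPERATING_AREA["lat_max"]
            and OPERATING_AREA["lon_min"] <= lon <= OPERATING_AREA["lon_max"])

def engagement_permitted(target_class, lat, lon):
    """Constraint applied between target selection and any engagement."""
    return target_class in PERMITTED_TARGET_CLASSES and within_operating_area(lat, lon)

print(engagement_permitted("main_battle_tank", 27.1, 70.2))  # True
print(engagement_permitted("main_battle_tank", 28.5, 70.2))  # False: outside area
print(engagement_permitted("truck", 27.1, 70.2))             # False: class not permitted
```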
Safeguarding Against Automating the “Will” of LAWS
It was discussed in the previous section that machines which use deep learning techniques have the characteristic of being able to continuously metamorphose into something for which they were not specifically designed, depending essentially on the environment in which they operate. This characteristic, together with the fact that such “evolution” is too complex to be deciphered by the original designers, appears to be among the chief causes of rising consternation amongst AI professionals engaged in the development of LAWS. At the same time, it has also been brought out above that machine learning systems can be designed such that their unpredictability is confined within acceptable limits.
In order to ensure controlled behaviour, it should be feasible to internally isolate the different functions of a complex system. Thus, the activate/ navigate/ identify/ select/ engage/ assess sub-functions of LAWS could be engineered to self-learn independently of each other, rather than deriving their learning power from a central self-learning cognitive core. In other words, given current machine learning design methodologies, it should be perfectly feasible to isolate the self-learning mechanisms of the sub-functions in such a manner that, while individually they might suffer from the much-feared unpredictability, the interaction amongst the sub-functions remains under strict algorithmic control, thus confining any unpredictability to individual sub-functions.
Flowing from the above, while the critical “select” and “engage” sub-functions may separately rely on deep-learning techniques, as long as the implicit “decide” sub-function interposed between these two is under strict human/ algorithmic control, LAWS as a whole cannot be said to possess an “autonomous will” which might make it run amok, as many imagine. In other words, there can be no objection to a weapon system autonomously selecting, ie, identifying and prioritising, one (or a subset) amongst many available targets based on specified criteria. Similarly, once the decision to engage has been taken, the process of engagement from the time of weapon release till the time it hits the target (a non-trivial process in many scenarios) could also be made fully autonomous without any eyebrows being raised. However, the decision to engage the selected target(s), ie, the decision to kill, needs to be taken with care.
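This architecture can be sketched in outline as below. The structure and names are hypothetical and purely illustrative: the “select” and “engage” sub-functions are treated as isolated, possibly self-learning black boxes, while the “decide” step that connects them is a simple, fully inspectable rule, optionally requiring explicit human confirmation.

```python
# Hypothetical architecture: self-learning "select" and "engage" modules are
# isolated from each other and connected only through a deterministic,
# auditable "decide" gate (or an explicit human confirmation).

def select_candidates(sensor_data):
    """Placeholder for a (possibly deep learning based) target selection module."""
    return [{"id": "tgt-01", "target_class": "main_battle_tank", "confidence": 0.97}]

def engage(target):
    """Placeholder for the (possibly autonomous) engagement module."""
    print(f"Engaging {target['id']}")

def decide(candidate, min_confidence=0.95, require_human=True):
    """The 'will' of the system: a simple, inspectable rule, not a learned one."""
    if candidate["confidence"] < min_confidence:
        return False
    if require_human:
        answer = input(f"Authorise engagement of {candidate['id']}? (y/n) ")
        return answer.strip().lower() == "y"
    return True

for candidate in select_candidates(sensor_data=None):
    if decide(candidate):
        engage(candidate)
```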
In the ongoing debate, the “decision to engage” function is rarely discussed, and is perhaps taken to be part of (or even synonymous with) the “select” function. For clarity, however, it may be best to separate the “select” function (involving the identification and prioritisation of one or more targets) from the “decision to engage” function.
The degree of autonomy to be permitted in the “decision to engage” function is perhaps the real bone of contention between opposing views. Ban proponents advocate that the decision to engage every individual target should be taken by a human. In comparison, the US is of the view that a human commander may deploy an autonomous system to destroy a specified group of targets. An even more permissive view might accept a level of autonomy which gives LAWS the flexibility to destroy all targets (of a specified description) in a designated area.
It is felt that, rather than putting out broad-based arguments on deployment of LAWS in general terms, or even narrowing down the focus to the composite “select and engage” function, the debate should focus on the nature of autonomy to be permitted in the “decision to engage” function, which essentially represents the “will” of LAWS.
Conclusion
In this article, an analysis has been carried out, against the backdrop of different conflict scenarios, of whether or not the employment of LAWS violates the principles of IHL. It is concluded that while in most conventional war settings their deployment appears to be in conformance with IHL, in certain Fourth Generation Warfare scenarios resorting to LAWS may not be acceptable. Since weapon systems, in general, are developed for conventional conflicts, this conclusion does not support a pre-emptive ban on the development of LAWS.
The above discussion also brings out the various nuances of autonomy in weapon systems, highlighting that it may not be feasible to categorise weapon systems into well-defined classes based on a single “autonomy” parameter, since autonomy has a multi-functional character and is essentially a continuum. It also dwells on the “unpredictability” of AI-based autonomous systems, and concludes that, while a certain amount of unpredictability is inherent in self-learning systems, this does not imply that AI-powered LAWS will go out of human control. Finally, this piece highlights that, rather than talking about autonomy in general terms, it would be advisable to focus on autonomy in the all-important “decision to engage” function, in order to reach an early convergence of views.
The concluding article in this series will analyse the important feature of Meaningful Human Control, bring out how employment of LAWS may in fact lead to saving human lives, and discuss the pros and cons of a pre-emptive ban on LAWS vis-à-vis a binding regulation on their development and employment.
References
(1) Eye in the Sky, English Movie, Entertainment One Production Company, 2015, Accessed 29 Sep 2020, https://bleeckerstreetmedia.com/eyeinthesky.
(2) International Human Rights Clinic, Making the Case: The Dangers of Killer Robots and the Need for a Pre-emptive Ban, Human Rights Watch, Harvard Law School, Dec 2016, pp. 7, 9, ISBN: 978-1-6231-34310, Accessed 29 Sep 2020, https://www.hrw.org/report/2016/12/09/making-case/dangers-killer-robots-and-need-preemptive-ban.
(3) William S Lind et al, The Changing Face of War: Into the Fourth Generation, Marine Corps Gazette, Oct 1989, pp. 22-26, Accessed 29 Sep 2020, http://www.lesc.net/system/files/4GW+Original+Article+1989.pdf.
(4) Chairperson of the Informal Meeting of Experts, Report of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, Dec 2016, pp. 4-5, Accessed 29 Sep 2020, https://www.unog.ch/80256EDD006B8954/(httpAssets)/DDC13B243BA863E6C1257FDB00380A88/$file/ReportLAWS_2016_AdvancedVersion.pdf.
(5) William C Marra & Sonia K McNeil, Understanding the Loop: Regulating the Next Generation of War Machines, Harvard Journal of Law and Public Policy, Vol. 36, No 3, 2013, pp. 22-26, Accessed 15 Jan 2021, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2043131.
(6) Christof Heyns, Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, UN General Assembly, Apr 2013, pp. 8, Accessed 29 Sep 2020, https://www.ohchr.org/Documents/HRBodies/HrCouncil/Regularsession/session23/A-HRC-23-47_en.pdf.
(7) Ibid. 8.
(8) Ashton B Carter, Autonomy in Weapon Systems, US DOD Directive 3000.09, 21 Nov 2012, pp. 3, Accessed 29 Sep 2020, https://fas.org/irp/doddir/dod/d3000_09.pdf.
(9) Ibid. 20, pp. 22-25.
(10) Unmanned Aircraft Systems, Joint Doctrine Publication 0-30.2, UK Ministry of Defence, Aug 2017, pp. 13, Accessed 29 Sep 2020, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/673940/doctrine_uk_uas_jdp_0_30_2.pdf.
(11) Harpy, Israel Aerospace Industries, Accessed 29 Sep 2020, https://www.iai.co.il/p/harpy.
(12) Phalanx CIWS, Wikipedia, Accessed 29 Sep 2020, https://en.wikipedia.org/wiki/Phalanx_CIWS.
(13) Harop Loitering Munitions UCAV System, Airforce Technology, Accessed 29 Sep 2020, https://www.airforce-technology.com/projects/haroploiteringmuniti.
(14) Samuel Gibbs, Elon Musk: Regulate AI to combat ‘existential threat’ before it’s too late, The Guardian, 17 Jul 2017, Accessed 29 Sep 2020, https://www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo.
(15) AlphaGo versus Lee Sedol, Wikipedia, Accessed 17 Jun 2018, https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol.
(16) AlphaZero, Wikipedia, Accessed 28 Sep 2020, https://en.wikipedia.org/wiki/AlphaZero.