LETHAL AUTONOMOUS WEAPON SYSTEMS: SLAVES NOT MASTERS!
Meaningful Human Control, Saving Lives and Non-Feasibility of a Pre-Emptive Ban
Introduction
Lethal Autonomous Weapon Systems (LAWS), sensationally dubbed “killer robots”, are currently the subject of a raging global debate, particularly at the UN, over the ethical, moral and legal aspects of their deployment in future wars. Human rights groups are advocating a pre-emptive ban on their development on the grounds that employment of LAWS would violate International Humanitarian Law (IHL).
This is the final article in a three-piece series focusing on issues at the heart of this ongoing debate. The previous two write-ups, after giving a brief tour of diverse and impassioned views on the subject, dwelt on the unique characteristics of LAWS, analysed different positions on their purported violation of IHL against the backdrop of a spectrum of conflict scenarios, and discussed some noteworthy nuances of Autonomy in LAWS as well as the intriguing feature of Unpredictability in AI-powered systems. It was observed that it may not be feasible to categorise weapon systems into well-defined classes based on a single “autonomy” parameter, since autonomy has a multi-functional character and is essentially a continuum. It was further pointed out that while a certain amount of unpredictability is inherent in self-learning systems, this does not necessarily imply that AI-powered LAWS would go out of human control.
This piece will examine the important notion of Meaningful Human Control (MHC), and also bring out how employment of LAWS may in fact lead to the saving of human lives. The pros and cons of a pre-emptive ban on LAWS vis-à-vis a binding regulation on their development and employment will also be discussed. Finally, the current status of the debate and the positions taken by various countries, including India, will be touched upon.
Meaningful Human Control
In the previous article of this series, it was highlighted that there is a symbiotic relationship between Autonomy and Human Control, and that these two facets of control lie on opposite sides of the human-machine interface: where autonomy ends, human control begins. Having discussed autonomy in the previous piece, a few noteworthy aspects of human control are discussed here, along with some remarks on the dividing line between autonomy and human control, i.e., the human-machine interface. However, at the outset it is worth reiterating that the degree of autonomy is a continuum spanning multiple functions, rather than a set of logically discrete stages. Since human control takes over where autonomy ends, the degree of human control must necessarily possess similar characteristics.
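To make this multi-dimensional view of autonomy concrete, the short Python sketch below models an autonomy profile as a per-function value on a continuous scale, with human control as its complement. This is purely an illustrative construction, not drawn from any standard or actual weapon system; the function names and numeric levels are hypothetical.

```python
# Illustrative sketch only: autonomy as a per-function continuum.
# The kill-chain function names and the numeric levels are hypothetical.

KILL_CHAIN = ("navigate", "track", "select", "decide", "engage", "assess")

def human_control(autonomy_profile):
    """Where autonomy ends, human control begins: for each function,
    human control is modelled as the complement of autonomy."""
    assert set(autonomy_profile) == set(KILL_CHAIN)
    return {fn: round(1.0 - level, 2) for fn, level in autonomy_profile.items()}

# A hypothetical man-in-the-loop system: navigation and tracking are
# highly autonomous, but the decision to engage rests fully with a human.
profile = {
    "navigate": 0.9, "track": 0.8, "select": 0.6,
    "decide": 0.0,  # human approval required before any engagement
    "engage": 0.7, "assess": 0.5,
}

print(human_control(profile))
# No single number summarises this profile, which is why classifying
# weapons by one "autonomy" parameter is problematic.
```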
MHC in Critical Functions
In the Final Report of the 2016 CCW Meeting of Experts, it was stated that “meaningful human control” and “appropriate level of human judgement” were two alternative frameworks proposed by the participants for taking forward the discussion on the degree of human control in LAWS [1]. As per Human Rights Watch, in the arms arena the term MHC signifies control over the selection and engagement of targets, that is, the “critical functions” of a weapon system. It goes on to assert that humans should exercise control over individual attacks, not simply overall operations [2]. The US DOD Directive 3000.09, in contrast, defines “semi-autonomous weapon systems” as those which, once activated, can (autonomously) engage either individual targets or specific target groups that have been selected by a human operator. The implication of this definition is that (semi) autonomous engagement of group targets is acceptable.
The two positions stated above represent the crux of the ongoing debate: whether LAWS should be permitted to autonomously engage, on human command, only individual targets selected one at a time by humans; or whether they may be designed with the flexibility to autonomously identify and engage an entire “target group” specified by a human operator. It may be noted here that the manner in which a target group might be specified could vary widely. For instance, a group could have a tight specification such as “three (specific) bridges on a (specified) river”; or it could be loosely specified as “all enemy tanks within a (specified) geographical area”. Unfortunately, the deliberations at the UN and other forums are not incisive enough to discuss such finer nuances of human control.
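As an illustration of how widely a “target group” specification can vary, the sketch below contrasts a tightly enumerated group with a loosely specified, criteria-based one. The class names, fields and coordinates are hypothetical, intended only to show the difference in machine discretion the two forms imply.

```python
# Hypothetical data structures contrasting "tight" and "loose"
# target-group specifications; all names and values are illustrative.
from dataclasses import dataclass

@dataclass
class TightGroupSpec:
    """Every member is enumerated by a human: the machine exercises no
    discretion over group membership, e.g. three specific bridges."""
    target_ids: list

@dataclass
class LooseGroupSpec:
    """Membership is criteria-based: the machine itself decides which
    objects qualify, e.g. all enemy tanks within a bounding area."""
    target_class: str
    area: tuple  # (min_lat, min_lon, max_lat, max_lon), hypothetical

tight = TightGroupSpec(target_ids=["bridge_A", "bridge_B", "bridge_C"])
loose = LooseGroupSpec(target_class="enemy_tank",
                       area=(34.0, 73.5, 34.5, 74.0))

# The looser the specification, the more of the critical "select"
# function is delegated to the machine.
```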
“Life Cycle” of LAWS
On human control, the International Committee of the Red Cross (ICRC) is of the view that control may be exercised by human beings at different stages: the development of the weapon system, including its programming; the deployment and use of the weapon system, including the decision by the commander or operator to use or activate it; and the operation of the weapon system, during which it “selects and attacks” targets. The ICRC deliberates upon whether control in the first two stages is sufficient to justify minimal or no human control at the operation stage from a legal, ethical and military-operational standpoint, and opines that this may depend on various technical and operational parameters, such as task, type of target, time-frame of operation, potential for intervention, etc [3].
Key Elements of MHC
In a particularly insightful commentary on the key elements of MHC, Richard Moyes, in a background paper prepared for the 2016 CCW Meeting on LAWS, states the following:-
- As per its existing provisions, IHL provides a framework that should be understood as “requiring human judgment and control over individual attacks as a unit of legal management and tactical action”. He goes on to elaborate that an individual attack is not necessarily a single application of kinetic force to a single target object; in practice, an attack may involve multiple kinetic events against multiple specific target objects. However, there have to be some spatial, temporal or conceptual boundaries to an attack if the law is to function.
- He asserts that, for the law to function meaningfully, there needs to be legal judgment and accountability over actions at the most local (tactical) level, as expanding the meaning of “single attack” to mean an attack at the operational or strategic level may render the concept of human control meaningless.
- He also proposes the following as key elements for further discussions on defining MHC: predictable, reliable and transparent technology; accurate information for the user on the outcome sought, the technology, and the context of use; timely human judgement and action, and a potential for timely intervention; and finally, accountability to a certain standard [4].
The “Select” Function: Multiple Interpretations
It emerges from the above discussion that the critical “select” function may have multiple interpretations, depending on the scenario. In one situation, a single military target may be selected for engagement just before release of the actual lethal force. In another, a commander may select a group of military targets before the LAWS is launched or tasked (please see the conventional scenarios discussed in the previous article); after navigating to the target area, the LAWS selects an individual target (or several individual targets, one at a time) from amongst the specified group and releases the lethal force to destroy one or more of them. In this case, the decision on taking human lives would have been taken at the time of “group select” by the human commander, and not by the LAWS at the time of actual engagement; the engagement would therefore not be in violation of the Martens Clause.
The Human-Machine Interface
The alternative viewpoint to MHC, which emphasizes the synergetic relationship between humans on the one hand and LAWS on the other, is that it is the human-machine interaction which needs to be optimized. As per this view, proposed by the US [5], the human-machine relationship extends throughout the development and employment of the LAWS, and is not limited to the moment of decision to engage a target. Flowing from this logic, it would be more useful to talk about “appropriate levels of human judgement” rather than MHC.
Need for Common Understanding on MHC
From the body of opinion which has emerged on the aspect of human control in the LAWS debate so far, the following may be summarised:-
- Although other terminologies, such as “appropriate level of human judgement”, may be arrived at to express the complex connotation of human control over LAWS, MHC appears to be the more popular and acceptable term so far.
- For ensuring adherence to existing provisions of IHL and human rights law, as also meeting standards of morality and human dignity, MHC does not necessarily imply that each and every release of kinetic force be specifically approved by a human operator/ commander.
- The critical “select” function has multiple interpretations which are context specific, and autonomy in the “select and engage” functions at the execution stage does not necessarily imply that an implicit “decision to kill” has been taken by the LAWS, thereby violating the spirit of the Martens Clause.
- A number of technical and operational parameters need to be considered before concluding whether or not the desired level of human control has been exercised. Thus there may be a case for focusing on the human-machine interface, in order to reap the benefits of the best synergetic combination of humans and LAWS.
It is suggested that further work in this area needs to concentrate on arriving at a common understanding of MHC, keeping in view the aspects discussed above.
Saving Lives
As discussed earlier in the series, the arguments for banning LAWS are primarily based on the premise that they would violate the IHL principles of distinction and proportionality, both of which are directed towards saving innocent civilian lives. Further, these arguments compare LAWS with humans, and specifically with humans in situations where qualities such as empathy and value judgement (on proportionality aspects) are required to be exercised.
Another perspective, which brings out the benefits of deploying LAWS, holds that it would be more appropriate to compare LAWS with “dumber” weapon systems such as artillery guns and “fire and forget” missiles, and to examine how the higher intelligence of LAWS can lead to lesser collateral damage.
Saving Combatant Lives
In conventional warfare, once hostilities are declared, more often than not humans operating traditional weapon systems are not required to draw upon “human” qualities such as empathy. For a soldier defending his locality against an adversary offensive, every adversary combatant is a target. For combatants manning an artillery gun position, given a target, the sole aim is to neutralise it with a barrage of fire using the calculated amount of kinetic explosive. Long range precision vectors (“fire and forget” missiles), once released, proceed to destroy their designated targets with no further consideration of civilian casualties. For a tank formation in mechanised warfare, all efforts are made to inflict maximum tank losses on the adversary while the battle is on. The singular task of Air Force fighter pilots is to bring down enemy aircraft or neutralise enemy logistics infrastructure in the hinterland with minimum losses to own air assets. If the soldiers in defence, the artillery gun positions, the tank formations and the fighter/ bomber aircraft were replaced by LAWS, there would be a huge saving in the lives of own combatants [6].
Saving Civilian Lives
Precision munitions are preferred over “dumb” munitions, even from a humanitarian perspective, because their lethality is more precisely directed at military targets, as a consequence of their being more “intelligent” than their dumber counterparts. LAWS are fundamentally more intelligent than even precision munitions, although cognitively (as yet) inferior to humans. Therefore, even in scenarios where civilians are present, use of LAWS in place of dumb or precision munitions for destroying military targets (military headquarters, logistics infrastructure) is expected to result in lesser collateral damage.
LAWS: Clear Benefits in Conventional Warfare
The following important points are being made here:-
- The above discussion on saving lives does not presume that LAWS have evolved to the stage of exhibiting the human qualities of empathy, value judgement, etc.
- In the conventional warfare scenarios depicted above, LAWS are envisaged to be deployed in situations where qualities such as empathy (linked to the principle of distinction) are not applicable. Moreover, qualities such as value judgement (linked to the principle of proportionality) are indeed being exercised, but at a higher level of military operation, where a human is still in the loop. This would be the case even if LAWS were not utilized for combat.
- In conventional warfare, such tactical settings represent the norm rather than the exception. This refutes the contention of Human Rights Watch (HRW) that narrowly constructed hypothetical cases in which fully autonomous weapons could lawfully be used do not legitimize the weapons, since they would likely be used more widely [7].
- Deployment of LAWS in typical conventional war scenarios is expected to result in significant savings of own combatant lives and also minimize collateral damage to civilians.
Other Humanitarian Benefits of Deploying LAWS
In its paper on the subject presented to the UN CCW/ GGE on LAWS in 2018, the US has endorsed the above position, and also given a comprehensive analysis on how deployment of AI in general and LAWS in particular can result in several humanitarian benefits during war [8].
Non-Feasibility of a Pre-Emptive Ban
To begin with, a coalition of advocacy groups called the International Committee for Robot Arms Control worked to promote an international convention prohibiting the use of LAWS. The call for an international ban gained greater prominence when, in November 2012, HRW issued a report calling for a sweeping multilateral treaty that would ban outright the development, production, sale, deployment or use of LAWS. Many other organisations and groups, as well as some states, have since joined the demand for an outright ban on the development of LAWS.
On the other side of the debate are those who hold the view that, even if justified, the implementation of such a ban may not be feasible, primarily for the following reasons:-
- Autonomous technologies will be implemented “incrementally” into military weapon systems on their march towards “full” autonomy, making it difficult to assess when the ban threshold is crossed.
- Dual-use technologies will in any case continue to be developed for civilian applications.
- It would be difficult to get the high contracting parties at the UN to agree to sign such a convention, since there is no way to stop non-signatories, as well as unprincipled signatories, from marching ahead with developing the requisite technologies despite the ban being in place.
In order to ensure a cautionary and controlled approach towards development of LAWS, nations have the option of putting into effect either a prohibitory or a regulatory convention. These are briefly discussed below.
Prohibitory Ban
On the issue of the workability or otherwise of a prohibitory ban, well-reasoned arguments on the inadvisability of adopting such a course are set out in a research paper by Anderson and Waxman [9]. The following additional remarks need consideration [10]:-
- There appears to be no objection from the ban proponents to autonomy per se, as long as there is a “man-in-the-loop”.
- A “decide” function is implicit in the conjoint “select and engage” functions, the so-called critical functions. If a LAWS selects a target autonomously, takes human approval for engagement, and then engages the target, likewise through an autonomous process, the “human-in-the-loop” criterion would clearly be satisfied.
- What needs to be noted here is that it is for implementation of the “select” function (“identify” being implicit in “select”) that sophisticated AI technologies are expected to be utilized. AI-facilitated technologies may also be utilized to improve the “navigate”, “track”, “engage” and “assess” functions in the targeting “kill chain”. In contrast, the “human approval based decide” function, which separates the “select” and “engage” stages in a man-in-the-loop LAWS, is trivial in terms of technology (see the sketch following this list).
- The above implies that a ban convention signatory can go ahead and develop an acceptable “man-in-the-loop” LAWS, with all kill-chain functions (except the “decision to engage” function) as sophisticated as necessary for a fully autonomous weapon system. Thereafter, the transition from this to a fully autonomous system would be just a small step. This also implies that a ban on development of technology is not likely to be effective.
- Another noteworthy aspect is as follows: all other weapons banned vide existing conventions (chemical and biological weapons, cluster munitions, mines, etc), if used in a conflict, can easily be detected from their physical effects. On the other hand, even if extensive weapon reviews of a man-in-the-loop LAWS have taken place at the development stage, whether or not it is functioning in a fully autonomous mode would never be evident through external observation! In other words, the transition of a weapon from a man-in-the-loop to a man-out-of-the-loop system would be almost impossible to verify.
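The minimal Python sketch below illustrates the argument above. It is a hypothetical construction, not drawn from any actual weapon-control software: the function names, data and logic are assumptions made purely for illustration. The point it makes is that the AI-intensive effort lies in the “select” function, while the man-in-the-loop “decide” gate, and its removal, amounts to a single conditional.

```python
# Hypothetical sketch of a targeting "kill chain"; all names, data and
# logic are illustrative assumptions, not real weapon-control software.

def select_targets(sensor_data, group_spec):
    # Placeholder for the AI-intensive part of the kill chain:
    # perception, classification and prioritisation of candidates.
    return [t for t in sensor_data if t["class"] == group_spec]

def request_human_approval(target):
    # The entire "decide" gate: a single yes/no check, trivially
    # simple compared to the AI behind select_targets().
    return input(f"Engage {target['id']}? [y/n] ") == "y"

def engage(target):
    print(f"Engaging {target['id']}")

def engagement_loop(sensor_data, group_spec, human_in_loop=True):
    for target in select_targets(sensor_data, group_spec):
        # Setting human_in_loop=False converts a compliant
        # man-in-the-loop system into a fully autonomous one: a
        # one-line change invisible to any external observer.
        if human_in_loop and not request_human_approval(target):
            continue
        engage(target)

engagement_loop(
    sensor_data=[{"id": "tank_1", "class": "enemy_tank"},
                 {"id": "truck_1", "class": "civilian_vehicle"}],
    group_spec="enemy_tank",
)
```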
Regulatory Convention
On the one hand, for the reasons discussed above, there does not appear to be much hope of putting a prohibitory ban in place; even if achieved, it may be meaningless in practical terms. On the other, a regulatory convention which restricts the deployment of LAWS to certain well-defined scenarios (such as the ones discussed in this series) while prohibiting their use in another set of scenarios, and which also puts review mechanisms in place, may succeed in achieving a high degree of consensus amongst the high contracting parties at the UN.
Having stated that, it also merits mention here that reputed militaries operate under the ambit of well-structured rules of engagement, and deploy weapon systems only in environments for which they are designed. A regulatory convention, therefore, may well serve the purpose of satisfying the concerns of human rights advocacy groups, but is not likely to give any added advantage in the field, as military rules of engagement are formulated to satisfy these very concerns.
Ban versus Regulation: The Debate Goes On
A total of 97 countries have so far taken a position on LAWS in a multilateral forum, 85 of which are party to the UN Convention on Certain Conventional Weapons (CCW) on LAWS. Countries which are investing heavily in the development of LAWS include the US, China, Russia, Israel, the UK and South Korea, while Australia, Turkey and some others are also in the fray.
As discussed, banning LAWS implies prohibiting weapon systems that lack MHC. However, there is no clear definition of, or agreement on, what constitutes MHC. In this context, there appears to be widespread agreement that MHC is where states should focus their collective efforts at consensus. Notwithstanding the ambiguity in the key idea of MHC, 30 countries have so far called for a ban on LAWS. China, interestingly, has called for a ban on their use, but not on their development or production! Some countries, most notably the US and Russia, have firmly rejected proposals to negotiate a new CCW protocol or a standalone international treaty.
India has also taken some initiatives towards the development of autonomous weapons. It participated in every CCW meeting during 2014-19 and chaired these meetings in 2017-18, although its public position on banning LAWS has been mostly ambiguous. However, Defence Minister Rajnath Singh stated in Sep 2019 that “the final attack decisions should be made by humans in the military, not by artificial intelligence” [11].
Conclusion
This series has attempted to analyse the ongoing debate on the banning of LAWS by presenting contrarian world views, and analysing them with special focus on the military perspective. Several noteworthy conclusions were arrived at, as follows: LAWS are a class apart from other contentious weapon systems, and considerations for banning/ regulating them are fundamentally different from the rationale underlying existing ban conventions (weapons being ‘repugnant’, ‘causing excessive injury’ or ‘indiscriminate’); LAWS do not seem to violate IHL in conventional warfare settings, but their use in fourth generation warfare (4GW) settings may be unacceptable; autonomy is a continuum, and attempts to classify weapon systems into discrete classes may not be meaningful; the unpredictability inherent in self-learning systems does not necessarily imply uncontrollable behaviour; and, since LAWS are more intelligent than dumber weapons, their employment is expected to reduce collateral damage and save combatant lives as well.
It was also observed that it is not the composite “select and engage” function, but the implicit “decision to engage” function interposed between the “select” and “engage” functions, which needs to be focused upon. Since autonomy in individual non-critical functions would continue to be improved upon, and the “decision to engage” function is trivial from the implementation perspective, a ban on the development of LAWS is not likely to yield any useful result. Hence, a binding regulation is probably the right goal to aim for. Towards this end, there is broad agreement that MHC over the “decision to engage” function must be ensured. The real challenge lies in arriving at a common understanding of MHC, with human rights groups insisting that the decision to engage “every single target” must be taken by a human, while the opposing view is that assigning a “group of targets” to LAWS may be permitted.
Consensus at the UN CCW on LAWS is not likely to be achieved anytime in the near future. Since AI and robotics technologies are widely believed to have the potential to revolutionise warfare, India should consider putting in serious efforts towards harnessing these exciting new technologies for enhancing its military potential and comprehensive national power.
Notwithstanding the world-wide concern over the development of LAWS from legal and ethical points of view, it is increasingly clear that, no matter what conventions are adopted by the UN, R&D efforts by major world powers are likely to proceed unhindered. Given its own security environment, India needs to come to grips with the situation unfolding on the global conflict landscape and invest in the development of LAWS on a war footing, or else it would be vulnerable in any future conflict. India also needs to internally brainstorm the various facets of the debate on LAWS, and take a more proactive and unambiguous stance in the ongoing discussions at the UN and other international forums.
References
(1) Chairperson of the Informal Meeting of Experts, Report (Advanced Version) of the 2016 Informal Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), Geneva, Dec 2016, p. 7, Accessed 03 Oct 2020.
(2) International Human Rights Clinic, Killer Robots and the Concept of Meaningful Human Control, International Human Rights Clinic (HRW), Harvard Law School, Memorandum to CCW Delegates, April 2016, pp. 1-2, Accessed 03 Oct 2020.
(3) Views of the International Committee of the Red Cross (ICRC) on Autonomous Weapon Systems, CCW Meeting of Experts on LAWS, Apr 2016, p. 3, Accessed 03 Oct 2020.
(4) Richard Moyes, Key Elements of Meaningful Human Control, Article36, Background Paper for the CCW Meeting of Experts on LAWS, Apr 2016, pp. 1-3, Accessed 03 Oct 2020.
(5) Michael W Meier, US Delegation Opening Statement, UN CCW Informal Meeting on LAWS, Geneva, 11 Apr 2016, p. 2, Accessed 03 Oct 2020.
(6) Privacy, Security and Autonomous Machines, Panel Discussion, 31 Oct 2016, Carnegie Endowment for International Peace, C-Span, Accessed 02 Oct 2020.
(7) International Human Rights Clinic, Making the Case: The Dangers of Killer Robots and the Need for a Pre-emptive Ban, Human Rights Watch, Harvard Law School, Dec 2016, ISBN: 978-1-6231-34310, pp. 7, 9, Accessed 03 Oct 2020.
(8) Humanitarian Benefits of Emerging Technologies in the area of Lethal Autonomous Weapon Systems, US Statement to UN CCW/ GGE on LAWS, 28 Mar 2018, Geneva, Accessed 02 Oct 2020.
(9) Kenneth Anderson and Matthew C Waxman, Law and Ethics for Autonomous Weapon Systems, American University Washington College of Law Research Paper No. 2013-11, Stanford University, The Hoover Institution, Apr 2013, pp. 20-21, Accessed 03 Oct 2020.
(10) Lt Gen (Dr) R S Panwar, Artificial Intelligence in Military Operations: A Raging Debate and Way Forward for the Indian Armed Forces, USI Monograph, No 2, 2018, pp. 27-29, Accessed 03 Oct 2020.
(11) Brian Stauffer, Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control, 10 Aug 2020, Human Rights Watch, Accessed 02 Oct 2020.