LETHAL AUTONOMOUS WEAPON SYSTEMS: SLAVES NOT MASTERS!

"Killer Robots" and International Humanitarian Law
Sections
Introduction
Views and Counterviews
LAWS: Weapon Systems with a Difference
LAWS: Terminology and Working Definition
IHL and the Critical “Select and Engage” Functions
Conclusion
References

Introduction

Artificial Intelligence (AI) has become a field of enormous importance, with promising potential for defence applications. AI technologies could be utilised for aiding military decision-making, minimising human casualties and enhancing the combat effectiveness of forces, in the process dramatically transforming, if not revolutionising, the nature of warfare. This is especially true in a highly dynamic battlespace where data overload is often encountered, decision windows are short, and timely and effective decisions are an imperative.

The presence of AI-powered robotic systems is progressively being felt on the modern battlefield. Increasing levels of autonomy are being seen in weapon systems, leading to capabilities for carrying out search, detect, evaluate, track, engage and kill-assessment functions without human intervention. Such systems, some already fielded and others under development, are widely referred to as Lethal Autonomous Weapon Systems (LAWS). Examples include autonomous drones and drone swarms operating in the land, sea and aerial domains, fire-and-forget munitions, loitering torpedoes, and intelligent anti-submarine and anti-tank mines. In view of these developments, many now consider AI and robotics technologies as having the potential to trigger a new Revolution in Military Affairs (RMA), especially as LAWS grow gradually more sophisticated.

In reaction to these developments, a debate has been raging globally for almost seven years over the ethical, moral and legal aspects of deploying fully autonomous, AI-powered LAWS in future wars, sensationally dubbed “killer robots” by human rights advocacy groups. The Campaign to Stop Killer Robots was initiated in April 2013 under the aegis of Human Rights Watch (HRW), with the aim of pre-emptively banning fully autonomous lethal weapons, defined as autonomous weapon systems without meaningful human control (MHC). The Campaign advocates the view that retaining human control over the use of force is a moral imperative, essential both to promote compliance with international law and to ensure accountability.

Triggered by the Campaign, nations have been discussing this issue for the last several years at the United Nations Office for Disarmament Affairs (UNODA), under the framework of the Convention on Certain Conventional Weapons (CCW). A breakthrough came at the end of 2016, when countries taking part in the Convention’s five-yearly Review Conference agreed to formalise their deliberations on LAWS. The Conference established a Group of Governmental Experts (GGE), initially chaired by Ambassador Amandeep Gill of India. As of this writing, the GGE has held five sittings since 2017, with more scheduled for later this year and for 2021. Close to a hundred countries are participating in these meetings, along with representatives from UN agencies, the International Committee of the Red Cross, the Campaign to Stop Killer Robots and a large number of other NGOs. In addition to the deliberations at the UN, discussions are underway at several other forums world-wide, mostly at the behest of pro-ban advocacy groups. The views and counterviews being expressed on this emotive issue are multi-faceted and complex, which is why progress towards consensus, including at the UN, has been very slow.

This three-article series endeavours to highlight issues which are at the core of the ongoing debate and have come up, in some form or the other, in international fora. Special emphasis is laid on factoring in the military context and discussing the employment of LAWS against the backdrop of practical conflict scenarios, rather than advancing broad-based arguments in the abstract; this, it is felt, would be key to achieving convergence amongst opposing views.

This first article in the series begins with a brief tour of the relevant literature, which reflects divergent views on this deeply debated issue. Thereafter, the unique characteristics of LAWS, which set them apart from other weapon systems, are highlighted. Finally, it gives an insight into why these features of LAWS appear to violate some of the principles enshrined in International Humanitarian Law (IHL). The two follow-up articles will examine the employment of LAWS in different conflict scenarios, discuss the issues of autonomy and unpredictability, analyse the desirable feature of MHC, and weigh the pros and cons of a pre-emptive ban vis-à-vis a binding regulation on LAWS.

Views and Counterviews

Ever since the launch of the Campaign to Stop Killer Robots, a whole body of literature has emerged expressing a wide spectrum of opinion on LAWS. One of the first documents to initiate the debate was “Losing Humanity: The Case Against Killer Robots”, issued by HRW and the International Human Rights Clinic (IHRC) in Nov 2012 [1]. It built its arguments on the IHL principles of Distinction, Proportionality, Military Necessity and the Martens Clause, as well as on the problem of Accountability. HRW/IHRC followed this up with several other documents on the subject: “Shaking the Foundations: The Human Rights Implications of Killer Robots” (2014), which focusses on the human rights angle; “Mind the Gap: The Lack of Accountability for Killer Robots” (2015), which dwells on the accountability aspect of LAWS; and “Heed the Call: A Moral and Legal Imperative to Ban Killer Robots” (2018), which analyses the Martens Clause at length and advocates a pre-emptive ban on LAWS [2, 3, 4].

Ronald Arkin, one of the prominent authors arguing in favour of LAWS, makes a case for the development of Ethical Robots and offers counterviews to the HRW position in his piece “Lethal Autonomous Systems and the Plight of the Non-Combatant” [5] and a follow-up article, “Counterpoint” [6], although he proposes proceeding with caution in their development. Similarly, Michael N Schmitt argues strongly in support of LAWS in his incisive article “Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics”, which offers an issue-by-issue rebuttal of the arguments set out in IHRC’s “Losing Humanity” [7].

Other authors support the HRW case. Noel E Sharkey, in his article “The Evitability of Autonomous Robot Warfare”, disagrees with Arkin’s views, essentially stating that Artificial Intelligence/Autonomous Systems (AI/AS) would probably never be able to live up to the fantasy of creating Ethical Robots or meet the functional requirements of Distinction and Proportionality [8]. In another piece, he suggests a five-level architecture for human supervisory control [9], a useful reference for taking forward the debate on MHC (the subject of the final article in this series), rendered schematically below.
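As an aid to the later discussion on MHC, Sharkey’s five levels can be rendered as a simple ordered scale. The sketch below is an illustrative paraphrase of the scheme in [9]; the member names and comments are summary wording for this sketch, not Sharkey’s exact text:

```python
from enum import IntEnum

class SupervisoryControl(IntEnum):
    """Five levels of human supervisory control over weapon targeting,
    paraphrased from Sharkey [9]; names are illustrative, not his wording."""
    HUMAN_DELIBERATES = 1        # human deliberates about a target before initiating any attack
    HUMAN_CHOOSES_FROM_LIST = 2  # program suggests alternative targets; human chooses which to attack
    HUMAN_MUST_APPROVE = 3       # program selects a target; human must approve before the attack
    HUMAN_TIMED_VETO = 4         # program selects a target; human has restricted time to veto
    NO_HUMAN_INVOLVEMENT = 5     # program selects targets and attacks without human involvement

# Lower numbers imply more meaningful human control; the debate on LAWS
# centres on systems operating at level 5 (and, arguably, level 4).
assert SupervisoryControl.HUMAN_MUST_APPROVE < SupervisoryControl.NO_HUMAN_INVOLVEMENT
```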

Regarding the feasibility of a ban on LAWS, Kenneth Anderson and Matthew C Waxman argue that incremental development and deployment of autonomous weapons is inevitable, and that any attempt at a global ban would be ineffective in stopping their use by the very states whose acquisition of such weaponry would be most dangerous. They also assert that such weapon systems are not inherently unlawful or unethical [10]. Peter Asaro disagrees with this view, going further to state that pursuing the goal of Ethical LAWS is likely to degrade our conceptions and standards of ethical conduct, and to distract us from technologically enhancing human moral reasoning, by chasing an improbable technology that threatens to undermine human rights at a fundamental level [11].

Not surprisingly, there is little literature available on AI/AS technologies which would facilitate the development of “fully” autonomous weapon systems, not least because most of the technological breakthroughs necessary for their realisation may not be forthcoming in the foreseeable future. Opinions, including those of leading AI/robotics experts, vary widely. Ban proponents, of course, base their arguments on the premise that qualities such as “empathy” and “judgement” can never be simulated in machines. On the other hand, there are those who are convinced that Kurzweil’s “Singularity” will be achieved within this century, perhaps sooner rather than later [12, 13]. There are also futurists who opine that, at some point in their development, LAWS will evolve sufficiently to possess “consciousness”!

A number of other authors have offered various perspectives on the complex issue of LAWS; some of these are referenced in context later in this article.

The stand of individual governments on the issue of banning LAWS may be gleaned from their statements at UNODA CCW meetings on LAWS over the last four years. In summary, while a number of countries have expressed pro-ban views, none of the major players (US, Russia, UK, China, Israel, etc) appears to be presently leaning towards supporting such a ban and, going by their currently stated positions and actions, none is likely to do so in the future either [14]. The Campaign appears to be drawing its greatest impetus from human rights groups, with HRW in the lead; from renowned scientists and leading figures such as Stephen Hawking, Elon Musk, Mustafa Suleyman and Steve Wozniak; and from players in the AI industry.

LAWS: Weapon Systems with a Difference

Existing UN conventions banning weapons include the following: the Biological Weapons Convention (1975); the Convention on Certain Conventional Weapons (CCW, 1983), with individual protocols for mines, booby traps, incendiary weapons, blinding laser weapons and explosive remnants of war; the Chemical Weapons Convention (1997); the Anti-Personnel Mine Ban Convention (1997); and the Convention on Cluster Munitions (2010). The rationale for banning these weapons is that they cause excessive injury, are indiscriminate, or are repugnant and “against the principles of humanity and the dictates of public conscience”.

LAWS, as (loosely) defined in the ongoing discussions, are at a fundamental level of a different flavour, for the following reasons [15]:-

  • The weapon itself (eg, rifle, missile, artillery gun, tank) is not the subject of debate; in fact, it is not even specified! It is the nature of the weapon control system, in particular the algorithmic intelligence which lends autonomy to the weapon system, that gives rise to multiple concerns and triggers the debate on banning LAWS.
  • Since autonomy is at the heart of the discussion on LAWS, understanding the employment of LAWS requires an in-depth understanding of the complex interplay of machines and humans during the targeting process in military operations, which is not a simplistic “aim and shoot” affair as many tend to believe.
  • Unlike the “indiscriminateness” associated with chemical or biological weapons, whose effects cannot be adequately controlled once unleashed and which therefore put civilians at risk, in the case of LAWS the argument that the IHL principle of “distinction” is flouted (please see the section on IHL which follows) emanates from the claim that the controlling algorithm might not be intelligent enough to distinguish adequately between combatants on the one hand, and civilians, the wounded, or combatants hors de combat on the other. This, of course, is based on the premise that LAWS cannot be designed to target a single clearly identified military target or a group of such targets, a presumption that may not be correct.
  • Existing weapons may have a degree of inaccuracy and may not be perfectly reliable (as no system can be), but they are not characterized by unpredictability. The general perception of LAWS, on the other hand, is that they could be highly unpredictable, and hence not under “human” control.
  • There is a fear in some quarters that LAWS, if developed, would one day evolve to a stage where they take over the human race. But much before that, even in their most basic avatar, LAWS are visualised as being in competition with humans. Probably stemming from the unpredictability and intelligence associated with them, this class of weapon system is visualised as having a mind of its own, including the power of “life or death” over humans. This has led to the coining of the “Killer Robots” slogan, and to the view that deployment of LAWS impinges on human dignity and violates the Martens Clause (please see the section on IHL which follows). In other words, an agency, and an amoral one at that, is implicitly associated with the idea of LAWS!
  • In an apparently contradictory stance, the implicit (and factually correct) presumption that there is no moral agent (indeed, no agent of any kind) present within LAWS leads to Accountability issues.

Amongst weapons and weapon systems, therefore, LAWS may be said to be a class apart. As a result, for the GGE instituted by the CCW, while an understanding of IHL and international human rights law is essential, of equal import is a thorough understanding of complex, evolving AI/ AS technologies, and an equally good grasp of military procedures, especially the targeting process, against the backdrop of a very wide spectrum of conflict.

LAWS: Terminology and Working Definition

Terminology

For the weapon systems under discussion, three terms currently in use are of relevance: Autonomous Weapon Systems (AWS), Lethal Autonomous Weapon Systems (LAWS) and fully (lethal) autonomous weapon systems. Some other terms also appear in the literature, such as semi-autonomous weapon systems, supervised autonomy, etc, meant to describe the level of autonomy associated with such weapon systems. The US, for instance, categorises such weapon systems into two classes: “autonomous weapon systems”, ie, those which, once activated, can select and engage targets without further intervention by a human operator; and “semi-autonomous weapon systems”, ie, those which, once activated, are intended to engage only individual targets or specific target groups that have been selected by a human operator [16].

It is felt that, since autonomy is a continuum as well as multi-faceted (please see the subsequent sections on autonomy), it may not be possible to rigorously define “full autonomy”. Hence, it may be best to restrict usage to the terms AWS and LAWS only, with the latter defining that sub-class of AWS which could result in human fatalities (an anti-missile weapon system, for instance, would classify as an AWS but not come under the category of LAWS). The type and degree of autonomy built into a weapon system is best described for each system separately, function by function, rather than through broad generic labels.
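One way to make such per-system description concrete is to record the degree of autonomy separately for each function of the engagement chain. The sketch below is a minimal illustration of this idea, not a standardised taxonomy: the six functions follow the activate/navigate/identify/select/engage/assess decomposition used later in this article, while the three-step autonomy scale is an assumption of the sketch (autonomy being, in reality, a continuum):

```python
from dataclasses import dataclass
from enum import Enum

class Autonomy(Enum):
    """Coarse three-step scale, assumed purely for illustration."""
    HUMAN = "human-operated"
    SUPERVISED = "human-supervised"
    AUTONOMOUS = "no human intervention"

@dataclass
class WeaponSystemProfile:
    """Per-function autonomy profile of a notional weapon system."""
    name: str
    lethal: bool        # could its employment result in human fatalities?
    activate: Autonomy
    navigate: Autonomy
    identify: Autonomy
    select: Autonomy
    engage: Autonomy
    assess: Autonomy

def is_laws(ws: WeaponSystemProfile) -> bool:
    """In the usage proposed above, a LAWS is that sub-class of AWS which is
    lethal and executes the select-and-engage sequence autonomously."""
    return (ws.lethal
            and ws.select is Autonomy.AUTONOMOUS
            and ws.engage is Autonomy.AUTONOMOUS)

# An anti-missile system may be fully autonomous in its critical functions
# (an AWS), yet not a LAWS, since it engages missiles rather than people.
point_defence = WeaponSystemProfile(
    name="notional anti-missile system", lethal=False,
    activate=Autonomy.HUMAN, navigate=Autonomy.AUTONOMOUS,
    identify=Autonomy.AUTONOMOUS, select=Autonomy.AUTONOMOUS,
    engage=Autonomy.AUTONOMOUS, assess=Autonomy.SUPERVISED)
assert not is_laws(point_defence)
```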

Definition of LAWS: Different Views

The following definitions of LAWS reflect two different views in the ongoing debate:-

  • The International Committee of the Red Cross (ICRC) has defined LAWS as: “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (ie, search for or detect, identify, track, select) and attack (ie, use force against, neutralize, damage or destroy) targets without human intervention.” As per ICRC, the advantage of this broad definition, which encompasses some existing weapon systems, is that it enables real-world consideration of weapons technology to assess what may make certain existing weapon systems acceptable, legally and ethically, and which emerging technology developments may raise concerns under international humanitarian law (IHL) or violate the principles of humanity and the dictates of the public conscience [17].
  • Although the ICRC classifies its definition as “broad”, an even broader definition proposed by Switzerland is also noteworthy. Switzerland describes LAWS simply as “weapons systems that are capable of carrying out tasks governed by IHL in partial or full replacement of a human in the use of force, notably in the targeting cycle” [18]. Switzerland’s contention is that such a working definition is inclusive, and allows for a debate without prejudice to the question of the appropriate regulatory response. In its view, the working definition is in no way conceived to single out only those systems which could be seen as legally objectionable: at one end of the spectrum of systems falling within it, countries may find some subcategories entirely unproblematic, while at the other end, other subcategories may be found unacceptable.

The US supports the latter view, stating that a working definition should not be drafted with a view toward describing weapons that should be banned. It also states that it is unnecessary for the GGE to adopt a specific working definition of LAWS, since the absence of such a definition in no way impedes the GGE’s work in understanding the potential issues posed by LAWS [19].

It is evident that, of the two definitions, the one proposed by Switzerland is more inclusive. Its logic of not wanting to single out systems which could be seen as objectionable is also appealing. On the other hand, notwithstanding its claim of being broad in nature, the ICRC definition, which happens to be the de facto working definition in UNODA discussions, does appear to be aimed at banning the problematic LAWS. It is also felt that Switzerland’s definition more accurately represents what can literally be understood from the term “LAWS”.

IHL and the Critical “Select and Engage” Functions

Simply stated, the opposition to the development of fully autonomous weapon systems stems from the intuitively appealing rationale that machines should not have the power to decide to kill humans. While there may be several facets of autonomy in a weapon system, incorporated independently of each other, eg, in the activate, navigate, identify, select, engage and assess functions, it is in the “select and engage” dual-function sequence that the “decision to kill” is implicit. If “select” (picking a target, possibly from amongst many) and “engage” (releasing the weapon and guiding it to the target) were implemented as separate autonomous functions, but the decision to engage the selected target were taken through human intervention, then there would be no opposition to the employment of such systems. Similarly, there is no objection to autonomous implementation of the activate, navigate, identify or assess functions. Thus, it is the composite “select and engage” function, when executed autonomously, which is at the centre of the ongoing debate.
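The point can be made concrete with a control-flow sketch. The code below is purely conceptual (all types and function bodies are hypothetical stand-ins): it shows that every function in the chain may individually be automated, and that it is the single human gate between selection and engagement whose removal produces the contested composite function:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Track:
    track_id: str
    classification: str   # output of the (autonomous) identify function

def identify(tracks: List[Track]) -> List[Track]:
    """Autonomous identification: uncontested in the debate."""
    return [t for t in tracks if t.classification == "military objective"]

def select(candidates: List[Track]) -> Optional[Track]:
    """Autonomous selection of one target, possibly from amongst many."""
    return candidates[0] if candidates else None

def engage(target: Track) -> None:
    """Autonomous weapon release and guidance: uncontested once the
    decision to engage has been taken."""
    print(f"engaging {target.track_id}")

def engagement_cycle(tracks: List[Track],
                     human_authorises: Callable[[Track], bool]) -> None:
    """Conceptual targeting loop. 'select' and 'engage' are each
    autonomous, but the decision to engage the selected target passes
    through a human. Deleting this one gate yields the autonomous
    composite select-and-engage function at the centre of the debate."""
    target = select(identify(tracks))
    if target is not None and human_authorises(target):   # human intervention
        engage(target)

# Usage with a stand-in operator who approves every engagement:
engagement_cycle([Track("T1", "military objective")], lambda t: True)
```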

Principles of IHL relevant to LAWS

Ban advocates argue that the critical “select and engage” functions are in violation of IHL principles of Distinction, Proportionality and the Martens Clause. These principles are briefly explained as under [20, 21, 22]:-

  • Principle of Distinction. The basic rule of distinction requires that the parties to an armed conflict distinguish at all times between civilian persons and civilian objects on the one hand, and combatants and military objectives on the other. A party to an armed conflict may direct an attack only against combatants or military objectives. Neither the civilian population nor individual civilians may be attacked unless they directly participate in hostilities.
  • Principle of Proportionality. Attacks directed against a combatant or a military objective must be in accordance with the proportionality rule. This means that it is prohibited to launch an attack that is likely to cause incidental loss of civilian life, injury to civilians, and/or damage to civilian objects that would be excessive in relation to the concrete and direct military advantage anticipated. In other words, a military objective may be attacked only after an assessment concludes that expected civilian losses would not be excessive relative to the military advantage foreseen (a deliberately naive schematic of this weighing is sketched after this list).
  • Martens Clause. It has been laid down vide the Geneva Conventions (Article 1(2) of Additional Protocol I of 1977) that for cases not covered under the Protocol or by other international agreements, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of public conscience.
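The structure of the proportionality rule can be caricatured in a few lines of code, and the caricature itself is instructive. The sketch below is deliberately naive and is not a workable implementation: reducing “harm”, “advantage” and “excessive” to comparable numbers is precisely the contextual value judgement which, as argued below, ban proponents hold to be beyond machines:

```python
def attack_permitted(expected_civilian_harm: float,
                     anticipated_military_advantage: float,
                     excessiveness_factor: float) -> bool:
    """Naive schematic of the proportionality rule: an attack is
    prohibited if expected incidental civilian harm would be excessive
    in relation to the concrete and direct military advantage
    anticipated. Every input here hides a human value judgement: there
    is no agreed scale on which harm and advantage can be measured,
    nor an agreed numerical threshold for 'excessive'."""
    return expected_civilian_harm <= excessiveness_factor * anticipated_military_advantage
```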

Ban Advocacy Groups: Employment of LAWS Violates IHL Principles!

The proponents of the ban argue that employment of LAWS would be a violation of the above IHL principles, for the following reasons:-

  • Principle of Distinction. Ban proponents declare that machines will likely never be able to reliably distinguish between combatants (the intended targets) on the one hand and civilians, wounded combatants and combatants hors de combat on the other. Hence, the “select to kill” function should never be delegated to them. Even in the case of combatants, it may be necessary to exercise empathy in certain scenarios, a characteristic which machines, having no moral agency, would never be able to possess.
  • Principle of Proportionality. It is contended that adhering to the principle of proportionality requires value judgement taking into account a host of factors, and machines can never evolve to this level of human prowess. This is another reason put forth by ban proponents to advocate that “select to kill” decisions be never taken by LAWS.
  • Martens Clause. Ban advocacy groups raise the ethical/ philosophical issue of whether machines should ever be vested with the decision power of “life and death” with respect to humans (which is implicit in the “select” function) which, as per them, would be “against the principles of humanity and the dictates of public conscience”.

It would be reasonable to accept the contention that, at least for the next couple of decades, LAWS will not evolve to the level of humans with respect to qualities such as moral and ethical behaviour, empathy and value judgement. Ongoing discussions on the need for prohibitory/ regulatory conventions should, therefore, be carried out under this assumption.

However, making such an assumption does not automatically justify a ban on LAWS. This is because the spectrum of military conflict offers many scenarios where the principle of distinction is not applicable (this is not the same thing as saying that civilians are not present in the combat zone). Furthermore, proportionality judgements are generally made by a commander at a higher level of military hierarchy, while LAWS would be tasked with carrying out individual attacks on single/ group targets at the execution level.

It is felt that much of the difference in opposing views on the issue of LAWS violating IHL can be resolved if discussions are carried out against the backdrop of specific military conflict scenarios, of which there is a very wide spectrum in 21st Century warfare.

Conclusion

In this first article of the series, which analyses the ongoing world-wide debate on the development and employment of LAWS, an insight has been provided into the basic points of contention. The article has brought out how LAWS differ from other weapon systems in unique ways, and has highlighted the arguments put forth by advocacy groups rooting for a ban on LAWS at the UN and other fora. The manner in which these arguments draw strength from certain principles of IHL has also been dwelt upon. Finally, it has been suggested that convergence of views would possibly be reached faster if discussions are carried out against the backdrop of specific military scenarios, rather than in general terms as at present.

The next article in this series will analyse whether or not LAWS violate IHL in different operational settings. It will also discuss various facets of Autonomy, as well as the intriguing feature of Unpredictability which characterises LAWS.

References

(1)     International Human Rights Clinic, Losing Humanity: The Case Against Killer Robots, Human Rights Watch, Harvard Law School, Nov 2012, ISBN: 1-56432-964-X, Accessed 25 Sep 2020, https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf.

(2)     International Human Rights Clinic, Shaking the Foundations: The Human Rights Implications of Killer Robots, HRW, Harvard Law School, 2014, ISBN: 978-1-62313-1333, Accessed 25 Sep 2020, https://www.hrw.org/sites/default/files/reports/arms0514_ForUpload_0.pdf.

(3)     International Human Rights Clinic, Mind the Gap: The Lack of Accountability for Killer Robots, Human Rights Watch, Harvard Law School, 2015, ISBN: 978-1-6231-32408, Accessed 25 Sep 2020, https://www.hrw.org/sites/default/files/reports/arms0415_ForUpload_0.pdf.

(4)     International Human Rights Clinic, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots, Human Rights Watch, Harvard Law School, 2018, ISBN: 978-1-6231-36468, Accessed 25 Sep 2020, https://www.hrw.org/sites/default/files/report_pdf/arms0818_web.pdf.

(5)     Ronald Arkin, Lethal Autonomous Systems and the Plight of the Non-Combatant, AISB Quarterly, Jul 2013, pp. 1-9, Accessed 25 Sep 2020, https://www.unog.ch/80256EDD006B8954/%28httpAssets%29/54B1B7A616EA1D10C1257CCC00478A59/%24file/Article_Arkin_LAWS.pdf.

(6)     Ronald Arkin, The Case For Banning Killer Robots: Counterpoint, Communications of the ACM, December 2015, Accessed 25 Sep 2020, https://cacm.acm.org/magazines/2015/12/194632-the-case-for-banning-killer-robots/fulltext.

(7)     Michael N Schmitt, Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics, Harvard National Security Journal Features, 2013, pp. 1-37, Accessed 25 Sep 2020, https://harvardnsj.org/wp-content/uploads/sites/13/2013/02/Schmitt-Autonomous-Weapon-Systems-and-IHL-Final.pdf.

(8)     Noel E Sharkey, The Evitability of Autonomous Robot Warfare, International Review of the Red Cross, Volume 94, Issue 886, June 2012, pp. 787-799, Accessed 25 Sep 2020, https://international-review.icrc.org/sites/default/files/irrc-886-sharkey.pdf.

(9)     Noel E Sharkey, Towards a Principle for the Human Supervisory Control of Robot Weapons, Politica & Società, Number 2, May-August 2014, pp. 11-12, Accessed 25 Sep 2020, https://www.unog.ch/80256EDD006B8954/(httpAssets)/2002471923EBF52AC1257CCC0047C791/$file/Article_Sharkey_PrincipleforHumanSupervisory.pdf.

(10)   Kenneth Anderson and Matthew C Waxman, Law and Ethics for Autonomous Weapon Systems, American University Washington College of Law Research Paper No. 2013-11, Stanford University, The Hoover Institution, Apr 2013, pp. 7, Accessed 25 Sep 2020, https://media.hoover.org/sites/default/files/documents/Anderson-Waxman_LawAndEthics_r2_FINAL.pdf.

(11)   Peter Asaro, On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision Making, International Review of the Red Cross, Volume 94, Issue 886, June 2012, pp 687-709, Accessed 25 Sep 2020, https://e-brief.icrc.org/wp-content/uploads/2016/09/22.-On-banning-autonomous-weapon-systems.pdf.

(12)   Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology, Penguin, Sep 2006, ISBN 978-0-14-303788-0, Accessed 25 Sep 2020, http://stargate.inf.elte.hu/~seci/fun/Kurzweil,%20Ray%20-%20Singularity%20Is%20Near,%20The%20%28hardback%20ed%29%20%5Bv1.3%5D.pdf.

(13)   The Singularity is Near, Wikipedia, Accessed 24 Sep 2020, https://en.wikipedia.org/wiki/The_Singularity_Is_Near.

(14)   Brian Stauffer, Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control, 10 Aug 2020, Human Rights Watch, Accessed 24 Sep 2020, https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.

(15)   Lt Gen (Dr) R S Panwar, Artificial Intelligence in Military Operations: A Raging Debate and Way Forward for the Indian Armed Forces, USI Monograph, No 2, 2018, pp. 5-7, Accessed 25 Sep 2020, http://www.vijbooks.com/BookDetails/1516/Artificial-Intelligence-in-Military-Operations–A-Raging-Debate-and-Way-Forward-for-the-Indian-Armed.

(16)   Characteristics of Lethal Autonomous Weapons Systems, US Paper in CCW GGE, Geneva, 10 Nov 2017, pp. 3, Accessed 23 Nov 2020, https://www.unog.ch/80256EDD006B8954/(httpAssets)/A4466587B0DABE6CC12581D400660157/$file/2017_GGEonLAWS_WP7_USA.pdf.

(17)   Views of the ICRC on Autonomous Weapon Systems, CCW Meeting of Experts on LAWS, Geneva, 11-15 Apr 2016, pp.1, Accessed 25 Sep 2020, https://www.icrc.org/en/document/views-icrc-autonomous-weapon-system.

(18)   Wollenmann Reto, A Purpose-Oriented Working Definition for Autonomous Weapons Systems, CCW Meeting of Experts on LAWS, Geneva, 11-15 Apr 2016, pp.1, Accessed 25 Sep 2020, https://www.unog.ch/80256EDD006B8954/(httpAssets)/A204A142AD3E3E29C1257F9B004FB74B/$file/2016.04.12+LAWS+Definitions_as+read.pdf.

(19)   Characteristics of Lethal Autonomous Weapons Systems, US Paper in CCW GGE…, pp. 1.

(20)   International Humanitarian Law: Answers to your Questions, International Committee of the Red Cross, Dec 2014, pp. 47, Accessed 23 Sep 2020, https://www.icrc.org/en/doc/assets/files/other/icrc-002-0703.pdf.

(21)  International Human Rights Clinic, Heed the Call: A Moral and Legal Imperative to Ban Killer Robots…, pp. 1. 

(22)   Additional Protocol I of 1977 to Geneva Conventions – Article 1(2), ICRC, Treaties, States Parties and Commentaries, Accessed 25 Sep 2020, https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/ART/470-750004?OpenDocument.
