Regulation of AI-Enabled Military Systems

A Risk-Based Approach - Part I
Sections
Introduction
The European Union (EU) Proposal for AI Regulation: A Risk-Based Approach
Risk-Based Approach to AI-Enabled Military Systems
Risk Hierarchy
Rationale for a Five-Level Risk Architecture
References

Introduction

It is now widely accepted that the risks posed by the use of Artificial Intelligence (AI) technologies for developing various applications and systems are significant, and must therefore be suitably addressed and mitigated. In 2019, the United Nations affirmed a set of guiding principles on the use of emerging technologies in the area of Lethal Autonomous Weapon Systems (LAWS) [1]. The United States Department of Defense adopted a set of Ethical Principles for AI in Feb 2020 [2]. In addition, China, the European Union (EU) and Russia are amongst the major state players who have come up with principles/ norms with respect to the development of AI technologies, though these may not specifically address military systems [3,4,5]. Notably, the EU has adopted a risk-based approach for the regulation of AI applications.

While principles are a key starting point for establishing policy and its subsequent implementation, their high level of abstraction dictates that they be followed up with a more granular mechanism which can guide implementation processes. Adopting a risk-based approach for the design, development and deployment of military systems promises to be an effective way of moving from risk-mitigation principles to policy and practice. This is because the risks posed by different types of military systems, and by different sub-systems within a military system, may vary widely, and applying a common set of risk-mitigation strategies across all systems is likely to be suboptimal: such strategies may be too lenient for very high-risk systems, overly stringent for low-risk systems, and may hamper the development of systems which could be beneficial for humankind. A risk-based approach has the potential to overcome these disadvantages.

This paper first gives an overview of the risk-based approach adopted by the EU for non-military systems. It then focuses its attention on AI-enabled military systems and compares principles which might govern the evolution of risk-based approaches to civilian vis-à-vis military systems. The main contribution of this work is to suggest a Risk Hierarchy, which attempts to sketch the contours of how a risk-based approach could be adopted for the regulation of military systems.

It merits mention here that the Risk Hierarchy presented in this paper owes its genesis, motivation, conceptual underpinnings and quality primarily to an ongoing comprehensive dialogue on AI in military systems being conducted under the aegis of the Centre for Humanitarian Dialogue (HD), Geneva. The dialogue has participation from former officials with military, diplomatic, intelligence, weapons design and legal backgrounds from the United States, China, Europe, Russia, India and Latin America.

The European Union (EU) Proposal for AI Regulation: A Risk-Based Approach

Objectives

In Apr 2021, the European Commission (EC) proposed a regulatory framework on AI with the following specific objectives: ensuring that AI systems placed on the EU market are safe and respect existing law on fundamental rights and EU values; facilitating investment and innovation in AI; enhancing AI governance and effective enforcement of existing law; and enabling the development of a single market for lawful and trustworthy AI applications. The proposal is presently making its way through the legislative process of the European Parliament (EP) for adoption as law.

The proposal was triggered by earlier calls within the EU for addressing the opacity, complexity, bias, unpredictability and autonomous behaviour of certain AI systems, in order to ensure their compatibility with fundamental rights and to facilitate the enforcement of legal rules [6]. It also followed up on a resolution adopted earlier by the EP recommending that the EC propose legislative action for harnessing the opportunities and benefits of AI, while at the same time ensuring the protection of ethical principles [7].

Risk Categories

The proposal adopts a risk-based approach, with four risk levels as indicated in the figure. These are briefly explained below [8] (an illustrative sketch follows the list):-

EU Model for Risk-Based Approach

  • The Unacceptable Risk category comprises AI systems which contravene EU values, for instance by violating fundamental rights. Examples could include systems which manipulate persons by using subliminal techniques beyond their consciousness, exploit vulnerabilities of children or persons with disabilities in a manner which might cause them psychological or physical harm, and so on.
  • High Risk AI systems are those which might pose severe risk to the health and safety or fundamental rights of persons. The classification of an AI system as high-risk is based on the intended purpose of the AI system, in addition to its function.
  • Transparency Obligations would be applicable for systems which interact with humans, are used to detect emotions or determine association with social categories based on biometric data, or generate or manipulate content (eg, ‘deep fakes’).
  • Systems which do not fall in any of these three categories are classified as Minimal or No Risk systems. The proposal creates a framework for the formulation of codes of conduct, which aims to encourage providers of such systems to voluntarily apply the mandatory requirements which are specified for high-risk AI systems.
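By way of illustration only, the sketch below encodes the four EU risk levels as a simple Python taxonomy. The level names follow the proposal, but the profile fields and the toy classification rules are assumptions introduced here for clarity; they do not reproduce the detailed legal criteria and annexed use-case lists of the Act.

```python
from enum import Enum
from dataclasses import dataclass

class EURiskLevel(Enum):
    """The four risk levels of the EC's proposed AI regulation (Apr 2021)."""
    UNACCEPTABLE = 1     # prohibited practices contravening EU values
    HIGH = 2             # permitted, subject to mandatory requirements
    TRANSPARENCY = 3     # permitted, subject to transparency obligations
    MINIMAL_OR_NO = 4    # permitted; voluntary codes of conduct encouraged

@dataclass
class AISystemProfile:
    """Hypothetical descriptors of an AI system; field names are illustrative only."""
    uses_subliminal_manipulation: bool = False
    exploits_vulnerable_groups: bool = False
    risks_health_safety_or_rights: bool = False
    interacts_with_humans: bool = False
    generates_deepfakes: bool = False

def classify(profile: AISystemProfile) -> EURiskLevel:
    """Toy mapping of a profile onto the four categories, checked in order of severity."""
    if profile.uses_subliminal_manipulation or profile.exploits_vulnerable_groups:
        return EURiskLevel.UNACCEPTABLE
    if profile.risks_health_safety_or_rights:
        return EURiskLevel.HIGH
    if profile.interacts_with_humans or profile.generates_deepfakes:
        return EURiskLevel.TRANSPARENCY
    return EURiskLevel.MINIMAL_OR_NO

# Example: a system that generates 'deep fakes' but poses no high risk
print(classify(AISystemProfile(generates_deepfakes=True)))  # EURiskLevel.TRANSPARENCY
```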

Risk-Based Approach to AI-Enabled Military Systems

The risk-based approach proposed by the EC is aimed at managing risks posed by non-military systems. This section first attempts to identify the underlying basis for the risks associated with AI technologies. Thereafter, it seeks to highlight how these might raise vastly different types of concerns in civilian and military systems. In order to do this, it is important to first define the spectrum of technologies which might be covered under the ambit of AI, a term which is arguably very nebulous in its usage.

Defining AI

The EU proposal defines an AI system to mean software that is developed to generate outputs such as content, predictions, recommendations or decisions, using one or more of the following techniques and approaches: machine learning techniques such as supervised, unsupervised and reinforcement learning; knowledge-based approaches such as logic programming and expert systems; and statistical approaches such as Bayesian estimation and optimization methods. In other words, in the EU proposal, AI-enabled systems may use a very wide spectrum of techniques [9].

Notwithstanding the wide-ranging nature of AI techniques indicated above, it may not be far off the mark to state that most of the risks associated today with AI-enabled systems stem essentially from machine-learning techniques, which have been predominantly responsible for the ongoing AI boom. These are discussed next.

Unique Characteristics of AI

The distinctive characteristics of machine learning based AI systems, which are at the root of their power as well as their risks, arise fundamentally from their ability to learn directly from data, and this learning might continue even after the systems are deployed. The way current neural-network based AI systems function also gives them a black-box character, wherein the process by which inputs are translated into outputs is not adequately known even to the developers. This is also referred to as the non-transparency or non-explainability of AI systems. Finally, neural networks have proved to be very powerful, leading to an exponential increase over time in the intelligence which they confer on AI-enabled systems.

The data-centricity of AI-enabled systems introduces risks resulting from unrepresentative, biased or incorrect/ deliberately poisoned data. The fact that a system might continue to learn and thus, post deployment, metamorphose into something different from what was fielded, together with its opaque nature, introduces a degree of unpredictability into its functioning. The non-transparent nature of AI systems is also partly responsible for systems becoming vulnerable to catastrophic failure when confronted with edge cases, a characteristic referred to as brittleness. The increasingly higher intelligence, and consequent greater autonomy, conferred on AI systems results in undesirable effects such as automation bias and lack of accountability [10].

Civilian vis-à-vis Military AI-Enabled Systems: Different Concerns

AI regulation in the context of non-military systems is driven by concerns related to fundamental rights issues, such as racial and gender bias, data privacy, biometric surveillance, etc. In contrast, for military systems the risks associated with the use of AI technology are viewed through legal and ethical prisms, as dictated by the ‘jus in bello’ criteria, in other words International Humanitarian Law (IHL).

The debate ongoing since 2014 at the UN Convention on Certain Conventional Weapons (CCW) on LAWS, for which a Group of Governmental Experts (GGE) was constituted in 2017, revolves around concerns that fully autonomous weapons would be in violation of the IHL principles of Distinction and Proportionality as well as the Martens Clause. The primary objective of these discussions is to arrive at a consensus on how to enforce meaningful human control in LAWS (with focus essentially on AI-enabled systems) through legally binding international regulation [11].

Approach to AI Governance with respect to Military Systems

Even after multiple rounds of discussions at the UN spanning over seven years, the only consensus reached so far has been the adoption of a set of eleven guiding principles [12]. The following counterviews appear to be inhibiting the adoption of a legally binding regulation, perhaps rightly so: firstly, since AI is a technology expected to become ubiquitous across all types of military systems, any constraints imposed through regulation would curb its exploitation for developing weapon systems which, in addition to conferring military advantage, could potentially cause less destruction and save human lives; and secondly, enforcement of any such regulation is likely to present difficulties [13].

A risk-based approach to governance of AI-enabled military systems endeavours to address the first of the above two concerns. As regards enforcement, it is felt that a code of conduct evolved using a risk-based approach, and adopted by nations on a voluntary basis as a form of self-regulation rather than as a legally binding ban/ regulation, might find early acceptability and perhaps be the best way forward. Also, while evolving this approach, the major apprehensions with respect to the deployment of emerging technologies, primarily AI, which have been expressed time and again at the UN GGE on LAWS, have served as important guiding factors.

Risk Hierarchy

This section discusses the contours of how a risk-based approach for AI-enabled military systems might be evolved, by presenting a tentative model termed as the Risk Hierarchy for Military Systems. As will be evident from its subsequent description, while the proposed model borrows certain cues from the EU proposal, it is at considerable variance with it, primarily because the driving concerns are fundamentally different, as pointed out above.

Developing such a Risk Hierarchy involves four distinct activities. Firstly, risk classes need to be defined based on certain well-thought-out parameters. Secondly, given the large number of military systems in existence, these need to be grouped into categories. Next, these groups of military systems must be assigned to risk classes. Finally, a differentiated risk-mitigation mechanism needs to be linked to each risk class. The process is largely an iterative one.
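Purely to make these four activities concrete, a minimal Python sketch is given below; the class, field and category names are illustrative assumptions and not part of the proposal itself.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RiskClass:
    """Activity 1: a risk class defined by chosen parameters."""
    name: str
    description: str
    # Activity 4: each risk class is linked to its own differentiated mitigation mechanism
    mitigation_measures: List[str] = field(default_factory=list)

@dataclass
class RiskHierarchy:
    """Activities 2 and 3: categories of military systems are assigned to risk classes."""
    classes: List[RiskClass] = field(default_factory=list)
    assignment: Dict[str, str] = field(default_factory=dict)  # system category -> risk class name

    def assign(self, system_category: str, class_name: str) -> None:
        self.assignment[system_category] = class_name

# Illustrative use; in practice the classes, assignments and mitigation
# measures would be revisited over several iterative refinement rounds.
hierarchy = RiskHierarchy(classes=[RiskClass("Level 2", "High risk: fully autonomous weapon systems")])
hierarchy.assign("fully autonomous anti-tank armed UAVs", "Level 2")
```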

Using the above approach, a five-level Risk Hierarchy has been evolved, as indicated in the diagram.

Risk Hierarchy

Working Definitions

To avoid ambiguity, working definitions of some of the terms used in the description of the Risk Hierarchy are given out below.

Fully Autonomous vis-à-vis Semi-Autonomous Weapon Systems. In this work, fully autonomous weapon systems imply man-out-of-the-loop as well as man-on-the-loop systems, while semi-autonomous weapon systems correspond to man-in-the-loop systems. These terms have relevance mostly in relation to the critical identify-and-engage functions (hereinafter referred to as critical functions, in line with the terminology adopted by the UN GGE on LAWS) [14].

Unpredictable Weapon Systems. Unpredictability is inherent in all AI systems which utilize deep-learning techniques. However, the degree of unpredictability can be controlled by adhering to stringent test and evaluation (T&E) standards [15]. The unpredictable systems listed at Level 1 of the Risk Hierarchy refer only to those weapon systems in which learning is permitted to continue in critical functions even after the system is deployed post T&E, thus introducing an unknown degree of unpredictability into operations.

Human Targeting Weapon Systems. The sub-class of human targeting weapon systems (listed at Level 1) refers to weapon systems which specifically seek out humans (combatants, terrorists) for lethal engagement, but does not include those which are designed to engage non-human military targets such as tanks which might be manned by humans.

Defensive Weapon Systems. Most weapon systems may be employed for both offensive and defensive purposes. However, there are certain categories of weapon systems which by their very nature can be employed only in a defensive role, eg, close-in weapon systems such as the US Phalanx [16], static robot sentries such as South Korea’s SGR-A1 [17], etc. In this work, the term defensive weapon systems refers only to this class of systems.

By way of explanation of the above definitions, two aspects merit further elaboration, namely, the distinction between selection and identification, and the meaning of human targeting.

Target “Selection” vis-à-vis “Identification”

At the UN GGE on LAWS, reference is often made to the critical select-and-engage functions in the context of meaningful human control. It is felt that a distinction needs to be made between target selection and target detection/ identification. In this work, a fundamental premise is that target selection must always be done by a human, by defining a profile for a target/ group of targets using certain parameters, eg, tanks in a specified geographical area during a specified time duration. However, once the target/ group of targets has been selected before mission execution, target identification followed by engagement may be delegated to autonomous weapon systems. Here, the identification-and-engagement function is akin to “homing” in conventional weapon systems.

Given the above distinction, in this work Autonomy has been defined in terms of the critical identify-and-engage, rather than select-and-engage, functions. A noteworthy implication of defining Autonomy in this manner is as follows: while target selection in every case is done by a human, fully autonomous weapon systems are those which can identify and engage a group of targets through multiple weapon releases without human intervention, whereas in the case of semi-autonomous weapons, every single weapon release is specifically approved by a human.

Human Targeting

In defining human targeting weapon systems above, a nuanced distinction has been made between targeting of humans per se and targeting of military platforms and establishments which might be manned/ populated by humans. If the target identification sensor of a given weapon system is designed to detect and identify humans on the battlefield, for instance, human groups such as combatants, or a specific human (eg, a known terrorist) using face recognition, then it would classify as a human targeting weapon system. On the other hand, if the sensor is designed to detect and identify non-human military targets which might be manned by humans, such as tanks, ships, aircraft, fortified bunkers in a battle area, radar installations, military airfields, and logistic installations, then such a weapon system would not classify as a human targeting weapon system. The motivation for making this distinction is explained subsequently in this work.

The next section gives out the rationale for adopting a five-level risk architecture, along with a brief explanation of each risk class.

Rationale for a Five-Level Risk Architecture

In the coming years, AI is expected to become ubiquitous across a very wide spectrum of military systems. It is perhaps useful to bifurcate this spectrum into two broad classes, namely, weapon systems (including targeting systems) and decision support systems (including Intelligence, Surveillance and Reconnaissance (ISR) systems). With this grouping as a starting point, the rationale for arriving at the proposed five-level risk architecture is given out below:-

  • Under the premise that all weapon systems present a higher level of risk as compared to systems which do not directly result in release of weapons, the higher three proposed levels of risk correspond to AI-enabled weapon systems, while decision support systems are grouped under the lower two levels.
  • Level 1: Weapon Systems Posing Unacceptable Risk. This level represents a special category of weapons (hopefully not yet developed) which present so high a risk that their development must not be undertaken.
  • Levels 2 & 3: Weapon Systems Posing Acceptable Risk. Amongst the remaining weapon systems, there intuitively appears to be a case for a minimum of two levels of risk (High & Medium), rather than just one. For instance, fully autonomous weapon systems whose deployment may be permitted subject to certain constraints (such as fully autonomous anti-tank armed UAVs deployed in a tactical battle area) would clearly pose a higher risk as compared to semi-autonomous systems.
  • Level 4: Critical Decision Support Systems. Amongst the non-weapon AI-enabled military systems (collectively referred to in this work as decision support systems), defining a minimum of two levels seems necessary, as follows: a higher level, which comprises critical decision support systems (eg, those designed to suggest attack options in a tactical setting); and a lower level covering all other decision support systems. Critical decision support systems would need to be trusted by commanders for effective human-machine teaming. The idea behind such a Trust Requirements level is to address the risk of automation bias, and also to enforce the use of explainable AI (XAI) in such systems.
  • Level 5: Non-Critical Decision Support Systems. Termed here as the Negligible Risk level, this risk category is envisaged to encompass all AI-enabled military systems which are not covered under Levels 1 to 4 and which pose a level of risk which may not warrant any special scrutiny, eg, AI-enabled military applications in areas such as logistics and maintenance. While such systems may not present a risk from an IHL/ trust perspective, they would still need to be vetted for other AI-related concerns such as brittleness, inadequately selected or poisoned training data, etc.

It may be possible to split each of the above levels further based on additional criteria, or formulate levels based on entirely different criteria. However, it is felt that the above five-level architecture should result in a simple yet effective risk-based model to address concerns related to IHL. It also merits mention here that defining different risk levels is meaningful only if these can be mapped to corresponding differentiated risk-mitigation mechanisms. While not explicitly elaborated upon here, the viability of working out such a mechanism is a key consideration which has been kept in mind while proposing the above levels. As stated earlier, evolution of an effective risk-based model is a multi-step iterative process.
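The sketch below is one possible, highly simplified encoding of this five-level architecture, based only on the level descriptions given above; the profile attributes and the decision rules are illustrative assumptions, and the actual assignment of system categories to risk levels, along with the associated mitigation mechanisms, is taken up in Part II.

```python
from dataclasses import dataclass

@dataclass
class MilitarySystemProfile:
    """Hypothetical descriptors of an AI-enabled military system (illustrative only)."""
    is_weapon_system: bool = False
    learns_in_critical_functions_post_deployment: bool = False  # 'unpredictable' per the working definitions
    targets_humans: bool = False                                 # human targeting weapon system
    fully_autonomous: bool = False                               # man-out-of/on-the-loop for identify-and-engage
    critical_decision_support: bool = False                      # eg, suggests attack options in a tactical setting

def risk_level(p: MilitarySystemProfile) -> int:
    """Toy mapping of a system profile onto the five proposed levels."""
    if p.is_weapon_system:
        # Level 1: Unacceptable Risk (eg, unpredictable or human targeting weapon systems)
        if p.learns_in_critical_functions_post_deployment or p.targets_humans:
            return 1
        # Level 2: High Risk (fully autonomous weapon systems, deployable under constraints)
        if p.fully_autonomous:
            return 2
        # Level 3: Medium Risk (semi-autonomous weapon systems)
        return 3
    # Level 4: Trust Requirements (critical decision support systems, requiring XAI)
    if p.critical_decision_support:
        return 4
    # Level 5: Negligible Risk (all other AI-enabled military systems, eg, logistics)
    return 5

# Example: a fully autonomous anti-tank armed UAV confined to a tactical battle area
uav = MilitarySystemProfile(is_weapon_system=True, fully_autonomous=True)
print(risk_level(uav))  # 2
```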

[Continued in “Regulation of AI-Enabled Military Systems: A Risk-Based Approach – Part II”]

 

References

(1)        Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System, Final Report: Annexure III, UN CCW GGE on LAWS, 13 Dec 2019, pp. 10, UN Office of Disarmament Affairs, Accessed 17 Jan 2022.

(2)        DOD Adopts Ethical Principles for Artificial Intelligence, 24 Feb 2020, US DOD, Accessed 17 Jan 2022.

(3)        “A New Generation of Artificial Intelligence Ethics Code” Released, 26 Sep 2021, Ministry of Science and Technology, People’s Republic of China, Accessed 17 Jan 2022.

(4)        Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), 21 Apr 2021, European Commission, Accessed 17 Jan 2022.

(5)        Artificial Intelligence Code of Ethics, Signed during International Forum on “Ethics of AI: The Beginning of Trust,” Moscow, 26 Oct 2021, Accessed 17 Jan 2022.

(6)        Presidency conclusions – The Charter of Fundamental Rights in the context of Artificial Intelligence and Digital Change, 11481/20, 21 Oct 2020, Council of the European Union, Accessed 17 Jan 2022.

(7)        Framework of Ethical Aspects of Artificial Intelligence, Robotics and Related Technologies, European Parliament, 20 Oct 2020, Accessed 17 Jan 2022.

(8)        Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), 21 Apr 2021, European Commission, pp. 12-15, Accessed 17 Jan 2022.

(9)        Annexes to Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), 21 Apr 2021, European Commission, p. 1, Accessed 17 Jan 2022.

(10)      Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach, International Review of the Red Cross (2020), 102 (913), pp. 463–479, Accessed 17 Jan 2022.

(11)      International Humanitarian Law: Answers to your Questions, International Committee of the Red Cross, Dec 2014, pp. 47, Accessed 17 Jan 2022.

(12)      Guiding Principles affirmed by the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons System, Final Report: Annexure III, UN CCW GGE on LAWS, 13 Dec 2019, pp. 10, UN Office of Disarmament Affairs, Accessed 17 Jan 2022.

(13)      Lt Gen (Dr) R S Panwar, Lethal Autonomous Weapon Systems: Slaves not Masters – Meaningful Human Control, Saving Lives and Non-Feasibility of a Pre-Emptive Ban, 06 Oct 2020, Future Wars, Accessed 17 Jan 2022.

(14)      Eric Talbot Jensen, The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict, International Law Studies, Volume 96, 2020, Pub. Stockton Center for International Law, Accessed 17 Jan 2022.

(15)      Heather M. Wojton et al, Test and Evaluation of AI-Enabled and Autonomous Systems, 09 Mar 2021, Institute for Defense Analyses, US DOD, Sep 2020, Accessed 17 Jan 2022.

(16)      Phalanx CIWS, Wikipedia, Accessed 17 Jan 2022.

(17)      SGR-A1, Wikipedia, Accessed 17 Jan 2022.

