Defining Autonomous Weapon Systems: A Scenario Based Analysis

Part II: Prevalent and Proposed AWS Definitions
Sections
Prevalent AWS Definitions
AWS Definition: US
AWS Definition: UK
AWS Definition: China
AWS Definitions: Other Countries
Proposed AWS Definitions and Related Glossary
The OODA Loop in AWS
The Kill Chain: Centrality of Critical Functions
Proposed Definitions
Analysis of Proposed Definitions
Need for Defining Additional Terms
Annexure
References

 

[This piece is in continuation of “Defining Autonomous Weapon Systems: A Scenario Based Analysis – Part I”, which analysed several feasible and hypothetical scenarios to highlight that AWS as a class comprises a spectrum of weapon systems with a wide range of capabilities, warranting treatment at a more granular level.]

 

Prevalent AWS Definitions

The analysis carried out in Part I highlights the difficulties which arise when AWS as a class is defined using terminology which lacks rigour. Taddeo and Blanchard have made a detailed comparison of AWS definitions adopted by 12 countries, tabulated in the annexure [1]. Most of these are framed around autonomy in the critical functions of ‘select-and-engage’, while others adopt a different approach.

This section analyses AWS definitions of three countries, namely the US, UK and China, against the backdrop of the preceding discussion. The following section then goes on to present a fresh attempt at AWS definitions and also suggests related glossary terms which are recommended for adoption, with the objective of facilitating more meaningful discussions in international fora on the contentious subject of AWS regulation.

AWS Definition: US

The US definition of AWS is given in DoD Directive 3000.09 (2023) [2]. The following definitions from this directive are relevant for the current discussion:

Autonomous weapon system: A weapon system that, once activated, can select and engage targets without further intervention by an operator. This includes, but is not limited to, operator-supervised autonomous weapon systems that are designed to allow operators to override operation of the weapon system, but can select and engage targets without further operator input after activation.

Semi-autonomous weapon system: A weapon system that, once activated, is intended to only engage individual targets or specific target groups that have been selected by an operator.

Specific target group: A discrete group of potential targets, such as a particular flight of enemy aircraft, a particular formation of enemy tanks, or a particular flotilla of enemy vessels. A general class of targets or a specific type of target, such as a particular model of tank or aircraft, does not constitute a specific target group.

Target selection: The identification of an individual target or a specific group of targets for engagement.

Also relevant is the definition of ‘target selection’ from the earlier 2012 version of this DoD directive, which is as under:

Target selection: The determination that an individual target or a specific group of targets is to be engaged.

It is evident that central to this definition of AWS is autonomy in the critical ‘select-and-engage’ functions. However, it is interesting to note the shift in the definition of ‘target selection’ from ‘determination’ in 2012 to ‘identification’ in 2023. This shift makes the term ‘selection’ synonymous with ‘identification’ of specified targets. It can be reasonably inferred that, as per the 2023 version, even in AWS (as also in semi-autonomous weapon systems) the specification/ determination of targets (ie, drawing up target lists) is done by the human operator, while the machine is only delegated the function of identifying the specified targets. The manner in which this specification is done, ie, how the target is described by the operator to the machine, has not been spelled out in the Directive. Scenarios 1 to 3 bring out three different ways in which such target specification may be carried out via a suitably designed man-machine interface (MMI).

The definition of ‘specific target group’ is also noteworthy. This definition attempts to make a distinction between a ‘discrete group of potential targets’ and a ‘general class of targets.’ A group of potential targets would also need to be specified in the MMI as a class (eg, tanks). The only possible way of specifying any target class as a discrete group (for mobile targets such as a formation of tanks or a flotilla of vessels) is to lay down area and time constraints in addition to the class description. Effectively, therefore, as per this Directive a ‘discrete group’ MMI specification corresponds to Scenario 3, while an MMI specification of a ‘general class’ aligns with Scenario 4.
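
To illustrate the distinction, a hypothetical MMI specification might differentiate the two cases simply by the presence or absence of space-time constraints. The sketch below is illustrative only; all field and function names are assumptions, not drawn from the Directive:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetSpec:
    """Hypothetical MMI target specification."""
    target_class: str                  # eg, "tank" - a general class of targets
    area: Optional[str] = None         # geographic constraint, eg, a grid reference
    time_window: Optional[str] = None  # temporal constraint

def is_discrete_group(spec: TargetSpec) -> bool:
    """A 'discrete group' (Scenario 3) requires area and time constraints
    in addition to the class description; a bare class is Scenario 4."""
    return spec.area is not None and spec.time_window is not None

print(is_discrete_group(TargetSpec("tank", "grid 1234", "H-hour to H+2")))  # True
print(is_discrete_group(TargetSpec("tank")))                                # False
```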

In summary, the following may be stated:

  • In both AWS as well as semi-autonomous weapon systems, target lists are drawn up by a human operator/ commander.
  • While different levels of scrutiny are specified for higher-risk classes (AWS and also certain categories of semi-autonomous weapons), neither AWS nor semi-autonomous weapon systems (as defined) are proscribed from development.
  • Weapon systems which are capable of carrying out value judgement, either with respect to adherence to IHL or for drawing up target lists (those which align with Scenarios 6 to 8), are beyond the scope of this Directive.

AWS Definition: UK

As regards the UK, the following definitions are relevant with respect to unmanned weapon systems, laid down vide its Joint Doctrine Publication 0-30.2 [3] (subsequently withdrawn):

Automated System. In the unmanned aircraft context, an automated or automatic system is one that, in response to inputs from one or more sensors, is programmed to logically follow a predefined set of rules in order to provide an outcome.  Knowing the set of rules under which it is operating means that its output is predictable.

Autonomous System. An autonomous system is capable of understanding higher-level intent and direction. From this understanding and its perception of its environment, such a system is able to take appropriate action to bring about a desired state.  It is capable of deciding a course of action, from a number of alternatives, without depending on human oversight and control, although these may still be present.  Although the overall activity of an autonomous unmanned aircraft will be predictable, individual actions may not be.

It is interesting to analyse how the above two definitions relate to Scenario 3, wherein the target list is unambiguously specified by a human operator, but the critical functions of target identification, prioritization (within identified targets) and engagement are delegated to the machine. For such an AWS, the target identification function may be based on a deep neural network (DNN) trained for object recognition, which does not follow a logical set of program instructions but is nevertheless statistically predictable and capable of meeting specified performance standards. On the other hand, the prioritization and decision-to-engage functions may well be based on procedural or rule-based programming, which is characterised by clear logical flows and transparency.

Can such a system be termed as ‘automated’? Probably not, since the critical function of target recognition uses non-transparent DNNs, and its outputs are only statistically, not fully, predictable. The Scenario 3 AWS would also not classify as an autonomous system as per the UK definition, since it is not capable of translating a commander’s higher intent into target lists, but only identifies targets which are unambiguously specified by a human operator.

This UK definition of an autonomous system aligns well with AWS depicted in hypothetical Scenarios 6 to 8, which are not expected to be realised in the foreseeable future, if at all. This definition was criticised for setting the bar of AWS so high as to become irrelevant, effectively permitting the development of all types of AWS below this bar [4].

AWS Definition: China

The 2018 Chinese definition/ description of LAWS is reproduced below [5]:

LAWS should include but not be limited to the following 5 basic characteristics. The first is lethality, which means sufficient pay load (charge) and for means to be lethal. The second is autonomy, which means absence of human intervention and control during the entire process of executing a task. Thirdly, impossibility for termination, meaning that once started there is no way to terminate the device. Fourthly, indiscriminate effect, meaning that the device will execute the task of killing and maiming regardless of conditions, scenarios and targets. Fifthly evolution, meaning that through interaction with the environment the device can learn autonomously, expand its functions and capabilities in a way exceeding human expectations.

Out of the five characteristics specified in this definition, lethality, autonomy and impossibility of termination are features which align with the common understanding of LAWS. In contrast, inclusion of the two characteristics of indiscriminate effect and evolution is noteworthy.

The feature of indiscriminate effect seems to address the requirement of adherence to IHL, but via a distinctly different approach. Perhaps there is an unstated assumption here that LAWS can never make the value judgements so necessary for applying the principles of IHL, which in turn implies that AWS would always kill indiscriminately. In other words, it effectively rules out the possibility that hypothetical Scenario 6 will ever come to pass.

The characteristic of evolution, on the other hand, implies that LAWS must necessarily possess an online learning capability (discussed in a subsequent section). None of the feasible scenarios discussed above are premised on such an assumption. Thus, the Chinese definition of LAWS does not align with AWS depicted in any of these scenarios. Including evolution and indiscriminate effect as essential pre-requisites for a weapon system to classify as LAWS also amounts to raising the bar on what classifies as LAWS in a manner analogous to the UK definition, with similar implications on regulating their development.

AWS Definitions: Other Countries

Since the US definition of AWS is practical and well within the reach of the current state of the art in AI technologies, regulation of such weapon systems is meaningful; regulation has purchase only where development of the weapon system is actually feasible. The UK, by defining LAWS in terms of features which are not realizable in the foreseeable future, has rendered regulation of LAWS largely an exercise in futility. While the Chinese conceptualization of LAWS is realizable with currently available AI technologies, it provides a lot of leeway for the development of AWS (eg, any weapon system which cannot evolve, or which is not indiscriminate, falls outside its scope).

It is not the intention here to comprehensively analyse AWS definitions of all 12 countries which are listed in the annexure. However, AWS envisaged in each of these definitions falls either within or beyond the technological feasibility threshold. A scrutiny of the definitions reveals that ICRC, Israel and the Netherlands have defined AWS in practical terms (ie, below the realizability threshold). In contrast, Canada, France, Germany, Norway and Switzerland have endowed their AWS conceptions with higher cognitive functions (eg, converting commander’s higher intent into courses of action including target lists; making value judgements on IHL principles, etc), which are infeasible at this juncture or anytime in the foreseeable future, thus rendering these definitions irrelevant from the perspective of regulation.

The above discussion highlights the fact that defining autonomy in weapon systems through a single term, namely, AWS, leads to a wide range of interpretations. Over the years, attempts have been made to capture different levels of autonomy in terminology by specifying different classes, such as: automatic/ automated/ autonomous; man-in-the-loop/ man-on-the-loop; and semi-autonomous/ supervised autonomy/ autonomous. All of these have one aspect in common, ie, the attempt in each case is to classify autonomy in weapon systems as a whole. This approach, however, has failed to bring the degree of clarity needed to map actual weapon systems onto the defined classes. The class definitions overlook the fact that, in any weapon system, there are a number of functions, each of which may be independently implemented with different levels of autonomy.

This section attempts to offer a set of definitions which segregates autonomous weapon systems into classes by specifying autonomy at the level of functions which are involved in the weapon’s targeting process, sometimes referred to as its kill chain. The choice of definitions/ classes is also driven by the intent of relating AWS capabilities to the need for adherence to IHL.

The term ‘autonomy’ in any given function is taken here to mean simply that the function is executed in the weapon system without human intervention. There is another important consideration while referring to autonomy in any individual function. One of the features of AI-enabled autonomy is that a function may be implemented with a self-learning capability, and as a result evolve over time by interacting with the environment (external inputs). The effect of evolution through self-learning on each function is analysed separately in a subsequent section on online learning.

The OODA Loop in AWS

Based on the nature of their Observe-Orient-Decide-Act (OODA) loops, all weapon systems may be classified into Platform Centric (PC), Network Centric (NC) or Swarm weapon systems. In the military context the OODA loop broadly translates to the sensor – decision-maker – shooter loop. PC weapons refer to systems in which this sensor-to-shooter loop closes on a single platform, eg, tank, aircraft, ship, etc, including their unmanned versions. In contrast, NC weapons differ in two respects: firstly, sensors, decision nodes and shooters (three types of entities) are geographically dispersed and connected via a network; and secondly, there could be multiple entities of each type making up the weapon system. A weapon system using swarm technology, although not known to be operational yet in any military, would perhaps be more akin to PC rather than NC systems, and may be best visualized as a locally distributed version of a single platform.

In a complex battlefield scenario, there may be several OODA loops in play. The definitions presented here are envisaged against the backdrop of two OODA loops, specifically, a loop at the tactical level embedded within another loop at the operational level, as explained in Scenario 9 below. Although presented as a separate scenario, it represents the backdrop for all eight scenarios described earlier.

Scenario 9

At the operational level, the sensor in the OODA loop is an ISR network which provides the necessary inputs for building up the operational picture. The decision node comprises a human commander and staff, who analyse the operational situation and draw up a tactical plan, which includes a list of targets to be engaged within specific time-space constraints (target determination), and who take the decision to activate the shooter to neutralize these targets. The shooter in this higher-level OODA loop maps to one or more AWSs which operate at the tactical level.

At the tactical level, the AWS itself may be represented as an OODA loop with its own sensor – decision-node – shooter components. The sensor within the AWS searches for and identifies one or more targets with which the AWS has been primed; the decision node prioritizes amongst the identified targets and finalizes the sequence as well as the manner in which these are to be engaged (ie, if the AWS has the capability to engage multiple targets); the shooter is, eg, the missile system on the AWS which releases one or more munitions to neutralize the targets.

A scrutiny of the eight scenarios described above would reveal that Scenarios 1 to 6 depict AWS executing tactical level OODA loops. Scenarios 7 and 8, on the other hand, broadly relate to operational and strategic levels respectively.

The definitions presented below relate to platform-centric AWS at the tactical level. The AWS definitions of the US, UK and China, as well as the other definitions referred to in the preceding section, mostly seem to relate to AWS at this level, although this is not explicitly stated.

That said, it is quite possible to envisage the implementation of an AWS at the strategic/ operational level, ie, one which autonomously releases either conventional weapons or tactical level AWS, even with the current state of the art in AI technologies. Needless to say, however, such weapon systems would likely present an unacceptable level of risk. One example of a system which might qualify in this category is the oft quoted Russian ‘dead-hand’ Perimeter system for automatic launch of nuclear weapons, although limited open domain information is available on this system.

In order to fully grasp the definitions presented below, it is important to understand the kill chain which comes into play when a tactical level AWS is activated.

The Kill Chain: Centrality of Critical Functions

The process of target engagement by an AWS at the tactical level from activation to neutralization may be broken down into several stages. For the purpose of this discussion, the following five stages/ functions are considered: activation, navigation, target identification, prioritization, and engagement.

  • Activation could include take-off from the ground, release from an aircraft, launch from a sub-surface vehicle, etc.
  • Navigation implies moving from the launch point to the area where one or more targets are expected.
  • Target identification includes loitering, searching, identifying and tracking targets in the specified area during the specified period.
  • Prioritization covers within its ambit the decision to engage the targets in a particular sequence.
  • Engagement implies release of the munition, homing onto the target, neutralizing it, and carrying out battle damage assessment (BDA).

After BDA, the identification-prioritization-engagement (OODA) loop may execute again, either for the same target (if the previous engagement was unsuccessful) or for the next target in the sequence (for AWS which carry multiple warheads).
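
The control flow just described can be made concrete with a short sketch. The code below is a minimal illustration, assuming hypothetical stub functions for each stage; none of the names represent a real system or API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TargetSpec:
    description: str   # eg, "tank"
    area: str          # geographic constraint
    time_window: str   # temporal constraint

def identify_targets(spec: TargetSpec) -> List[str]:
    """Stub for the identification stage (eg, DNN-based object recognition)."""
    return ["tank-1", "tank-2"]   # placeholder detections

def prioritize(targets: List[str]) -> List[str]:
    """Stub for rule-based prioritization: decides the engagement sequence."""
    return sorted(targets)        # a trivial ordering stands in for real rules

def engage(target: str) -> bool:
    """Stub for munition release and homing; returns the BDA outcome."""
    return True                   # assume the target is neutralized

def run_kill_chain(spec: TargetSpec, munitions: int) -> None:
    # Stages 1 and 2 (activation and navigation) are assumed complete here;
    # stages 3 to 5 form the identification-prioritization-engagement loop.
    neutralized = set()
    while munitions > 0:
        queue = [t for t in prioritize(identify_targets(spec))
                 if t not in neutralized]
        if not queue:
            break                 # no remaining targets in the specified area
        target = queue[0]
        success = engage(target)
        munitions -= 1            # a munition is expended either way
        if success:
            neutralized.add(target)
        # If BDA reports failure, the loop re-runs for the same target;
        # otherwise it proceeds to the next target in the sequence.

run_kill_chain(TargetSpec("tank", "grid 1234", "H-hour to H+2"), munitions=2)
```

For a kamikaze (expendable) AWS the loop would execute at most once, since the platform itself is consumed in the engagement.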

Against this backdrop, the definitions for tactical AWS are now presented.

Proposed Definitions

The following three definitions represent AWS with different capabilities, depending on the level of autonomy in the five functions listed above.

Cognitive AWS. An AWS which, once activated after being primed with a broad mission objective, can autonomously launch itself, navigate to a given mission area, loiter, search for and determine which adversary assets can best be targeted for mission success, prioritize amongst these targets, make the decision to engage them as prioritized while ensuring that the principles of IHL are not violated, fire the requisite munitions and neutralize the targets, carry out BDA, and repeat the OODA loop as needed.

Directed AWS. An AWS which, once activated after being primed with a clear target description together with area and time constraints, can autonomously take-off, navigate to the specified mission area, loiter, search for and identify targets with the specified description, prioritize amongst these targets, engage and neutralize the targets, carry out BDA, repeat the OODA loop as needed, and return to base.

Controlled AWS. An AWS which, once activated after being primed with target description, area and time constraints as applicable, can autonomously/ via remote control be launched, navigate to the specified mission area, loiter, search for and identify targets with the specified description, prioritize amongst these targets, and once directed by a human operator to neutralize a given target, can autonomously/ with operator assistance home on to and destroy the target, carry out BDA, repeat the OODA loop as needed, and return to base.

Note: In all three cases, if kamikaze (expendable) AWS are employed then BDA and subsequent actions are not applicable since only a single engagement is feasible.
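
Since the three classes differ essentially in which functions are delegated to the machine and which higher cognitive capabilities are present, the mapping can be expressed as a compact decision rule. The sketch below is purely illustrative; the function and parameter names are assumptions, not part of the proposed definitions:

```python
from enum import Enum, auto
from typing import Dict

class Autonomy(Enum):
    HUMAN = auto()     # function performed or decided by a human operator
    MACHINE = auto()   # function executed without human intervention

def classify_aws(functions: Dict[str, Autonomy],
                 determines_targets: bool = False,
                 applies_ihl: bool = False) -> str:
    """Map function-level autonomy to one of the three proposed classes.

    'functions' holds the five kill-chain stages: activation, navigation,
    identification, prioritization and engagement.
    """
    if determines_targets and applies_ihl:
        return "Cognitive AWS"   # draws up target lists, makes IHL value judgements
    if functions.get("engagement") is Autonomy.MACHINE:
        return "Directed AWS"    # selects and engages without further operator input
    # A human takes the decision to engage; other functions may still be autonomous.
    return "Controlled AWS"

# A Scenario 3 style AWS: target list specified by a human, but identification,
# prioritization and engagement delegated to the machine.
scenario_3 = {stage: Autonomy.MACHINE for stage in
              ("activation", "navigation", "identification",
               "prioritization", "engagement")}
print(classify_aws(scenario_3))   # -> Directed AWS
```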

Analysis of Proposed Definitions

The correlation of the three definitions with the scenarios presented above is as under:

  • The Directed AWS definition aligns with Scenarios 1 to 3, as well as with the AWS definition adopted by the US DoD Directive 3000.09.
  • The Cognitive AWS definition, with the capability to carry out IHL related value judgements, aligns with hypothetical Scenario 6 (which is infeasible in the foreseeable future). Although not played out in Scenario 6, Cognitive AWS is also endowed with the capability of drawing up target lists in a given tactical setting.
  • Controlled AWS is depicted as a tactical weapon with autonomy in one or more of its functions. However, here the decision to engage requires human intervention in every case and, just like Directed AWS, it is not capable of carrying out the value judgements required for target determination and the application of IHL principles. The Controlled AWS definition thus represents a range of semi-autonomous weapon systems.

The following additional comments merit consideration:

  • Since Cognitive AWS are beyond the reach of AI technologies in the foreseeable future (if ever), any discussions on regulating this category of AWS have only academic relevance. That stated, if at all these have to be brought within the ambit of any regulatory framework, it should not be difficult to get international acceptance for prohibiting the fielding (if not the development) of such AWS (in other words, placing them in an Unacceptable Risk category in any risk-based approach).
  • Directed AWS, as the name suggests, do not possess autonomy in any function which may be considered as violative of IHL. Amongst the critical functions, the only two functions which are autonomous in this class are identification of pre-designated targets (unambiguously specified by a human) and prioritization within identified targets for engagement. Nevertheless, if these are not employed judiciously by human operators/ commanders, IHL violations might occur (as demonstrated by Scenarios 4 and 5), as is true for any non-intelligent weapon system as well.
  • As already stated, the Controlled AWS class represents a range of weapons with some degree of autonomy. For instance, at the lower end of the autonomy spectrum, a weapon system in which only the take-off/ landing or only the homing function is autonomous, with all other kill chain functions being operator controlled, would also fall in this class. At the higher end of the spectrum, if all but the decision-to-engage are autonomous, the weapon system would still fall in this class.
  • A scrutiny of definitions of AWS adopted by various states reveals that some of them are analogous to Cognitive AWS, others to Directed AWS, while the scope of still others straddles the features of both. Controlled AWS, on the other hand, is analogous to semi-autonomous weapon systems. Not many states seem to have defined such a category, perhaps under the unstated premise that a weapon system which does not fall within the ambit of AWS would automatically classify as semi-autonomous.
  • Both Cognitive and Directed AWS fall within the scope of the popular characterization of AWS as weapon systems which “once activated, can select and engage targets without further human intervention.” Arguably, segregating this broad categorization into these two classes should prove useful towards facilitating international consensus on regulation of AI-enabled weapon systems.

The choice of the above three definitions (as opposed to other popular definition sets such as AWS/ supervised AWS/ semi-autonomous weapon system, or automatic/ automated/ autonomous weapon systems) is motivated by the imperative of evolving a risk-based approach [6] for regulating AI-enabled weapon systems, as explained below (and summarized in an illustrative sketch after the list):

  • Controlled AWS pose low risk, since the critical decision to engage the target, to “pull the trigger” so to say, is mandated to be taken by a human. Hence, this class of AWS requires minimum oversight.
  • Cognitive AWS, being beyond the reach of extant technologies, hold little relevance from a practical perspective. Moreover, since delegating targeting related value judgements to machines is widely accepted as being undesirable, it should be relatively easy to achieve international consensus for placing these in an unacceptable risk category and prohibiting their development/ deployment.
  • Finally, although development of Directed AWS may be permitted, there is a clear need for a rigorous framework for risk mitigation at each life-cycle stage from project approval to operational deployment. This is the class of AWS which should attract maximum attention during discussions on regulatory frameworks. It also merits mentioning that, amongst all autonomous functions permitted in this class, target identification possibly warrants the most rigorous test and evaluation from an AI perspective.
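
The risk-based posture outlined in the list above may be summarized in a simple mapping. The structure and tier labels below are illustrative assumptions, not drawn from any regulatory text:

```python
# Hypothetical summary of the risk-based approach described above.
# Class names follow the proposed definitions; tier labels are illustrative.
RISK_POSTURE = {
    "Controlled AWS": {
        "risk": "low",
        "oversight": "minimum oversight; a human takes the decision to engage",
    },
    "Directed AWS": {
        "risk": "regulated",
        "oversight": "rigorous risk mitigation at each life-cycle stage; "
                     "most stringent test and evaluation of target identification",
    },
    "Cognitive AWS": {
        "risk": "unacceptable",
        "oversight": "prohibit development/ deployment (beyond extant technologies)",
    },
}
```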

As compared to most existing definitions of AWS, the above definitions are arguably unambiguous, since they consider autonomy in each of the functions in the targeting cycle, rather than merely in general terms. This also makes it easy to map the definitions to extant and envisaged AI-enabled weapon systems, which is a crucial first step for enforcing any regulatory framework which might be instituted.

Need for Defining Additional Terms

If the above definitions are to be interpreted with clarity, there is a need to define certain related terms, listed under three heads as follows:

  • Target Related: Determination, Specification, Identification and Prioritization.
  • Nature of OODA Loop: Platform-centric and Network-centric weapon systems.
  • Higher Cognitive Functions: IHL application, translating commander’s intent, etc.

It is not the intention of this work to propose rigorous definitions for the listed terms. The analysis provided in the preceding sections, however, provides adequate understanding of these terms for deriving the required definitions.

 

Annexure

Comparison of AWS Definitions

[Table: Comparison of AWS definitions, reproduced from Taddeo and Blanchard [1]]

[Continued in “Defining Autonomous Weapon Systems: A Scenario Based Analysis – Part III”]

 

References

(1) Taddeo, M. and Blanchard, A., “A Comparative Analysis of the Definitions of Autonomous Weapons Systems,” Science and Engineering Ethics (2022), 23 August 2022. Accessed 06 August 2023.

(2) US DoD, Directive 3000.09: Autonomy in Weapon Systems, USD (Policy), 25 January 2023, pp. 21-23. Accessed 05 August 2024.

(3) UK Ministry of Defence, Unmanned Aircraft Systems (Joint Doctrine Publication 0-30.2), 2018 (since withdrawn). Accessed 06 August 2024.

(4) Taddeo, M. and Blanchard, A., “A Comparative Analysis of the Definitions of Autonomous Weapons Systems,” p. 37.

(5) China, Position Paper Submitted by China, Convention on Certain Conventional Weapons, Geneva, 2018. Accessed 06 August 2024.

(6) Panwar, R. S., “A Qualitative Risk Evaluation Model for AI-Enabled Military Systems,” in Responsible Use of AI in Military Systems (eBook), ed. Jan Maarten Schraagen, 26 April 2024.

 
