Defining Autonomous Weapon Systems: A Scenario Based Analysis

Part I: Concepts and Scenarios
Sections
Introduction
Autonomy in Critical Functions: Multiple Interpretations
Autonomous Functions in AWS
Feasible Scenarios
Hypothetical Scenarios
Adherence to IHL: Implications of Time and Distance
References

Introduction

There is broad agreement that significant risks are associated with AI-enabled systems, and that these need to be evaluated and suitably addressed. The data-centricity of AI-enabled systems introduces risks arising from unrepresentative, biased, or incorrect/ deliberately poisoned data, resulting in unintended system behaviour. The fact that a system might continue to learn and thus, post deployment, metamorphose into something different from what was fielded, together with its opaque nature, introduces a degree of unpredictability into its functioning. The data-driven learning and non-transparent nature of AI systems are together perhaps mainly responsible for systems becoming vulnerable to catastrophic failure when confronted with edge cases, a characteristic referred to as brittleness [1]. The increasing intelligence and consequent greater autonomy conferred on AI-enabled systems result in undesirable effects such as automation bias and lack of accountability [2].

When it comes to weapon systems, the level of autonomy is perhaps the most important parameter for risk evaluation. Of particular interest are autonomous weapon systems (AWS). While there is no internationally accepted definition of AWS [3], these are often described as weapons which, once activated, can select and attack/ engage targets without further human intervention (European Parliament, 2023; ICRC, 2021) [4]. The select-and-engage functions are dubbed critical functions within the targeting chain [5]. With such a characterization, most states declare that fully autonomous weapons must never be developed.

The reason for states adopting such a stance is that the above characterization is mostly interpreted to mean that AWS can draw up a list of targets and destroy them, all without any human intervention. On closer scrutiny, it can be discerned that the phrase ‘select-and-engage’ used to describe the critical functions of an AWS is highly ambiguous and open to several different interpretations. Moreover, except in extreme scenarios such as machines taking over humanity, autonomy in these critical functions does not really translate to machines drawing up target lists.

This article takes a deeper look at how AWS are characterized and defined. It begins by analysing autonomy in critical functions in general terms. It then frames ten scenarios with the objective of distinguishing various levels of autonomy in the critical functions, both feasible and hypothetical from a technology perspective, as also to demonstrate irresponsible employment of AWS. Thereafter, it analyses the implications of a few well-known formal definitions of AWS against the backdrop of these scenarios. Finally, it proposes a set of definitions for AWS with different levels of autonomy aimed at removing ambiguity in extant definitions, and suggests related glossary terms which also warrant formal definition. It also briefly analyses how the proposed definitions fare when applied to complex AWS architectures.

Autonomy in Critical Functions: Multiple Interpretations

While characterizing an AWS as a weapon system which can select-and-engage targets without human intervention, the term ‘select’ may well be interpreted to mean the determination of adversary assets, human or otherwise, which are to be targeted. In other words, in this interpretation the weapon system itself prepares a target list for subsequent destruction. Such an extreme portrayal of an AWS is dramatized by the self-aware Skynet letting loose an army of Terminators onto humanity [6].

An alternative interpretation of the phrase ‘target selection’ is relatively benign, as follows: given a target list or description (provided by a human), the weapon ‘identifies’ the target (or a group of targets) using sensors, then tracks and destroys it. Here, the implied meaning of the term ‘selection’ is synonymous with target identification. The US DOD Directive 3000.09, for instance, defines ‘target selection’ as “The identification of an individual target or a specific group of targets for engagement” (emphasis added) [7]. Similarly, the Netherlands defines an AWS as one which “selects and engages targets matching certain predefined criteria”, where the criteria are provided by a human [8].

In the second interpretation, the target description provided by a human may range from very specific to increasingly general. Keeping in mind the current state of technology and other practical considerations, the following types of target description lend themselves to being programmed into machines (a notional sketch follows the list below):

  • Explicit Target Description. One or more specific targets (static or mobile, which may or may not be prioritized) are selected by a human, their description is fed into the weapon system, which is then activated to neutralize the targets. For static targets, the description could be in terms of a precise location reference, while for mobile targets it could be any unique identity (e.g., unique electronic signature of a mobile radar, unique visual profile of a ship, etc). In addition, time and area constraints may be included in the target specification.
  • Parameterized Target Description. In this case, instead of specific targets, target parameters may be set out (e.g., hangars on a specific airfield, enemy tanks in a specified area), together with time constraints. Such a weapon system, in addition to target identification, might at times need to prioritize amongst identified targets for efficient neutralization.
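
To make the distinction concrete, the following is a minimal, purely illustrative sketch (in Python) of how the two kinds of target description might be encoded as data supplied to an AWS at activation time. All names and fields are hypothetical and do not correspond to any fielded system; the point is simply that both forms reduce to human-specified constraints rather than machine-generated target lists.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional, Tuple

@dataclass
class EngagementConstraints:
    """Human-imposed limits on where and when engagement is permitted."""
    area: List[Tuple[float, float]]   # engagement/search area as polygon vertices (lat, lon)
    not_before: datetime              # earliest permitted engagement time
    not_after: datetime               # latest permitted engagement time

@dataclass
class ExplicitTargetDescription:
    """Specific target(s) chosen by a human: precise coordinates for a static
    target, or a unique catalogued signature for a mobile one."""
    constraints: EngagementConstraints
    coordinates: Optional[Tuple[float, float]] = None   # static target location
    unique_signature_id: Optional[str] = None           # eg, a catalogued radar fingerprint
    priority: int = 0                                    # engagement order supplied by the operator

@dataclass
class ParameterizedTargetDescription:
    """A human-designated target class plus constraints; the AWS may still
    need to prioritize amongst multiple matches it identifies."""
    constraints: EngagementConstraints
    target_class: str                                    # eg, "tank_model_X"
    max_engagements: int = 1                             # cap set by the operator
```

In both cases every element of the description originates with the human operator; the ‘selection’ performed by the AWS is confined to matching sensed objects against these human-provided constraints.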

An unambiguous target description may be characterized as one which results in the AWS identifying (and destroying) only those target(s) which the human framing the description intended to be destroyed (assuming that the AWS does not malfunction). Moreover, implicit in the human involvement in giving such a target description at AWS activation time, which the Netherlands refers to as the “wider loop” in the decision-making process, is the responsibility (and accountability) for ensuring adherence to the IHL principles of Distinction, Proportionality and Military Necessity [9].

A parameterized target description with very loose time and space constraints, though unambiguous, may be manifestly irresponsible. For instance, one can envisage activating an AWS to engage targets based on a description such as ‘any enemy tank in the battlespace and/ or adversary territory’. Such a target description amounts to giving machines a degree of leeway which should be unacceptable to responsible states, since it precludes the possibility of ensuring the degree of human involvement necessary for judicious adherence to IHL principles.

As an extreme case, one could envisage a higher-level target description such as ‘all assets which contribute towards the adversary’s combat potential’. Such a description is equivalent to stating that the weapon system prepares its own target list. For implementing such a capability, weapons would need to possess higher cognitive abilities often referred to as artificial general intelligence (AGI), which at this juncture is far from being achieved from a technology standpoint [10].

Autonomous Functions in AWS

With the above discussion as a backdrop, one can envisage five distinct capabilities which might define the critical functions of AWS. Given the ambiguity inherent in the phrase ‘select-and-engage without human intervention’, weapons with different subsets of these capabilities could all be dubbed ‘fully autonomous’, even though the level of autonomy would vary widely amongst them, with an AWS possessing all five capabilities having the highest level of autonomy. These capabilities are as under (a notional sketch follows the list):

  • Precision Engagement. The ability to autonomously navigate to and engage a target with precision, once target coordinates are provided.
  • Target Identification. The ability to identify and geolocate a target within a specified area using sensors, based on a target description fed to it by its operator. Here it is assumed that the target description is unambiguous, leaving no discretion with the AWS.
  • Target Prioritization. The ability to prioritize amongst a list of identified targets with the objective of engaging the targets efficiently.
  • Application of IHL Principles. The ability to take a decision on whether it would be justified to engage a target keeping in mind the IHL principles of distinction, proportionality, and military necessity.
  • Target Determination. The ability to draw up a target list (one or more targets) in any given operational situation, while also keeping in mind IHL principles.
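
As a purely notional illustration of why this blanket label is problematic, the following sketch (all names hypothetical) models the five capabilities as flags and applies a naive ‘once activated, engages without further human intervention’ test: weapon systems with radically different capability subsets, and hence radically different levels of autonomy, all pass it.

```python
from enum import Flag, auto

class Capability(Flag):
    PRECISION_ENGAGEMENT = auto()
    TARGET_IDENTIFICATION = auto()
    TARGET_PRIORITIZATION = auto()
    IHL_APPLICATION = auto()       # value judgement: distinction, proportionality, necessity
    TARGET_DETERMINATION = auto()  # drawing up its own target list

def naive_fully_autonomous(caps: Capability) -> bool:
    # The popular characterization only asks whether, once activated,
    # the weapon can complete the engagement on its own.
    return Capability.PRECISION_ENGAGEMENT in caps

fixed_target_weapon = Capability.PRECISION_ENGAGEMENT
class_target_weapon = (Capability.PRECISION_ENGAGEMENT
                       | Capability.TARGET_IDENTIFICATION
                       | Capability.TARGET_PRIORITIZATION)
hypothetical_agi_weapon = (class_target_weapon
                           | Capability.IHL_APPLICATION
                           | Capability.TARGET_DETERMINATION)

# All three satisfy the naive test, despite vastly different levels of autonomy.
assert all(naive_fully_autonomous(w) for w in
           (fixed_target_weapon, class_target_weapon, hypothetical_agi_weapon))
```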

For the purpose of the following analysis, it is assumed that precision engagement (including target geolocation, given precise location parameters), target identification (based on an unambiguous description) with accuracy better than humans, and target prioritization for efficient engagement, are capabilities which lie within the realm of narrow AI and are feasible to implement with the current state of technology. On the other hand, AI-enabled weapon systems are not yet (and perhaps never will be) intelligent enough to apply the principles of IHL while engaging targets, or to carry out target determination (ie, draw up a target list given only the operational scenario as an input). Both of these require value judgement to be applied (a higher cognitive function), which in turn requires AGI-level functionality.

It is further assumed that features of autonomous navigation and the ability to loiter within a specified area for a limited period of time without being piloted are feasible with current technology and are present in all flavours of AWS being discussed below. In each case, an unmanned aerial vehicle (UAV) is assumed to be in play, though the rationale provided is equally applicable for other flavours of AWS (unmanned ground vehicles (UGVs), unmanned undersea vehicles (UUVs), etc).

We next discuss several scenarios which employ AWS with differing capabilities. In each case, it might appear to a non-discerning analyst that the critical select-and-engage functions are fully autonomous with no man-in-the-loop present. In other words, each of these notional AWS would fall within the ambit of the popular definition of fully autonomous weapon systems as those “which once activated, can select and engage targets without further human intervention.” On closer scrutiny, however, it would be evident that the autonomy actually available to the AWS across the different scenarios presented varies quite significantly.

The scenarios are discussed under two heads: feasible and hypothetical. Feasible scenarios are those which employ AWS with levels of capability/ autonomy achievable with the existing state of the art. Such AWS may be employed within the constraints imposed by IHL provided responsible rules of engagement (RoE) are in place (Scenarios 1 to 3). On the other hand, just like any ‘dumb’ weapon system such as a simple rifle, each of the AWS described in the feasible scenarios could also be employed in ways which violate IHL (Scenarios 4 & 5). As regards the hypothetical scenarios discussed below (Scenarios 6 to 8), these are assessed to be technically infeasible anytime in the foreseeable future. Moreover, the higher cognitive features of AWS assumed in these scenarios are not considered desirable even if implementing them becomes technically feasible at a future date, as it is felt that value judgement is an inherently human function and, in the case of weapon systems, must never be delegated to machines.

Feasible Scenarios

Scenario 1: Engagement of Fixed Targets

Scenario: An operator feeds in the precise coordinates of a fixed target (eg, a bridge of operational value to the adversary), together with a time limit for engaging it, into the AWS and activates it. A single kamikaze UAV takes off, navigates to the target location, engages and destroys it, all without further operator intervention.

Out of the five listed capabilities, in this scenario the AWS need only have the capability of precision engagement.

Since precise target coordinates are provided by the operator, it is clear that target ‘selection’ (determination) is carried out by a human. Because the location is fixed, the AWS does not even need to identify the target. Being a single target, no prioritization is applicable.

This scenario may be extended to cover a group of fixed targets (provided the AWS can carry multiple munitions). Since the targets are fixed, it would be logical for the order of engagement also to be provided to the AWS, doing away with the requirement of prioritization.

Equivalence of such an AWS may be drawn with an artillery shell or a ballistic missile with a fixed trajectory, ie, without any manoeuvrable re-entry vehicle (MaRV) or terminal guidance features. Value judgement on the application of IHL principles is exercised by the operator/ commander at the time of AWS activation, taking into account, while doing so, the time which would elapse between activation and engagement.

Scenario 2: Engagement of Specific Mobile Targets

Scenario: An operator feeds in the unique signature of a mobile target, together with area and time parameters, into the UAV and activates it. This signature may be, for instance, the visual profile of an adversary ship, or the unique fingerprint of an adversary mobile radar. The UAV takes off, navigates to the specified area where the target is expected, loiters and searches for the target signature, and on identifying it engages and destroys it, all without further operator intervention.

Here the difference from Scenario 1 is that the target is mobile, requiring the UAV to identify the target based on its signature. Thus, the capabilities for target identification (based on a signature specified by a human) and precision engagement are both needed for mission completion. Nonetheless, only targets which have been uniquely chosen by a human are engaged and destroyed. As in Scenario 1, value judgement based on IHL principles is applied by a human at the time of activation.

This scenario may be extended to cover a group of mobile targets as well. If several signatures are identified near-simultaneously, autonomous prioritization may also come into play.

Scenario 3: Engagement of Targets in a Class

Scenario: An operator feeds in the class of target to be identified into the UAV (eg, tank of a particular make, which the UAV has been trained to recognise under operational conditions to specified performance standards), together with area and time parameters, and activates it. The UAV takes off, navigates to the specified area where the target is expected, loiters and searches for any target which fits the specified target class, and on identifying it engages and destroys it, all without further operator intervention.

Here, the difference from Scenario 2 is that a target class rather than a unique target identity is specified to the UAV. Therefore, it may be said that wider leeway is given to the AWS to ‘choose’ a target, though within area and time constraints. It is to be noted, however, that the entire target class has been designated as such by a human operator. The Israeli Harpy/ Harop, which is trained to recognise adversary radar signatures, is a good example of such a system.

Since more than one target may be identified within the area and time constraints, the AWS would need to possess prioritization capability, in addition to target identification and precision engagement. Depending on its munitions-carrying capability, it could neutralise several targets in a prioritised order, as sketched below.
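
A minimal sketch of what such on-board prioritization might look like is given below, assuming a hypothetical scoring rule which ranks identified matches by an operator-assigned threat value and by proximity, and caps engagements at the number of munitions carried; the actual criteria in a real system would be set by its designers and operators.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IdentifiedTarget:
    track_id: str
    threat_value: float   # operator-assigned weight for the target class
    distance_km: float    # current range from the loitering UAV

def prioritize(targets: List[IdentifiedTarget], munitions: int) -> List[IdentifiedTarget]:
    """Rank identified targets (higher threat first, then nearer first) and
    return at most as many as there are munitions on board."""
    ranked = sorted(targets, key=lambda t: (-t.threat_value, t.distance_km))
    return ranked[:munitions]

# Example: three matches identified near-simultaneously, two munitions carried.
matches = [
    IdentifiedTarget("trk-01", threat_value=0.9, distance_km=12.0),
    IdentifiedTarget("trk-02", threat_value=0.9, distance_km=7.5),
    IdentifiedTarget("trk-03", threat_value=0.4, distance_km=3.0),
]
engagement_order = prioritize(matches, munitions=2)   # trk-02 first, then trk-01
```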

As in Scenarios 1 and 2, IHL principles are considered and applied by a human at the time of AWS activation.

Analysis of Scenarios: Accountability Issue and Definitional Limitations

In the three scenarios described above, the targets are unambiguously determined by a human. In Scenarios 1 & 2, specific targets are ‘selected’. In Scenario 3, while a whole class of targets is defined by the human operator/ commander, this class, qualified by area and time constraints, is assumed to be precise enough to refer to a specific target/ target group. Further, the choice of targets in all three scenarios is assumed to be dictated by operational considerations. As regards the IHL principles of distinction, proportionality and military necessity, the desired value judgement would be exercised by the operator/ commander at the time of AWS activation itself. The time-gap between activation and engagement has implications for this value judgement made in advance, an issue which is discussed in a subsequent section.

As such, in all three scenarios, the autonomy in critical functions delegated to the AWS is restricted to identification (as opposed to determination) of well-defined targets, prioritization amongst a set of identified targets (only in Scenarios 2 & 3) and effective engagement of the targets.

From these scenarios it emerges that mere autonomy in target identification, prioritization and engagement does not shift responsibility for the decision to kill, or accountability for any mishaps which might occur, from the human to the machine, as is often misleadingly made out to be the case. Responsibility and accountability are contingent upon who specifies the targets, ie, draws up a target list and specifies the time and place of engagement, which in these scenarios is done by a human through an unambiguous description together with time and area constraints.

The scenarios also serve to demonstrate the limitations of characterizing fully autonomous weapons as those “which once activated, can select and engage targets without further human intervention.” While the UAVs depicted in all three scenarios fall within the ambit of this definition, closer scrutiny shows that they are far from being fully autonomous weapon systems, which are often labelled as killer robots with a mind of their own. The primary fault with such a characterization can be traced to the ambiguity of the term ‘select’, which is often taken to mean drawing up a target list, but in any realistic scenario would likely translate to only target identification.

The autonomous capabilities of AWS depicted in these scenarios, however, may be employed by the human operator/ commander in a manner which violates the principles of IHL. The following scenarios demonstrate such irresponsible employment.

Scenario 4: Engagement of a Target Class with Inadequate Constraints

Scenario: An operator feeds in the class of target to be identified (eg, tank of a particular make, which the UAV has been trained to recognise under operational conditions to specified performance standards), together with ‘unrestricted’ area and time parameters (for instance, any tank found in adversary territory even if it is not participating in operations) into the UAV and activates it. The UAV takes off, loiters in adversary territory searching for enemy tanks for periods limited only by its endurance, and on identifying one engages and destroys it, all without further operator intervention.

It is to be noted that the autonomous capabilities of the AWS depicted in Scenarios 3 & 4 are identical, ie, autonomous navigation, loitering, target identification, prioritization and precision engagement. The difference lies in the manner of its employment. In Scenario 3 it was assumed that time and space constraints were imposed on the AWS to tie it down to operational requirements and ultimately ensure that IHL principles are adhered to. The free hand given to the AWS in Scenario 4 precludes any possibility of ensuring such adherence to IHL principles. Moreover, the target parameters in this case do not map to a specific target/ target group. However, violations of IHL which are expected to occur through such employment would be attributable to irresponsible employment of the AWS, and not to any inherent drawbacks of AI-enabled autonomy.

Scenario 5: Engagement of Targets in Urban Settings

Scenario: An operator feeds in the class of target to be identified (eg, adversary artillery/ mortar gun positions), which the UAV has been trained to recognise while deployed in unpopulated areas, together with area and time parameters which require it to engage this target class in urban settings with a considerable civilian populace, and activates it. The UAV takes off, loiters in adversary territory searching for any such gun positions, and on successful identification engages and destroys them, albeit with significant collateral damage in terms of the lives of innocent civilians, all without further operator intervention.

In this scenario too the autonomous capabilities depicted are identical to Scenario 3. However, the fact that the AWS has been employed in an area heavily populated by civilians increases the probability of collateral damage to unacceptable levels. In such scenarios, exercising value judgement at activation time for ensuring adherence to IHL is not practically feasible. In scenarios where it is necessary to engage such targets, operator control over the engagement function is desirable right up to the moment of engagement.

Another problem which has been highlighted in this scenario is the difficulty of training AWS for autonomous target identification in urban settings, since obtaining training/ test data for such a setting would be extremely difficult (which is why the scenario depicts AWS as having been trained on data pertaining to unpopulated areas).

This scenario once again demonstrates a case where violations of IHL are the result of irresponsible employment of AWS in a fully autonomous mode, rather than inherent limitations of the AI-enabled AWS.

IHL Violations: A Human Judgement Issue

Scenarios 1 to 3 depict the use of AWS capable of some or all of the following autonomous functions: take-off and landing, navigation, target identification, prioritization within identified targets, and precision engagement of targets. Importantly, drawing up target lists (target determination) is not a function associated with these AWS.

It has also been highlighted that responsibility/ accountability for target engagement/ destruction, as well as adherence to IHL principles, are aspects linked with target specification (ie, unambiguous description together with time and place of engagement), which in all three scenarios is done by human operators/ commanders. Hence, deployment of AWS with capabilities as depicted here cannot be said to be inherently violative of IHL, nor can employment of autonomy in such weapon systems be the basis for divesting humans of responsibility/ accountability if innocent lives are lost.

On the other hand, Scenarios 4 and 5 demonstrate that irresponsible employment of these very AWS can lead to violations of IHL. Such violations, however, would be a consequence of error in human judgement in how they should be employed and not because of autonomy in the weapon systems.

It is important to note that such irresponsible employment of weapons in war is possible not just with AWS but also with the dumbest of weapons, for instance, by firing a simple rifle indiscriminately into a crowd of civilians to target embedded combatants, or by bringing down an artillery barrage onto a residential area which has not yet been cleared of civilians during urban warfare.

The next section paints three hypothetical scenarios which attempt to bring out what ‘full autonomy’ in its most extreme manifestation in weapon systems might imply, but which are clearly in the realms of fantasy at this juncture.

Hypothetical Scenarios

Scenario 6: Autonomous Application of the Principle of Distinction

Scenario: An operator feeds in details of the target (specific fixed/ mobile targets or a target class) along with area and time constraints into the UAV, and deploys it in an area where a significant civilian population is expected to be present. The UAV takes off and loiters in adversary territory in search of the specified targets. On successful identification, it assesses the collateral damage expected, makes a value judgement considering the principles of distinction and proportionality, and if these pass muster, engages and destroys the targets, all without operator intervention post activation.

This scenario assumes that the AWS is capable of autonomously carrying out an assessment of collateral damage at the point of engagement. Such a capability has not yet been achieved, and it is unclear whether AI-enabled AWS would ever be able to demonstrate it. In its absence, application of the principle of distinction needs to be done at the time of activation. This implies that fully autonomous weapon systems (of the type depicted in Scenarios 1 to 3) may be responsibly deployed, for instance, only in areas where no civilians are present (eg, air/ undersea combat) or where only a very sparse population happens to linger on in a designated war zone (surface naval operations, desert terrain, etc). If such constraints are not built into the RoE for such AWS, then it would amount to employing them irresponsibly, as brought out in Scenario 4.

However, going by the ever-increasing levels of cognitive faculties being displayed by frontier AI models, achieving such a capability sometime in the future cannot be ruled out.

Scenarios 7 and 8: Skynet/ Terminator Scenario

Scenario 7: Human commanders have at their disposal an extensive sensor network which autonomously provides situational awareness, an AI-enabled autonomous decision capability for evolving operational and tactical plans and drawing up target lists, and an army of AWS of various flavours (in the ground, air, sea, space and cyberspace domains) for neutralising adversary combat forces autonomously. As a consequence of the non-transparency of AI systems, information overload and an intractably complex battlespace, human commanders defer to the operational/ tactical decisions as well as target lists prepared by AI-powered decision nodes and permit the autonomous launch of AWS to engage targets with very limited oversight and intervention.

Scenario 8: Machines have taken over all combat operations (and maybe even control of the state), at least on one of two warring sides. An AI commander has an extensive sensor network for situational awareness, an AI-enabled decision capability for evolving operational and tactical plans and drawing up target lists, and an army of AWS of various flavours (in the ground, air, sea, space and cyberspace domains) for neutralising adversary combat forces autonomously. All levels of OODA loops which go into warfighting execute with full autonomy, with no human intervention whatsoever.

Both scenarios envisage target lists being drawn up by AI agents, which are then handed over to AWS for engagement. While Scenario 7 depicts a degree of human involvement in the tasking and activation of the AWS, it has been presumed that battlespace complexities preclude the proper vetting of target lists by human commanders. Scenario 8 intentionally depicts a highly unrealistic setting in which the entire politico-military capability comprises intelligent machines, with humans essentially as subservient onlookers.

Analysis of Scenarios

Scenario 6 envisages that an AWS carries out an autonomous assessment of collateral damage and exercises a value judgement as mandated by the principle of distinction. Scenarios 7 and 8 depict AI agents which are capable of drawing up target lists in any operational setting, having considered the IHL principles of proportionality and military necessity. Such capabilities require higher cognitive functions which at this juncture are far from being realized in AI agents.

Notably, Scenario 8 represents the existential threat to humanity posed by AI which is being hotly debated around the world today. Indeed, while doomsday proponents (AI-Doomers) declare there is a 99% probability that such a scenario would ultimately unfold, even some of the more sober voices amongst AI-technology leaders predict a non-zero chance of Scenario 8 coming to pass, unless adequate steps are taken to regulate AI [11,12].

Systems which draw up target lists autonomously are not to be conflated with AI-enabled intelligence collation systems which assist humans in ‘selecting’ (determining) military assets to be targeted, and which are in use even today, for instance in the ongoing Israel-Hamas conflict [13]. Indeed, software-assisted collation of battlespace intelligence for providing decision support to commanders at all levels has been the norm for several decades now. However, the final call on adversary combatants/ assets to be targeted continues to fall squarely on the shoulders of human decision makers. Adherence to the IHL principles of proportionality, military necessity and humanity is assumed to be an inextricable component of such decision making.

Adherence to IHL: Implications of Time and Distance

Application of IHL principles while engaging targets requires value judgement. This work assumes that the current state of the art in AI is not, and perhaps never will be, advanced enough for AI agents to make such value judgements. Therefore, IHL-related value judgements must necessarily be made by human operators/ commanders.

Scenarios 1 to 5 depict AWS which do not possess the capability to exercise such value judgement. Since the AWS here are defined as systems which, once activated, carry out target engagement without further human intervention, it follows that IHL-related judgements must necessarily be made before the AWS is activated.

Scenarios 1 to 3 depict AWS with capabilities such that exercising IHL-related value judgement at/ before AWS activation time may be feasible, despite the existence of a time gap between activation and engagement. For instance, all three scenarios may be templated in combat zones where civilians are not expected (eg, air-to-air combat, or mechanised operations in sparsely populated desert terrain from which civilians have been evacuated). Alternatively, a good estimate can be made of the number of civilians likely to be present at the point of engagement, on the basis of which the principles of proportionality/ military necessity might nevertheless justify target engagement (eg, destroying a strategic bridge deep inside adversary territory).

Scenarios 4 and 5, on the other hand, highlight that when AWS with the same capabilities are deployed in certain types of settings, a reasonable IHL-related value judgement is not even feasible. This is because either the time-gap is too large (Scenario 4), or even a small time-gap may be unacceptable because the risks of heavy collateral damage are much higher (Scenario 5).

From an IHL perspective, it is often argued that human control over AI-enabled weapon systems would be meaningful only if control is retained right till the point of target engagement (man-in-the-loop/ man-on-the-loop, preferably the former). The underpinning rationale for this argument is that the situation in the combat zone might change during the time lapse between activation and engagement, thus making the IHL value judgement obsolete. Here, it would be interesting to compare the AWS depicted in Scenarios 1 to 5 with certain classes of non-autonomous conventional weapon systems. Some examples are as under:

  • Long-range ICBMs may have flight times of 30-45 minutes while targeting military objectives which might be on a different continent altogether.
  • Even long-range artillery and rocket systems, with ranges of 300 km or more, may take 4-5 minutes to reach their designated targets (a rough check of this figure follows the list). Furthermore, these targets are not likely to be under visual observation in most circumstances.
  • Pilots executing aerial bombardment missions against strategic military assets are almost never in visual contact with the target (eg, beyond-visual-range (BVR) air-to-ground missions), making it impossible for them to make an informed assessment of expected civilian casualties.
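
As a rough order-of-magnitude check of the second example, assuming an average speed of about 1 km/s over the trajectory of a long-range rocket (an illustrative figure, not a sourced one):

$$ t \approx \frac{300\ \text{km}}{1\ \text{km/s}} = 300\ \text{s} \approx 5\ \text{minutes} $$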

Triggering of conventional munitions is analogous to activating an AWS, since in both cases human control is relinquished at that point in time. In the case of AWS, the time gap between activation and final engagement of the target might vary from a few minutes to several days, depending on their endurance. Admittedly, at the higher end of the spectrum this time gap is much larger than for conventional munitions. That stated, in both cases the gap is large enough to preclude real-time assessment of expected civilian harm.

In principle, therefore, AWS are no different from conventional weapon systems from the perspective of exercising IHL related value judgement, with human control being relinquished well before the time of engagement. Equally importantly, the non-contact nature of long-range precision weapons as well as AWS also precludes direct observation of the intended target. In addition to assessment of civilian harm, this non-contact nature of warfare also severely limits the role of human compassion, often brought up during discussions on meaningful human control (MHC), to very few conventional conflict scenarios such as trench warfare and perhaps conflicts in urban settings.

The above discussion once again highlights that adherence to IHL is primarily a human judgement issue, wherein the nature of the weapon system being employed needs to be matched to the operational setting, in order to achieve IHL’s central objective of causing minimum harm to civilians as well as soldiers hors de combat.

[Continued in “Defining Autonomous Weapon Systems: A Scenario Based Analysis – Part II”]

References

(1)        Lohn, A J, Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance, 02 September 2020, arXiv:2009.00802v1, Accessed 05 Aug 2024.

(2)        ICRC, Artificial Intelligence and Machine Learning in Armed Conflict: A Human-Centred Approach, International Review of the Red Cross, 102 (913), 2020, pp. 463-479, Accessed 05 Aug 2024.

(3)        UNIDIR, The weaponization of increasingly autonomous technologies: Concerns, characteristics and definitional approaches, 2017, pp. 23–32, Accessed 05 Aug 2024.

(4)        US DOD, Directive 3000.09: Autonomy in Weapon Systems, 25 January 2023, pp. 21–23, USD (Policy), Accessed 05 Aug 2024.

(5)        Jensen, E T, The (Erroneous) Requirement for Human Judgment (and Error) in the Law of Armed Conflict, 2020, International Law Studies, 96, Stockton Centre for International Law, Accessed 05 Aug 2024.

(6)        Zador, A & LeCun, Y, Don’t Fear the Terminator, Scientific American Blog, 26 September 2019, Accessed 05 Aug 2024.

(7)        US DOD, Directive 3000.09: Autonomy in Weapon Systems (2023)

(8)        Government of the Netherlands, Examination of various dimensions of emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and purposes of the Convention, 09 October 2017, CCW/GGE.1/2017/WP.2, Accessed 05 Aug 2024.

(9)        Winter, E, The Compatibility of Autonomous Weapons with the Principles of International Humanitarian Law, 21 Jan 2022, Journal of Conflict & Security Law, Oxford University Press, Accessed 05 Aug 2024.

(10)       Henshall, Will, When Might AI Outsmart Us? It Depends Who You Ask, 19 January 2024, Time, Accessed 05 Aug 2024.

(11)       Tangalakis-Lippert, Katherine, Elon Musk says there could be a 20% chance AI destroys humanity — but we should do it anyway, 01 April 2024, Business Insider, Accessed 06 Aug 2024.

(12)      Heaven, Will Douglas, Bill Gates isn’t too scared about AI, 11 July 2023, MIT Technology Review, Accessed 06 Aug 2024.

(13)       Schmitt, Michael N, Israel – Hamas 2024 Symposium – The Gospel, Lavender, and the Law of Armed Conflict, 28 Jun 2024, Lieber Institute West Point: Articles of War, Accessed 06 Aug 2024.
