Regulation of AI-Enabled Military Systems

A Risk-Based Approach - Part II
Sections
Taxonomy of AI-Enabled Military Systems
Assigning Weapon Classes to Risk Levels
Differentiated Risk Mitigation Measures
Promoting International Consensus on Regulation of LAWS
References

[This piece is in continuation of “Regulation of AI-Enabled Military Systems: A Risk-Based Approach – Part I”, which discussed the EU proposal for regulation of AI-enabled civilian applications, gave an overview of the risk-based approach, and covered the Risk Hierarchy as well as the rationale for a five-level risk architecture.]

Taxonomy of AI-Enabled Military Systems

Having identified the risk levels, the proposed Risk Hierarchy assigns different classes of weapon systems to the three higher risk levels (a similar assignment is yet to be carried out for the lower two levels associated with decision support systems). To do so, a taxonomy of weapon classes first needs to be worked out.

Classification Parameters

Weapon systems may be grouped into disjoint classes in several independent ways, using various criteria. Five such classification criteria have been kept in mind while evolving the present Risk Hierarchy: complexity and nature of the Observe-Orient-Decide-Act (OODA) Loop [1] (which gives rise to platform centric, network centric and swarm weapon systems); degree of autonomy (ie, fully-autonomous and semi-autonomous weapon systems); destructive potential (essentially nuclear and non-nuclear weapon systems); type of military operation (ie, offensive and defensive weapon systems); and type of target (ie, lethal and non-lethal weapon systems). An important parameter which has not been considered for the time being is the warfighting dimension (kinetic, cyber, electromagnetic (EM) and cognitive [2]), since the Risk Hierarchy presently restricts itself to kinetic weapons only.
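For readers who prefer a structured view, the minimal sketch below (in Python, purely illustrative) encodes these five classification parameters and their possible values as enumerations. The class and member names are introduced here only for convenience and are not part of the Risk Hierarchy itself.

```python
# Illustrative sketch only: the five classification parameters described above,
# rendered as Python enums. The names are assumptions made for clarity.
from enum import Enum

class OODALoop(Enum):              # complexity and nature of the OODA Loop
    PLATFORM_CENTRIC = "PC"
    NETWORK_CENTRIC = "NC"
    SWARM = "Swarm"

class Autonomy(Enum):              # degree of autonomy
    FULLY_AUTONOMOUS = "fully-autonomous"
    SEMI_AUTONOMOUS = "semi-autonomous"

class DestructivePotential(Enum):  # destructive potential
    NUCLEAR = "nuclear"
    NON_NUCLEAR = "non-nuclear"

class OperationType(Enum):         # type of military operation
    OFFENSIVE = "offensive"
    DEFENSIVE = "defensive"

class TargetType(Enum):            # type of target
    LETHAL = "lethal"
    NON_LETHAL = "non-lethal"
```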

Most of the parameters listed above are either self-explanatory or have been discussed earlier in this work. The OODA Loop parameter, however, requires further explanation.

The OODA Loop Parameter

Based on the nature of their OODA Loops, all weapon systems may be classified into Platform Centric (PC), Network Centric (NC) or Swarm weapon systems. In the military context, the OODA Loop broadly translates into the sensor to decision-maker to shooter loop. PC weapons refer to systems in which this loop closes on a single platform, eg, tank, aircraft, ship, etc, including their unmanned versions. In contrast, NC weapons differ in two respects: firstly, sensors, decision nodes and shooters (three types of entities) are geographically dispersed and connected via a network; and secondly, there could be multiple entities of each type making up the weapon system [3]. A weapon system using swarm technology, although not known to be operational yet in any military, would perhaps be more akin to a PC rather than an NC system, and may best be visualized as a locally distributed version of a single platform. It is pertinent to highlight here a few characteristics of these three weapon classes, as under:-

  • In PC weapons, the system is a single, mobile unit, which accentuates the fear that, if fully autonomous, the weapon system could go out of human control. A similar fear may not be relevant in the case of NC weapons. This characteristic is relevant to the objective of achieving meaningful human control referred to earlier in this work.
  • In the case of NC weapon systems, AI-powered autonomy could well be distributed amongst multiple sensors, decision nodes and shooters, with autonomous target identification being done by the sensors, the decision to engage being taken autonomously by the decision nodes, and autonomous targeting (tracking and homing) being carried out by the shooters (see the sketch following this list). In such a scenario, testing and verifying the reliability of a fully autonomous mode of functioning for the system as a whole would be problematic, as each of these entities may well be independently inducted into service and plugged into the targeting network.
  • For reasons similar to those stated above, fixing accountability would present difficulties when target engagement by an NC weapon system goes awry.
  • Swarm systems are still under development, but the emergent behaviour associated with such systems has been seen to display a degree of unpredictability which is not yet adequately understood.
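The sketch below is one possible way of modelling the NC case described above: a system of geographically dispersed sensors, decision nodes and shooters, each of which may independently incorporate autonomy. The class and field names, and the simplifying rule that the system is fully autonomous only when every entity in the chain is, are assumptions introduced purely for illustration.

```python
# Illustrative sketch only: an NC weapon system as a set of networked entities.
# Names and the fully_autonomous() rule are assumptions, not established terms.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:
    name: str
    autonomous: bool   # whether this entity performs its function autonomously

@dataclass
class NCWeaponSystem:
    sensors: List[Entity] = field(default_factory=list)        # target identification
    decision_nodes: List[Entity] = field(default_factory=list)  # decision to engage
    shooters: List[Entity] = field(default_factory=list)        # tracking and homing

    def fully_autonomous(self) -> bool:
        """Treat the system as fully autonomous only if every entity in the
        sensor-to-shooter chain operates autonomously; verifying this is hard
        when entities are inducted into service independently."""
        chain = self.sensors + self.decision_nodes + self.shooters
        return bool(chain) and all(e.autonomous for e in chain)
```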

Disjoint Classes

The proposed Risk Hierarchy uses primarily the OODA Loop and Autonomy parameters, modified to an extent by the other three, to arrive at a taxonomy comprising ten classes of weapon systems, which are indicated as separate rows in the Risk Hierarchy diagram. The attempt here has been to create these classes as disjoint sets, while at the same time collectively covering the full spectrum of weapon systems.

It is also pertinent to highlight here that, although several alternative taxonomies of weapon systems may be worked out, the parameters for evolving the Risk Hierarchy have been selected bearing in mind the overall objective, namely, to assess risk from the perspective of IHL.

The next section sets out the rationale adopted for assigning the above weapon classes to the five risk levels, as depicted in the diagram.

[Diagram: Risk Hierarchy]

Assigning Weapon Classes to Risk Levels

Risk Level 1: Unacceptable Risk

The following three types of systems are included in this category:-

  • Unpredictable Weapon Systems. As stated earlier, in this work unpredictable systems refer only to those in which learning is permitted in the critical functions during the deployment phase. Such weapon systems can metamorphose into a state for which they were not tested, and hence their development and deployment is considered unacceptable.
  • Fully Autonomous Human Targeting Weapon Systems. As stated above, these are taken to be fully autonomous weapon systems which are designed to identify and target specific humans or a human target group such as combatants. Such weapon systems have been included in the Unacceptable Risk category to avoid violating the IHL Principle of Distinction as well as the spirit behind the Martens Clause.
  • Fully Autonomous Nuclear Weapon Systems. Fully autonomous nuclear weapon systems, as defined in this paper, include only those systems which have man-out-of-the-loop or man-on-the-loop autonomy in the critical functions of identify-and-engage. For obvious reasons, such weapon systems must not be developed. Notably, however, a nuclear weapon system where, for instance, only the navigation function is autonomous would not fall in this category, but would be classified as a semi-autonomous nuclear weapon system.

Risk Level 2: High Risk

The following four classes of weapon systems are included in this category:-

  • Semi-Autonomous Nuclear Weapon Systems. There may be apprehensions about permitting any degree of autonomy in nuclear weapon systems. It is reiterated here that, in this work, in the case of semi-autonomous weapons the decision to release every weapon for each selected target is always taken separately on human command. For nuclear weapons, a stringent protocol is expected to be in place before such a command is given. However, incorporating autonomy in non-critical functions such as take-off and landing, navigation, etc, appears to be an acceptable proposition even in the case of nuclear weapons.
  • Swarm Technology Based Weapon Systems. This class is placed in the High Risk category because of the unpredictability associated with the emergent behaviour of swarms. Here the underlying premise is that the limits of unpredictable behaviour in swarms would be bounded, and that these bounds could be rigorously tested to lie within specified limits before fielding the system.
  • Fully Autonomous PC Weapon Systems. In principle, all weapon systems with full autonomy in the critical functions are placed in this High Risk category, barring the following exceptions: nuclear and human targeting systems are placed one level higher, in the Unacceptable Risk category, while purely defensive systems are placed one level lower, in the Medium Risk category. In PC weapon systems, all functions including the critical ones reside on the same platform, hence all such weapon systems are placed in the High Risk category.
  • Decision Nodes of Fully Autonomous NC Weapon Systems. In NC weapon systems, the sensors, decision nodes and shooters would be geographically distributed and, moreover, may be inducted into service separately. Since the decision to release a weapon would be taken at a decision node, only fully autonomous decision nodes (and not sensors and shooters) have been placed in the High Risk category.

Risk Level 3: Medium Risk

All weapon systems not covered under Levels 1 & 2 are placed in the Medium Risk category. This includes sensors and shooters of fully autonomous NC weapon systems, all purely defensive fully autonomous weapon systems, and all semi-autonomous weapon systems. However, there may be a case for shifting purely defensive fully autonomous weapon systems which are lethal (eg, static robot sentries) to the High Risk level.
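The assignment described in the three sections above may be summarised, purely for illustration, as a simple mapping from risk level to weapon classes. The dictionary below is an assumed rendering of the text, not an authoritative restatement of the Risk Hierarchy diagram.

```python
# Illustrative summary of the weapon-class-to-risk-level assignment described above.
RISK_ASSIGNMENT = {
    "Level 1: Unacceptable Risk": [
        "Unpredictable weapon systems (learning in critical functions during deployment)",
        "Fully autonomous human targeting weapon systems",
        "Fully autonomous nuclear weapon systems",
    ],
    "Level 2: High Risk": [
        "Semi-autonomous nuclear weapon systems",
        "Swarm technology based weapon systems",
        "Fully autonomous PC weapon systems (except nuclear, human targeting, purely defensive)",
        "Decision nodes of fully autonomous NC weapon systems",
    ],
    "Level 3: Medium Risk": [
        "Sensors and shooters of fully autonomous NC weapon systems",
        "Purely defensive fully autonomous weapon systems",
        "All semi-autonomous (non-nuclear) weapon systems",
    ],
}

if __name__ == "__main__":
    for level, classes in RISK_ASSIGNMENT.items():
        print(level)
        for weapon_class in classes:
            print(f"  - {weapon_class}")
```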

Differentiated Risk-Mitigation Measures

For the Risk Hierarchy to be employed as a useful tool, an important final step is to evolve a differentiated risk-mitigation mechanism which may be linked to the five risk levels. One might think that while a risk-based approach is valuable for attaining a good understanding of the risks posed by AI-enabled military systems, risk mitigation measures must be applied uniformly across all systems. This view has a certain intuitive appeal, since it promises to reduce risk to the barest minimum, under the assumption that the most rigorous risk mitigation measures would be applied to all systems. This paper, however, takes the view that the stringency of mitigation measures must increase with increasing risk. This is because instituting a common mitigation mechanism for all systems, independent of the risk posed by each, is likely to be counter-productive, and ultimately result in dilution of mitigation efforts while dealing with very high risk systems.

The US DOD Directive 3000.09 and the EU Proposal for AI Regulation provide good leads for working out a differentiated risk-mitigation mechanism. These are briefly discussed below.

DOD Directive 3000.09 on Autonomy in Weapon Systems

As early as Nov 2012, the US DOD issued Directive 3000.09 [4], which deals with autonomous weapon systems in general, not necessarily AI-enabled ones. Although the Directive is not specifically framed as a risk-based approach, on close scrutiny glimpses of a differentiated risk-mitigation mechanism are discernible therein.

The Directive divides autonomous weapon systems into two broad categories with different levels of risk, though these have not been specifically labeled as such. The first, lower risk, category includes three types of weapon systems: all human-in-the-loop systems; human-on-the-loop systems meant for local defence; and human-out-of-the-loop systems which apply non-lethal, non-kinetic force. The second, higher risk, category includes all other autonomous systems. Notably, the Directive does not specifically categorize any type of autonomous system into an unacceptable risk category.

Some examples of differentiated risk mitigation measures which have been incorporated into the Directive are as follows: approval levels for the development of the two categories are different, with higher risk systems required to undergo a more stringent clearance process; systems which undergo transformation (presumably as a result of self-learning) must be put through the TEV&V process after every self-learning phase; a much more stringent review procedure has been put in place for high risk systems; and field commanders are required to obtain approval from appropriate higher levels before deploying systems in a manner classified as high risk [5].
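One way to visualise the Directive's two-way split, as read from the description above, is the minimal decision rule sketched below. The data fields and the function requires_senior_review are hypothetical constructs introduced here for illustration; they are not terminology from the Directive itself.

```python
# A minimal sketch, under the assumptions stated in the lead-in, of the
# lower-risk vs higher-risk categorisation described above.
from dataclasses import dataclass

@dataclass
class AutonomousSystem:
    human_role: str        # "in-the-loop", "on-the-loop" or "out-of-the-loop"
    local_defence: bool    # system is meant for local defence
    lethal: bool
    kinetic: bool

def requires_senior_review(system: AutonomousSystem) -> bool:
    """Return True if the system falls in the higher-risk category, which is
    subject to a more stringent clearance and review process."""
    if system.human_role == "in-the-loop":
        return False
    if system.human_role == "on-the-loop" and system.local_defence:
        return False
    if system.human_role == "out-of-the-loop" and not system.lethal and not system.kinetic:
        return False
    return True
```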

The European Union (EU) Proposal for AI Regulation

As brought out previously in this work, the EU Proposal groups all AI systems into Unacceptable Risk, High Risk, Transparency Requirements and Negligible Risk categories. The proposal stipulates that AI practices which fall in the Unacceptable Risk category are prohibited. For High Risk systems, a very stringent risk management system has been proposed. For certain types of AI systems, only specified transparency obligations are required to be met by the fielders of the system. Finally, for systems which pose negligible risk, the proposal lays down a framework for the creation of codes of conduct, which aims to encourage providers of such systems to voluntarily apply the requirements which have been specified as mandatory for High-Risk AI systems. The proposal further states that providers of these low risk systems may create and implement the codes of conduct themselves.

Risk Hierarchy: Considerations for Evolving Mitigation Measures

The above discussion reveals that DOD Directive 3000.09 assumes two risk categories for autonomous systems (without referring to AI technologies), while the EU Proposal formally defines a four-level risk architecture, but deals with AI-enabled non-military systems. Both documents, however, institute stricter measures as the level of risk increases.

With respect to the Risk Hierarchy, the challenge is to evolve five sets of mitigation measures which are suited to military systems. These mitigation measures would come into play at every stage, from project clearance through design and development and TEV&V to deployment. While the development of such a mechanism is still a work in progress, a few considerations for doing so are presented here.

Out of the five risk levels, the easiest to address is the Unacceptable Risk level since, by definition, systems grouped under this level must not be developed at all. Next, at the other end of the spectrum, for systems which fall in the Negligible Risk category, some mitigation measures would still be applicable, but these are likely to be more in the nature of best practices in AI design and development, aimed at addressing issues such as data integrity and brittleness. Incorporating these into the Risk Hierarchy should therefore not present many difficulties.

Amongst the remaining three (middle) levels, risk mitigation measures for the levels which relate to weapon systems, namely the High and Medium Risk levels, would need to be worked out much more carefully than those for the Trust Requirements level, which comprises decision support systems having, by definition, a human-in-the-loop.

For the two levels associated with weapon systems, in the case of AI techniques used for object recognition in sensors and precision targeting in shooters, rigorous testing against specified performance standards should be adequate to sufficiently mitigate AI-related risks. On the other hand, the risk associated with AI-enabled autonomy in the critical decision-to-engage function would be much greater, and suitable measures would need to be incorporated at every stage of the system life cycle to address this risk. In a similar vein, the unpredictability associated with the emergent behaviour of swarms, which is a relatively new area of research, would need to be properly understood and mitigation measures instituted accordingly.

As regards systems categorised under the Trust Requirements level, mandatory use of XAI (explainable AI) is perhaps warranted to avoid automation bias and also to make these systems trustworthy from the perspective of commanders. XAI techniques, however, have yet to evolve to a stage where they may be practically incorporated into system design. In the interim, therefore, special care would need to be taken to ensure that commanders leverage decision support provided by “black-box” AI systems with maturity.

Finally, project clearance and review processes should be made more stringent as the risk levels increase.
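Drawing the above considerations together, the sketch below pairs each of the five risk levels with the broad character of mitigation envisaged for it. The phrasing paraphrases this section and is indicative only; it is not a finalised mitigation mechanism.

```python
# Illustrative, assumed summary of the differentiated mitigation considerations above.
MITIGATION_BY_LEVEL = {
    "Unacceptable Risk": "No development permitted",
    "High Risk": "Measures at every life-cycle stage (project clearance, design and development, TEV&V, deployment); most stringent clearance and review",
    "Medium Risk": "Rigorous testing against specified performance standards (eg, object recognition in sensors, precision targeting in shooters)",
    "Trust Requirements": "Mandatory XAI once mature; in the interim, safeguards against automation bias in decision support",
    "Negligible Risk": "Best practices in AI design and development (data integrity, brittleness)",
}
```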

Promoting International Consensus on Regulation of LAWS

As noted in a previous section, deliberations have been underway at the UN since 2014 on the legal and ethical implications of developing and fielding LAWS against the backdrop of existing IHL. At the Dec 2019 meeting of the UN GGE on LAWS, a set of 11 guiding principles was accepted by all parties. However, consensus remains elusive on how AI-enabled weapon systems may be regulated through a legally binding international instrument. Even amongst major military powers such as the US, China and Russia, a common understanding on the regulation of LAWS does not as yet exist. The risk-based approach presented here could contribute usefully towards arriving at such an understanding, as explained below.

Granular Discussions Would Facilitate Consensus Building

A key reason for not being able to arrive at a consensus is that the discussions are very general in nature, and treat all AI-enabled weapon systems as one category. This makes it very difficult to identify specific areas of disagreement which might be taken up for resolution. The Risk Hierarchy would facilitate consensus by enabling these discussions to be carried out at a more granular level, as explained below:-

  • LAWS correspond to the top three levels of the Risk Hierarchy. By splitting weapon systems into different risk levels, states could focus on certain very high risk categories, and deliberate on whether these should be banned altogether. At the other end of the spectrum, states might find it easier to agree that certain low risk categories are not likely to fall afoul of IHL and should perhaps be left unregulated.
  • It is reiterated here that this model carries out risk evaluation primarily from an IHL perspective, based on the five classification parameters discussed in an earlier section. For instance, ‘OODA loop complexity’ would likely increase unpredictability in a weapon system, which might in turn result in violation of the IHL principle of Distinction; the ‘destructive potential’ parameter has a bearing on the principles of Proportionality and Military Necessity; the ‘degree of autonomy’ parameter is central to discussions on Meaningful Human Control (MHC); and so on.
  • By evaluating risks presented by different categories of weapons on the basis of well-defined parameters, it might be easier to achieve consensus on the relative degree of risk presented by them vis-à-vis one another, and also on their allocation to different risk levels within the Hierarchy.
  • As a final step towards achieving consensus, if agreement is reached on classifying various weapon categories into different risk levels, it should be easier therefrom to also come to a common understanding on the risk mitigation procedures and/ or best practices which should be put in place for each risk level, as also on whether there is a need to add to or modify existing provisions in IHL for dealing with LAWS.

Self-Regulation by Responsible Militaries

It may justifiably be assumed that responsible militaries would not wish to act in disregard of IHL, or employ weapon systems which may have negative fallouts for their own forces. For instance, advanced military powers prefer precision targeting over dumb bombs, in order to minimise collateral damage (as dictated by the principles of Distinction and Proportionality) and also to enhance their own combat effectiveness. Moreover, no military commander would like to employ “unpredictable” weapon systems over which he lacks full control, for obvious reasons. As another example, if fully autonomous nuclear weapon systems were to malfunction, the result could be mutually assured destruction.

The Risk Hierarchy, by piercing through generalities and evaluating risk through a well-reasoned approach, helps in understanding how specific categories of AI powered weapon systems might result in IHL violations or otherwise be detrimental to a state’s own military operations. In doing so, it encourages responsible states to institute a self-regulatory mechanism for mitigating these risks. Such self-regulatory mechanisms adopted by states in their own interests should provide a good starting point for achieving international consensus on AI regulation.

Conclusion

The primary motivation for adopting a risk-based approach to the regulation of AI-enabled systems is to mitigate risks while at the same time leveraging the power of AI for the benefit of humankind. Given that regulatory efforts for AI technologies are very much in the early stages of evolution, the EU Proposal, which adopts a risk-based approach for non-military systems and is perhaps a first of its kind, deserves to be commended.

The Risk Hierarchy presented in this paper is an attempt to evolve a similar risk-based model for AI-enabled military systems. As is clear from the analysis presented above, the overall objective, nature of risks and the approach adopted for mitigation in the case of military systems differ substantially from what is applicable to non-military applications. In its current form, the Risk Hierarchy provides the conceptual contours of one such approach. However, it needs to be refined further before it can qualify as a mechanism for moving beyond mere enunciation of principles and be put to practical use.

References

(1)      Rule, Lt Col Jeffrey N, A Symbiotic Relationship: The OODA Loop, Intuition, and Strategic Thought, Mar 2013, Strategy Research Project, US Army War College, Accessed 17 Jan 2022.

(2)      Lt Gen (Dr) R S Panwar, 21st Century Warfare: From Battlefield to Battlespace, 06 Oct 2017, Future Wars, Accessed 17 Jan 2022.

(3)      Lt Gen (Dr) R S Panwar, Network Centric Warfare: Understanding the Concept, 22 Sep 2017, Future Wars, Accessed 17 Jan 2022.

(4)      US DOD Directive 3000.09: Autonomy in Weapon Systems, 21 Nov 2012, US DOD, Accessed 17 Jan 2022.

(5)      Ibid., pp. 3, 6, 7-8, 12.
