Defining Autonomous Weapon Systems: A Scenario Based Analysis

Part III: Online Learning and Complex Architectures
Sections
Online Learning
Scenario 10: AWS with Online Learning Feature
Complex AWS Architectures
A Spectrum of AWS Architectures
Correlation with Proposed Definitions
Conclusion

 

[This piece is in continuation to “Defining Autonomous Weapon Systems: A Scenario Based Analysis – Parts I & II”, which analyzed extant definitions of AWS against the backdrop of several feasible and hypothetical scenarios to highlight certain limitations of these definitions, and proposed alternative definitions which might facilitate more focused discussions on the contentious subject of regulation of AWS.]

Online Learning

As already stated, for the purposes of this analysis, autonomy in any function is taken to mean simply the absence of human intervention or control in that function. However, there are two significantly different mechanisms by which AI-enabled autonomy may be implemented in each function, as explained below.

AI/ML-based systems have the inherent advantage that, post-deployment, the underlying neural networks may be retrained on additional data gathered from the environment, with the objective of improving operational performance. Retraining after deployment may follow either of two models:

  • Continual Training. Retraining is carried out in a development set-up; after each retraining cycle, the updated system undergoes test and evaluation to specified standards before being released as an update.
  • Online Learning. Retraining and the associated self-learning happen in real time while the system is in operation. Being a continuous process, this mode of learning precludes any formal test and evaluation: every update effectively metamorphoses the system into a new, untested state. At best, system performance may be monitored through periodic testing, which might mitigate undesired performance effects emerging between tests but would not eliminate them.
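The distinction between the two retraining models can be sketched in code. The following is a minimal illustrative sketch, not drawn from the source: the toy perceptron, the function names and the accuracy threshold are all assumptions made purely to show where the test-and-evaluation gate sits in each model.

```python
import copy

class Perceptron:
    """Toy stand-in for the neural network underlying an AI-enabled function."""
    def __init__(self, n_features):
        self.w = [0.0] * n_features
        self.b = 0.0

    def predict(self, x):
        s = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if s > 0 else 0

    def update(self, x, label, lr=0.1):
        err = label - self.predict(x)
        self.w = [wi + lr * err * xi for wi, xi in zip(self.w, x)]
        self.b += lr * err

def accuracy(model, samples):
    return sum(model.predict(x) == y for x, y in samples) / len(samples)

def continual_training_update(model, new_data, eval_set, threshold=0.9):
    """Continual training: retrain offline, release only if T&E passes."""
    candidate = copy.deepcopy(model)
    for x, y in new_data:
        candidate.update(x, y)
    if accuracy(candidate, eval_set) >= threshold:
        return candidate      # tested state, released as an update
    return model              # candidate rejected; fielded state unchanged

def online_learning_update(model, x, label):
    """Online learning: update in real time during operation, no gate at all."""
    model.update(x, label)    # the fielded system is now in an untested state
    return model
```

The structural point is that `continual_training_update` can refuse to field a bad update, whereas `online_learning_update` mutates the operational system directly, so every engagement thereafter runs on a state that has never passed formal test and evaluation.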

The scenario depicted below illustrates how online learning might be implemented in an AWS.

Scenario 10: AWS with Online Learning Feature

Scenario. Consider an AWS with two modes of functioning vis-à-vis its critical identify-and-engage function: man-in-the-loop (Controlled AWS) and man-out-of-the-loop (Directed AWS). While in the Controlled AWS mode, every time the AWS identifies a target, the controlling operator either accepts the identification as correct or rejects it as incorrect. This operator input is used to incrementally retrain the underlying object-recognition neural network used for target identification. At a subsequent stage in operations, the mode of functioning is switched to Directed AWS based on operational considerations. Since the system has changed as a result of continuous learning, the AWS would now be operating in an untested state, without the benefit of human intervention in its critical functions.
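The feedback loop described in this scenario can be sketched as follows. This is a hypothetical illustration, not from the source: the class and function names are invented, and the placeholder methods stand in for the object-recognition network and the operator.

```python
class TargetRecognizer:
    """Stand-in for the AWS object-recognition network."""
    def __init__(self):
        self.updates = 0          # count of incremental retraining steps
        self.tested = True        # True only for the state that passed T&E

    def identify(self, sensor_frame):
        # placeholder for the network's inference on one sensor frame
        return {"target": sensor_frame["obj"], "confidence": sensor_frame["conf"]}

    def retrain(self, detection, label):
        # placeholder incremental update from one operator-labelled example
        self.updates += 1
        self.tested = False       # every update yields an untested state

def operator_verdict(detection):
    """Stand-in for the human controller: accept (1) or reject (0)."""
    return 1 if detection["confidence"] > 0.5 else 0

def controlled_mode_step(model, frame):
    """Man-in-the-loop: the operator's input doubles as a training signal."""
    detection = model.identify(frame)
    label = operator_verdict(detection)
    model.retrain(detection, label)
    return bool(label)            # engagement proceeds only on acceptance

def directed_mode_step(model, frame):
    """Man-out-of-the-loop: the (possibly untested) model decides alone."""
    detection = model.identify(frame)
    return detection["confidence"] > 0.5
```

After even one pass through `controlled_mode_step`, the `tested` flag is false; switching to `directed_mode_step` then removes the human check while the system is in exactly that untested state, which is the crux of the scenario.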

The online learning feature is analogous to the “evolution” characteristic referred to in the Chinese definition of AWS. It is felt that online learning in weapon systems poses an unacceptably high risk, since it precludes testing of the system’s continuously changing internal states. While online learning may be acceptable in certain low-risk military applications such as logistics planning and predictive maintenance, it would be prudent to ensure that no weapon system update is operationally deployed without passing specified test and evaluation standards. In other words, online learning as a feature should ideally be prohibited in weapon systems.

Complex AWS Architectures

In the section on Proposed AWS Definitions, it was stated that the definitions suggested therein relate to platform-centric AWS at the tactical level. This is to an extent reflected in the language of the definitions. For instance, the mobile and platform-centric nature of the AWS is implied in functions such as launch and navigation, which are referred to in the definitions. Such a focused approach to defining AWS has been adopted to remove ambiguity in the definitions, thus allowing their mapping to actual systems and, more importantly, facilitating consensus on AWS regulation.

Most discussions on AWS in international fora presume AWS to be a single platform which is mobile. Some of the formal definitions also provide clear cues that such an assumption has been made. For instance, the French definition refers to loss of communications with the military chain of command. The US DoD Directive 3000.09 refers to an operator as one who operates a platform or a weapon system. The UK definition has been given out in a doctrine which pertains to unmanned aircraft systems.

However, AWS need not be restricted to mobile, tactical level platforms. Several architectures of AWS may be envisaged which are different from such a visualisation, in increasingly complex ways. This section attempts to describe some of these variations in AWS manifestations, and also analyses how the proposed definitions fare in relation to all these versions of AWS.

A Spectrum of AWS Architectures

The following types of AWS may be envisaged, none of which would be classified as “mobile, tactical level platforms”:

  • Tactical, Static, Platform-Centric. Being static, these are likely to be defensive in nature, such as the US Phalanx close-in weapon system, or the South Korean Robot Sentries.
  • Tactical, Static/ Mobile, Net-Centric. Here, the sensor, decision node and shooter are on different platforms, one or more of which may be mobile, all integrated over a network, and designed to operate at the tactical level. An example would be an artillery command and control system, which has static or mobile sensors acting as observation posts (OPs) which pass back target end information to an autonomous decision node, which in turn passes fire orders to static artillery guns for engaging the targets. Another example of a defensive netcentric system is the Israeli Iron Dome, which is fully autonomous and uses AI technologies for some of its functions.
  • Operational/ Strategic, Static/ Mobile, Net-Centric. Here, a large number of different sensors (space-based, air-borne, ground-based), decision nodes, and shooters (unmanned aircraft, missile systems, artillery guns, even tactical AWS, etc) are all networked together and autonomously engage designated targets as and when they appear in the operational area. The entire complex network of sensors, decision nodes and shooters is not limited to a specific weapon system, and is designed to operate at operational as well as strategic levels.
  • Tactical Armed Swarms. The task of engaging group targets, which may be given to a single platform armed with multiple munitions, could also be assigned to an armed swarm. Though an armed swarm is a more complex autonomous architecture, Scenarios 1 to 3 (as also Scenarios 4 and 5, which demonstrate IHL violations) are equally applicable to swarms of armed platforms in all domains (land, air and sea).
  • Strategic (Nuclear) C2 Systems. An autonomous nuclear command and control (C2) system, incorporating satellite-based sensors and nuclear missiles as shooters, would be an example of a weapon-specific autonomous system with strategic effects.

Correlation with Proposed Definitions

Notwithstanding the complexity of some of the above architectures, the design of each of these could be constrained to function as weapon systems which fall within the scope of Directed AWS, with no higher-level cognitive functions built into them. Conversely, one can envisage equivalents of each of the above architectures which align with either the Cognitive or Controlled AWS definitions. Notably, some of the functions referred to in the definitions may be suppressed in some architectures. For instance, for static systems (or sub-systems) the launch and navigation functions may not be applicable.

Thus, the definitions appear well suited to complex weapon architectures as well. That said, it needs to be highlighted that, as architectures become more complex, the risks arising from delegating autonomy from humans to machines increase significantly, perhaps non-linearly. For instance, while it may be acceptable to develop and deploy a tactical, platform-centric Directed AWS under a rigorous risk mitigation framework, the same will likely not be true for strategic net-centric AWS or autonomous nuclear weapon systems, where the associated risks may be unacceptably high.

Conclusion

The primary motivation for carrying out the above analysis was to reconcile the wide variation in approaches adopted by different states for defining AWS. A scrutiny of these definitions reveals considerable ambiguity and internal inconsistencies, which hamper attempts to map them to extant or envisaged AWS.

A scenario-based analysis has been adopted to demonstrate that an ambiguous definition could map to several AWS with widely varying capabilities, which in turn has the effect of impeding useful and focused discussions on evolving a regulatory framework for AI-enabled weapon systems. The analysis helps to identify functional capabilities of AWS which are necessary to be incorporated into a definition to make it precise and unambiguous. Armed with this knowledge, a set of three definitions has been proposed, which captures the whole spectrum of autonomy in weapon systems in a manner which is conducive for evolving a regulatory framework. The proposed definitions are precise enough to map onto extant and envisaged AWS unambiguously.

This work makes the following additional contributions: it makes a distinction between feasible and hypothetical scenarios from a technology perspective; it highlights the fact that AWS which are technologically feasible in the foreseeable future leave humans firmly in control, and hence accountable; it brings out that while feasible AWS, as defined, cannot inherently be violative of IHL principles, they may still lead to undesired results if misemployed; it assesses that online learning in AWS would likely pose excessively high risks, and hence recommends that any development of such systems be undertaken with extreme caution; and finally, it analyses complex AWS architectures, characterized by several OODA loops at tactical, operational and strategic levels embedded within each other, and relates these architectures to the proposed definitions.

It is hoped that the analysis presented in this work, by bringing rigour and precision into defining AWS, would facilitate achieving consensus on risk evaluation and mitigation frameworks at various international fora.

 
