AI IN MILITARY OPERATIONS
Defining Autonomous Weapon Systems: A Scenario Based Analysis – Part III
This is the concluding segment of an article structured as a three-part series which analyses how autonomous weapon systems (AWS) are, or should be, characterized and defined. The first part used short vignettes/scenarios to bring out the ambiguity inherent in the present characterization of AWS. The second segment analysed a few well-known formal definitions of AWS against the backdrop of these scenarios and proposed a fresh set of definitions for AWS aimed at removing the ambiguity in extant definitions. This part concludes by first highlighting the high-risk implications of incorporating the online learning feature in AWS, and then briefly analysing how the proposed definitions fare when applied to complex AWS architectures.
Defining Autonomous Weapon Systems: A Scenario Based Analysis – Part II
This is the second segment of an article structured as a three-part series which analyses how autonomous weapon systems (AWS) are, or should be, characterized and defined. The rationale for this work is the assessment that extant AWS definitions are either too ambiguous or evolved against different contexts, leading participants and analysts in the ongoing debate on regulation of AWS to talk past each other. The first part used short vignettes/scenarios to bring out the ambiguity inherent in the present characterization of AWS. This second part analyses a few well-known formal definitions of AWS against the backdrop of these scenarios and proposes a fresh set of definitions for AWS aimed at removing the ambiguity in extant definitions.
Defining Autonomous Weapon Systems: A Scenario Based Analysis – Part I
There is no internationally accepted definition of autonomous weapon systems (AWS). However, these are popularly described as weapons which, once activated, can select and engage targets without further human intervention. With such a characterisation, most states declare that fully autonomous weapons must never be developed. This is because such a characterisation is mostly interpreted to mean that AWS can choose and destroy at will, all without any human intervention, which conjures up scary images of Skynet/Terminators taking over the human race.
On closer analysis, it is evident that the above definition of AWS is very ambiguous, covering within its ambit weapon systems with widely differing levels of autonomy, some of which should be ethically and legally acceptable to most states and militaries as well as other stakeholders.
This article is structured as a three-part series which takes a deeper look at how AWS are, or should be, characterized and defined. This first part uses short vignettes/scenarios to bring out the ambiguity inherent in the present characterization of AWS. The subsequent parts go on to analyse a few well-known formal definitions of AWS against the backdrop of these scenarios; propose a set of definitions for AWS aimed at removing ambiguity in extant definitions; and briefly analyse how the proposed definitions fare when applied to complex AWS architectures.
Regulation of AI-Enabled Military Systems: A Risk-Based Approach – Part II
This is the second part of a two-part article which sketches out the contours of a risk-based approach to regulation of AI-enabled military systems. The first part reviewed the proposed EU AI Act, which adopts a risk-based approach for civilian applications. Thereafter, a risk-based approach for military systems was introduced, represented by a Risk Hierarchy with a five-level risk architecture, and the rationale for arriving at the five levels of risk was explained. This second part continues the description of the risk-based approach by first categorizing weapon systems into ten classes, and then assigning them to the top three levels of the Risk Hierarchy, which correspond to weapon systems. An insight is then provided into how a differentiated risk-mitigation mechanism, linked to each of the five risk levels, may be worked out, and also how such a risk-based approach could help in reaching international consensus on the regulation of AI-enabled military systems.
Regulation of AI-Enabled Military Systems: A Risk-Based Approach – Part I
Artificial Intelligence (AI) based applications and systems pose significant risks, arising mainly from the unique characteristics of machine learning technology. AI-enabled military systems, in particular, are of special concern because of the threat they pose to human lives. This has given rise to a host of legal, ethical and moral conundrums. At the same time, it is universally accepted that huge benefits could accrue to humankind, both on and off the battlefield, if the power of AI is leveraged in a responsible manner. This double-edged character of AI technologies points to the need for a carefully thought out mechanism for regulating the development of AI technologies. AI-triggered risks posed by different types of military systems may vary widely, and applying a common set of risk-mitigation strategies across all systems will likely be suboptimal. A risk-based approach has the potential to overcome these disadvantages. This work attempts to sketch the contours of such an approach which could be adopted for the regulation of military systems. In this first part, the EU proposal for civilian applications, which adopts a risk-based approach, is first discussed. Thereafter, a risk-based approach for military systems is introduced, and the rationale for a five-level risk architecture is presented.
The Looming AI RMA: A Wake-up Call for India
In this episode of Def Talks on YouTube, Aadi Achint talks to Lt Gen (Dr) R S Panwar on the impact of Artificial Intelligence (AI) on 21st Century warfare. The conversation begins by elucidating how AI is expected to usher in the next revolution in military affairs (RMA), by infusing intelligence into every element of the Observe-Orient-Decide-Act (OODA) loop and taking the human element further away from the battlefield as a consequence of increased autonomy in weapons. It then lists the various application areas of AI in warfare in the physical, cyber and cognitive realms. The unique characteristics of AI are dwelt upon next, such as self-learning capability, non-transparency, unpredictability and brittleness. The risks associated with the use of AI on the battlefield are then discussed against the backdrop of ongoing debates on legal and ethical issues associated with AI-enabled military systems at the UN as well as other global forums. The final part of the episode highlights the tremendous resources being allocated by major world militaries towards the development of AI-powered systems, outlines India’s current status in this critically important field, and concludes by charting a way forward for India to keep pace with the changing nature of warfare and arrest the widening gap in its military capabilities vis-à-vis China.
Lethal Autonomous Weapon Systems: Slaves not Masters! Meaningful Human Control, Saving Lives and Non-Feasibility of a Pre-Emptive Ban
Lethal Autonomous Weapon Systems (LAWS) are currently the subject of a global debate, particularly at the UN, over ethical, moral and legal aspects related to their deployment in future wars. Human rights groups are advocating a pre-emptive ban on their development on the grounds that deployment of LAWS would be in violation of International Humanitarian Law (IHL). This is the final article in a three-part series focusing on issues which are at the heart of this ongoing debate. The previous two write-ups dwelt on the unique characteristics of LAWS, analysed different positions on their purported violation of IHL, and discussed various nuances of Autonomy and Unpredictability. This piece examines the important notion of Meaningful Human Control (MHC), and also brings out how employment of LAWS may in fact lead to the saving of human lives. The pros and cons of a pre-emptive ban on LAWS vis-à-vis a binding regulation on their development are also discussed.
Lethal Autonomous Weapon Systems: Slaves not Masters! Conflict Scenarios, Autonomy and Unpredictability
AI-powered weapon systems are soon expected to acquire the capability to “select and kill” targets without human intervention. Such systems are widely referred to as Lethal Autonomous Weapon Systems (LAWS), sensationally dubbed as “killer robots”. A raging debate is on globally, particularly at the UN, over the ethical, moral and legal aspects of deploying LAWS in future wars, with human rights groups advocating a pre-emptive ban on their development. This is the second of three articles in a series which discusses issues which are at the heart of this ongoing debate. The first article discussed the unique characteristics of LAWS, and why these are viewed as being in violation of the International Humanitarian Law (IHL). This piece begins with an analysis of whether or not LAWS actually violate IHL principles against the backdrop of three typical warfighting scenarios. It goes on to discuss some noteworthy nuances of Autonomy in LAWS, the intriguing feature of Unpredictability in AI-powered systems, and the need for caution while attempting to make the critical “select and engage” function autonomous.
Lethal Autonomous Weapon Systems: Slaves not Masters! “Killer Robots” and International Humanitarian Law
Increasing levels of autonomy are being incorporated in AI-powered weapon systems on the modern battlefield, which are soon expected to acquire the capability to “select and kill” targets without human intervention. Such systems are widely referred to as Lethal Autonomous Weapon Systems (LAWS), sensationally dubbed as “killer robots”. As a reaction to these developments, a raging debate is on globally, particularly at the UN, over the ethical, moral and legal aspects of deploying fully autonomous weapon systems in future wars. Human rights groups are advocating a pre-emptive ban on their development on the grounds that employment of LAWS would be in violation of International Humanitarian Law (IHL). This work, comprising three articles, discusses issues which are at the heart of this ongoing debate. This first article provides a brief tour of relevant literature on the subject, describes the unique characteristics of LAWS, and explains why these systems are viewed as being in violation of IHL.
Disruptive Military Technologies: An Overview – Part I
Cutting-edge technologies whose manifestation on the battlefield is expected to have a significant impact on the nature of warfare are often referred to as disruptive military technologies. At this point in time, potentially disruptive technologies include ICT, IW, AI & robotics, quantum, nano and hypersonic weapons, amongst several others. The impact of some of these technologies on the 21st Century battlespace is expected to be very profound, and may even revolutionise warfare. This three-part series attempts to examine whether India is sufficiently geared up to leverage these technologies for building up its comprehensive military power in tune with its geopolitical aspirations. In Part I, a classification of disruptive military technologies based on their expected degree of impact on warfare is first presented. Thereafter, a brief look is taken at the global R&D status of AI & robotics and quantum technologies, as well as the initiatives being taken by India in these areas.