AI IN MILITARY OPERATIONS

Regulation of AI-Enabled Military Systems: A Risk-Based Approach – Part II

This is the second part of a two-part article which sketches out the contours of a risk-based approach to the regulation of AI-enabled military systems. The first part reviewed the proposed EU AI Act, which adopts a risk-based approach for civilian applications, and then introduced a risk-based approach for military systems, represented by a Risk Hierarchy with a five-level risk architecture, along with the rationale for arriving at the five levels of risk. This second part continues the description of the risk-based approach by first categorizing weapon systems into ten classes and then assigning these classes to the top three levels of the Risk Hierarchy, which correspond to weapon systems. It then offers insights into how a differentiated risk-mitigation mechanism, linked to each of the five risk levels, may be worked out, and how such a risk-based approach could help in reaching international consensus on the regulation of AI-enabled military systems.


Regulation of AI-Enabled Military Systems: A Risk-Based Approach – Part I

Artificial Intelligence (AI) based applications and systems pose significant risks, arising mainly from the unique characteristics of machine learning technology. AI-enabled military systems are of special concern because of the threat they pose to human lives, giving rise to a host of legal, ethical and moral conundrums. At the same time, it is widely accepted that huge benefits could accrue to humankind, both on and off the battlefield, if the power of AI is leveraged in a responsible manner. This double-edged character of AI technologies points to the need for a carefully thought-out mechanism for regulating their development. The AI-triggered risks posed by different types of military systems may vary widely, and applying a common set of risk-mitigation strategies across all systems is likely to be suboptimal. A risk-based approach has the potential to overcome this shortcoming. This work attempts to sketch the contours of such an approach for the regulation of military systems. In this first part, the proposed EU AI Act, which adopts a risk-based approach for civilian applications, is first discussed. Thereafter, a risk-based approach for military systems is introduced, and the rationale for a five-level risk architecture is set out.


The Looming AI RMA: A Wake-up Call for India

In this episode of Def Talks on YouTube, Aadi Achint talks to Lt Gen (Dr) R S Panwar about the impact of Artificial Intelligence (AI) on 21st Century warfare. The conversation begins by elucidating how AI is expected to usher in the next revolution in military affairs (RMA), infusing intelligence into every element of the Observe-Orient-Decide-Act (OODA) loop and moving the human element further away from the battlefield as weapon autonomy increases. It then lists the various application areas of AI in warfare in the physical, cyber and cognitive realms. The unique characteristics of AI, such as self-learning capability, non-transparency, unpredictability and brittleness, are discussed next. The risks associated with the use of AI on the battlefield are then examined against the backdrop of ongoing debates at the UN and other global forums on the legal and ethical issues associated with AI-enabled military systems. The final part of the episode highlights the tremendous resources being allocated by major world militaries to the development of AI-powered systems, outlines India's current status in this critically important field, and concludes with a way forward for India to keep pace with the changing nature of warfare and arrest the widening gap in its military capabilities vis-à-vis China.


Lethal Autonomous Weapon Systems: Slaves not Masters! Meaningful Human Control, Saving Lives and Non-Feasibility of a Pre-Emptive Ban

Lethal Autonomous Weapon Systems (LAWS) are currently the subject of a global debate, particularly at the UN, over the ethical, moral and legal aspects of their deployment in future wars. Human rights groups are advocating a pre-emptive ban on their development on the grounds that deployment of LAWS would violate International Humanitarian Law (IHL). This is the final article in a three-part series focusing on issues at the heart of this ongoing debate. The previous two pieces dwelt on the unique characteristics of LAWS, analysed different positions on their purported violation of IHL, and discussed various nuances of Autonomy and Unpredictability. This piece examines the important notion of Meaningful Human Control (MHC) and brings out how the employment of LAWS may in fact save human lives. The pros and cons of a pre-emptive ban on LAWS vis-à-vis a binding regulation on their development are also discussed.


Lethal Autonomous Weapon Systems: Slaves not Masters! Conflict Scenarios, Autonomy and Unpredictability

AI-powered weapon systems are soon expected to acquire the capability to “select and kill” targets without human intervention. Such systems are widely referred to as Lethal Autonomous Weapon Systems (LAWS), sensationally dubbed “killer robots”. A debate is raging globally, particularly at the UN, over the ethical, moral and legal aspects of deploying LAWS in future wars, with human rights groups advocating a pre-emptive ban on their development. This is the second article in a three-part series focusing on issues at the heart of this ongoing debate. The first article discussed the unique characteristics of LAWS and why these are viewed as being in violation of International Humanitarian Law (IHL). This piece begins with an analysis of whether or not LAWS actually violate IHL principles against the backdrop of three typical warfighting scenarios. It goes on to discuss some noteworthy nuances of Autonomy in LAWS, the intriguing feature of Unpredictability in AI-powered systems, and the need for caution while attempting to make the critical “select and engage” function autonomous.


Lethal Autonomous Weapon Systems: Slaves not Masters! “Killer Robots” and International Humanitarian Law

Increasing levels of autonomy are being incorporated into AI-powered weapon systems on the modern battlefield, and such systems are soon expected to acquire the capability to “select and kill” targets without human intervention. They are widely referred to as Lethal Autonomous Weapon Systems (LAWS), sensationally dubbed “killer robots”. In reaction to these developments, a debate is raging globally, particularly at the UN, over the ethical, moral and legal aspects of deploying fully autonomous weapon systems in future wars. Human rights groups are advocating a pre-emptive ban on their development on the grounds that employment of LAWS would violate International Humanitarian Law (IHL). This work, comprising three articles, discusses issues at the heart of this ongoing debate. This first article gives a brief tour of the relevant literature on the subject, describes the unique characteristics of LAWS, and explains why these are viewed as being in violation of IHL.


Disruptive Military Technologies: An Overview – Part I

Cutting-edge technologies whose manifestation on the battlefield is expected to have a significant impact on the nature of warfare are often referred to as disruptive military technologies. At this point in time, potentially disruptive technologies include ICT, IW, AI & robotics, quantum, nano and hypersonic weapons, amongst several others. The impact of some of these technologies on the 21st Century battlespace is expected to be profound, and may even revolutionise warfare. This three-part series attempts to examine whether India is sufficiently geared up to leverage these technologies for building up its comprehensive military power in tune with its geopolitical aspirations. In Part I, a classification of disruptive military technologies based on their expected degree of impact on warfare is first presented. Thereafter, a brief look is taken at the global R&D status of AI & robotics and quantum technologies, as well as the initiatives being taken by India in these areas.


Artificial Intelligence in Military Operations: An Overview – Part II

This is the second part of a two-part article which focuses on the development and fielding of LAWS against the backdrop of rapid advances in the field of AI. Here, international as well as Indian perspectives are presented on the current status and future prospects for the development and deployment of LAWS. This part reviews the status of AI technology in India, assesses the current capability of the Indian Army (IA) to adapt to this technology, and suggests steps which need to be taken as a priority to ensure that the Indian defence forces keep pace with other advanced armies in the race to usher in a new AI-triggered Revolution in Military Affairs (RMA).


Artificial Intelligence in Military Operations: An Overview – Part I

Artificial Intelligence (AI) technologies hold great promise for facilitating military decisions, minimizing human casualties and enhancing the combat potential of forces. This is especially true in a wartime environment, when data availability is high, decision periods are short, and decision effectiveness is an absolute necessity. This two-part article focuses on the development and fielding of LAWS against the backdrop of rapid advances in the field of AI, and their relevance to the Indian security scenario. This first part reviews the status of AI technology, gives a broad overview of the possible military applications of this technology, and brings out the main legal and ethical issues involved in the ongoing debate on the development of LAWS.

