Author: Nathan Wood
Ethics and Armed Forces: AI and Autonomy in Weapons: War and Conflict out of Control? (special issue, 2024)
In some discussions of autonomous weapon systems (AWS), critics argue that such systems will not be sufficiently reliable for their use to be permissible. In particular, critics often contend that AI-enabled and autonomous weapons will not be sufficiently reliable with regard to their targeting abilities; specifically, they raise the risk that noncombatants and combatants who are hors de combat may be mistakenly (and thus immorally or illegally) targeted. In this article, however, I argue that this formulation of the critique is mistaken, because it focuses solely on the (un)reliability of the systems themselves, when a full assessment of autonomous weapon systems (and indeed of any weapon system) must consider not only the system itself, but the system as it is embedded within larger institutional and operational frameworks. This is critical, as reliability is not a binary assessment but a continuum: some things may be more or less reliable, and, more importantly, reliability can rarely be assessed in a vacuum; it is contextual and often user-dependent. Thus, we often cannot helpfully ask, “Is this particular system reliable?”, but must instead ask, “Is this system reliable for task X, when used by user Y?”. It follows that when assessing the value added (or lost) by using AWS, the comparison should not be a simple one between human reliability and AWS reliability, but rather between the reliability of humans using conventional (“dumb”) weapons and that of humans using autonomous and AI-enabled weapons. By focusing not just on the systems themselves, but on the larger socio-technical realities of these systems within larger systems of systems, it becomes clear that simple reliability assessments will not be sufficiently informative to ground a strong permission for, or prohibition of, the use of AWS.
Instead, we must look at how the incorporation of emerging technologies into military units and operations affects soldiers’ abilities to carry out their missions successfully and responsibly, and how overall reliability is affected.
Celetná 988/38
Prague 1
Czech Republic
This project receives funding from the Horizon EU Framework Programme under Grant Agreement No. 101086898.