ICRC Humanitarian Law and Policy Blog podcast

Transcending weapon systems: the ethical challenges of AI in military decision support systems

The military decision-making process is being challenged by the increasing number of interconnected sensors capturing information on the battlefield. This abundance of information offers advantages for operational planning – if it can be processed and acted upon rapidly. This is where AI-assisted decision-support systems (DSS) enter the picture. They are meant to empower military commanders to make faster and more informed decisions, thus accelerating and improving the decision-making process. Although they are intended only to assist – not replace – human decision-makers, they pose several ethical challenges that need to be addressed. In this post, Matthias Klaus, who has a background in AI ethics, risk analysis and international security studies, explores the ethical challenges associated with a military AI application often overshadowed by the dominant concern about autonomous weapon systems (AWS). He highlights a number of ethical challenges associated specifically with DSS, which are often portrayed as bringing more objectivity, effectiveness and efficiency to military decision-making. However, they could foster forms of bias, infringe upon human autonomy and dignity, and undermine military moral responsibility by creating peer pressure and deskilling.
