Artificial intelligence in military decision-making: supporting humans, not replacing them
29/08/2024
The desire to develop technological solutions to help militaries in their decision-making processes is not new. More recently, however, militaries have been incorporating increasingly complex forms of artificial intelligence-based decision support systems (AI DSS) into their decision-making processes, including decisions on the use of force. What is novel about this development is that the way these AI DSS function challenges humans’ ability to exercise judgement in military decision-making. This potential erosion of human judgement raises several legal, humanitarian and ethical challenges and risks, especially in relation to military decisions that have a significant impact on people’s lives, their dignity, and their communities. In light of this development, we must urgently and earnestly discuss how these systems are used and their impact on people affected by armed conflict.
With this post, Wen Zhou, Legal Adviser with the International Committee of the Red Cross (ICRC), and Anna Rosalie Greipl, Researcher at the Geneva Academy of International Humanitarian Law and Human Rights, launch a new series on artificial intelligence (AI) in military decision-making. To start the discussion, they outline some of the challenges and risks, as well as the potential, of using AI DSS while preserving human judgement in legal determinations on the use of force. They also propose measures and constraints regarding the design and use of AI DSS in these decision-making processes that can inform current and future debates on military AI governance, in order to ensure compliance with international humanitarian law (IHL) and help mitigate the risk of harm to people affected by those decisions.