Will AI fundamentally alter how wars are initiated, fought and concluded?
26.09.2024
In the debate on how artificial intelligence (AI) will impact military strategy and decision-making, a key question is who makes better decisions — humans or machines? Advocates of greater reliance on artificial intelligence point to heuristics and human error, arguing that new technologies can reduce civilian suffering through more precise targeting and greater legal compliance. The counterargument is that AI-enabled decision-making can be as bad as, if not worse than, human decision-making, and that the scope for mistakes creates disproportionate risks. What these debates overlook is that it may not be possible for machines to replicate all dimensions of human decision-making. Moreover, we may not want them to.
In this post, Erica Harper, Head of Research and Policy at the Geneva Academy of International Humanitarian Law and Human Rights, sets out the possible implications of AI-enabled military decision-making as this relates to the initiation of war, the waging of conflict, and peacebuilding. She highlights that while such use of AI may create positive externalities — including in terms of prevention and harm mitigation — the risks are profound. These include the potential for a new era of opportunistic warfare, a mainstreaming of violence desensitization and missed opportunities for peace. Such potential needs to be assessed in terms of the current state of multilateral fragility, and factored into AI policy-making at the regional and international levels.