LessWrong (Curated & Popular) podcast

“Notes on fatalities from AI takeover” by ryan_greenblatt

Suppose misaligned AIs take over. What fraction of people will die? I'll discuss my thoughts on this question and my basic framework for thinking about it. These are some pretty low-effort notes, the topic is very speculative, and I don't get into all the specifics, so be warned.

I don't think moderate disagreements here are very action-guiding or cruxy on typical worldviews: it probably shouldn't alter your actions much if you end up thinking 25% of people die in expectation from misaligned AI takeover rather than 90% or end up thinking that misaligned AI takeover causing literal human extinction is 10% likely rather than 90% likely (or vice versa). (And the possibility that we're in a simulation poses a huge complication that I won't elaborate on here.) Note that even if misaligned AI takeover doesn't cause human extinction, it would still result in humans being disempowered and would [...]

---

Outline:

(04:39) Industrial expansion and small motivations to avoid human fatalities

(12:18) How likely is it that AIs will actively have motivations to kill (most/many) humans?

(13:38) Death due to takeover itself

(15:04) Combining these numbers

The original text contained 12 footnotes which were omitted from this narration.

---

First published:
September 23rd, 2025

Source:
https://www.lesswrong.com/posts/4fqwBmmqi2ZGn9o7j/notes-on-fatalities-from-ai-takeover

---



Narrated by TYPE III AUDIO.
