LessWrong (Curated & Popular) podcast

“Which side of the AI safety community are you in?” by Max Tegmark

In recent years, I’ve found that people who self-identify as members of the AI safety community have increasingly split into two camps:

Camp A) "Race to superintelligence safely”: People in this group typically argue that "superintelligence is inevitable because of X”, and it's therefore better that their in-group (their company or country) build it first. X is typically some combination of “Capitalism”, “Molloch”, “lack of regulation” and “China”.

Camp B) “Don’t race to superintelligence”: People in this group typically argue that “racing to superintelligence is bad because of Y”. Here Y is typically some combination of “uncontrollable”, “1984”, “disempowerment” and “extinction”.

Whereas the 2023 extinction statement was widely signed by both Camp B and Camp A (including Dario Amodei, Demis Hassabis and Sam Altman), the 2025 superintelligence statement conveniently separates the two groups – for example, I personally offered all US frontier AI CEOs the opportunity to sign, and none chose [...]

---

First published:
October 22nd, 2025

Source:
https://www.lesswrong.com/posts/zmtqmwetKH4nrxXcE/which-side-of-the-ai-safety-community-are-you-in

---



Narrated by TYPE III AUDIO.
