
“Training AGI in Secret would be Unsafe and Unethical” by Daniel Kokotajlo
April 21st, 2025
Subtitle: Bad for loss-of-control risks, bad for concentration-of-power risks
I’ve had this sitting in my drafts for the last year. I wish I’d been able to release it sooner, but on the bright side, it’ll make a lot more sense to people who have already read AI 2027.
- There's a good chance that AGI will be trained before this decade is out.
- By AGI I mean “An AI system at least as good as the best human X’ers, for all cognitive tasks/skills/jobs X.”
- Many people seem to be dismissing this hypothesis ‘on priors’ because it sounds crazy. But actually, a reasonable prior should treat this as plausible.[1]
- For more on what this means, what it might look like, and why it's plausible, see AI 2027, especially the Research section.
- If so, by default the existence of AGI will be a closely guarded [...]
---
First published:
April 18th, 2025
Source:
https://www.lesswrong.com/posts/FGqfdJmB8MSH5LKGc/training-agi-in-secret-would-be-unsafe-and-unethical-1
---
Narrated by TYPE III AUDIO.