LessWrong (Curated & Popular) podcast

“Training AGI in Secret would be Unsafe and Unethical” by Daniel Kokotajlo

Subtitle: Bad for loss of control risks, bad for concentration of power risks

I’ve had this sitting in my drafts for the last year. I wish I’d been able to release it sooner, but on the bright side, it’ll make a lot more sense to people who have already read AI 2027.

  1. There's a good chance that AGI will be trained before this decade is out.
    1. By AGI I mean “An AI system at least as good as the best human X’ers, for all cognitive tasks/skills/jobs X.”
    2. Many people seem to be dismissing this hypothesis ‘on priors’ because it sounds crazy. But actually, a reasonable prior should conclude that this is plausible.[1]
    3. For more on what this means, what it might look like, and why it's plausible, see AI 2027, especially the Research section.
  2. If so, by default the existence of AGI will be a closely guarded [...]
The original text contained 8 footnotes which were omitted from this narration.

---

First published:
April 18th, 2025

Source:
https://www.lesswrong.com/posts/FGqfdJmB8MSH5LKGc/training-agi-in-secret-would-be-unsafe-and-unethical-1

---

Narrated by TYPE III AUDIO.
