
"The Case for Low-Competence ASI Failure Scenarios" by Ihor Kendiukhov
25/3/2026
I think the community underinvests in the exploration of extremely-low-competence AGI/ASI failure modes, and I explain why.
Outline:
(00:19) Humanity's Response to the AGI Threat May Be Extremely Incompetent
(02:26) Many Existing Scenarios and Case Studies Assume (Relatively) High Competence
(04:31) Dumb Ways to Die
(07:31) Undignified AGI Disaster Scenarios Deserve More Careful Treatment
(10:43) Why This Might Be Useful
---
First published:
March 19th, 2026
Source:
https://www.lesswrong.com/posts/t9LAhjoBnpQBa8Bbw/the-case-for-low-competence-asi-failure-scenarios
---
Narrated by TYPE III AUDIO.
Humanity's Response to the AGI Threat May Be Extremely Incompetent
There is a sufficient level of civilizational insanity overall, and the field of AI itself has an empirical track record that speaks eloquently about its safety culture. For example:
- At OpenAI, a refactoring bug flipped the sign of the reward signal in a model. Because labelers had been instructed to give very low ratings to sexually explicit text, the bug pushed the model into generating maximally explicit content across all prompts. The team noticed only after the training run had completed, because they were asleep.
- The director of alignment at Meta's Superintelligence Labs connected an OpenClaw agent to her real email, at which point it began deleting messages despite her attempts to stop it, and she ended up running to her computer to manually halt the process.
- An internal AI agent at Meta posted an answer publicly without approval; another employee acted on the inaccurate advice, triggering a severe security incident that temporarily allowed employees to access sensitive data they were not authorized to view.
- AWS acknowledged that [...]