
“I am worried about near-term non-LLM AI developments” by testingthewaters
August 1st, 2025
TL;DR
I believe that:
- Almost all LLM-centric safety research will not provide any significant safety value with regard to existential or civilisation-scale risks.
- The capabilities-related forecasts (not the safety-related forecasts) of Steven Byrnes' Foom and Doom articles are correct, except that they are too conservative about timelines.
- There exists a parallel track of AI research that has been largely ignored by the AI safety community. This agenda aims to implement human-like online learning in ML models, and it is now close to maturity. Keywords: Hierarchical Reasoning Model, Energy-based Model, Test-time Training.
- Within 6 months this line of research will produce a small natural-language-capable model that performs at the level of a model like GPT-3, but with improved persistence and effectively no "context limit", since it is constantly learning and updating its weights (a minimal sketch of this kind of online updating follows the list).
- Further development of this research will produce models that fulfill most of [...]
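To make "constantly learning and updating weights" concrete, here is a minimal, hypothetical sketch of test-time training on a toy next-token model: instead of holding history in a context window, each incoming chunk of the token stream triggers a gradient step, so history is absorbed into the weights. This illustrates the general technique only; the model class, names (TinyLM, consume_chunk), and hyperparameters are assumptions, not details from the post or the research it cites.

```python
import torch
import torch.nn.functional as F

class TinyLM(torch.nn.Module):
    """Toy next-token predictor standing in for a small language model."""
    def __init__(self, vocab_size: int = 256, dim: int = 64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.rnn = torch.nn.GRU(dim, dim, batch_first=True)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, tokens, state=None):
        out, state = self.rnn(self.embed(tokens), state)
        return self.head(out), state

model = TinyLM()
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
state = None  # recurrent state persists across chunks

def consume_chunk(chunk: torch.Tensor) -> float:
    """Read one (batch, time) chunk of the stream, then take a gradient
    step on it, so the weights (not a context window) carry history."""
    global state
    logits, new_state = model(chunk[:, :-1], state)
    state = new_state.detach()  # truncate backprop at the chunk boundary
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           chunk[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()  # weights updated at inference time: no fixed context limit
    return loss.item()

# Usage: feed an unbounded stream chunk by chunk; the model keeps adapting.
stream = torch.randint(0, 256, (1, 1000))  # stand-in for real token data
for start in range(0, stream.size(1) - 1, 32):
    consume_chunk(stream[:, start:start + 33])
```

Because the recurrent state is detached at chunk boundaries, memory cost stays constant per chunk, while long-range information is carried by the continually updated weights rather than by a fixed-length context.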
Outline:
(00:11) TL;DR
(01:22) Overview
(04:10) The Agenda I am Worried About
(07:36) Concrete Predictions
(09:29) What I think we should do
---
First published:
July 31st, 2025
Source:
https://www.lesswrong.com/posts/tEZa7PouYatK78bbb/i-am-worried-about-near-term-non-llm-ai-developments
---
Narrated by TYPE III AUDIO.