LessWrong (Curated & Popular) podcast

“Short Timelines don’t Devalue Long Horizon Research” by Vladimir_Nesov

Short AI takeoff timelines seem to leave no time for some lines of alignment research to become impactful. But any research rebalances the mix of currently legible research directions that could be handed off to AI-assisted alignment researchers or early autonomous AI researchers whenever they show up. So even hopelessly incomplete research agendas could still be used to prompt future capable AI to focus on them, while in the absence of such incomplete research agendas we'd need to rely on AI's judgment more completely. This doesn't crucially depend on giving significant probability to long AI takeoff timelines, or on expected value in such scenarios driving the priorities.

The potential for AI to take up the torch makes it reasonable to still prioritize things that have no hope at all of becoming practical for decades (with human effort). How well AIs can be directed to advance a line of research [...]

---

First published: April 9th, 2025

Source: https://www.lesswrong.com/posts/3NdpbA6M5AM2gHvTW/short-timelines-don-t-devalue-long-horizon-research

---

Narrated by TYPE III AUDIO.
