LessWrong (Curated & Popular) podcast

“The best simple argument for Pausing AI?” by Gary Marcus

Not saying we should pause AI, but consider the following argument:

  1. Alignment without the capacity to follow rules is hopeless. You can’t possibly follow laws like Asimov's Laws (or better alternatives to them) if you can’t reliably learn to abide by simple constraints like the rules of chess.
  2. LLMs can’t reliably follow rules. As discussed in Marcus on AI yesterday, per data from Mathieu Acher, even reasoning models like o3 empirically struggle with the rules of chess, and they fail even though they can explicitly explain those rules (see same article); a minimal sketch of what such a legality check looks like follows this excerpt. The Apple “thinking” paper, which I have discussed extensively in 3 recent articles on my Substack, gives another example, where an LLM can’t play Tower of Hanoi with 9 discs. (This is not a token-related artifact.) Four other papers in the last month have shown related failures to comply with moderately complex rules.
  3. [...]
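
To make “reliably follow rules” concrete, here is a minimal sketch of the kind of legality check point 2 alludes to. It is an illustration only, not the methodology behind Acher’s data or the Apple paper: it assumes Python with the python-chess library as the ground-truth rules engine, and the function name and example moves are hypothetical.

```python
# Sketch: score a model's proposed chess moves for legality, using
# python-chess as the ground-truth rules engine. Hypothetical helper,
# not the evaluation code behind the cited results.
import chess

def legal_move_rate(proposed_moves_san: list[str]) -> float:
    """Play a model's SAN moves from the start position and return the
    fraction that were legal before the first illegal or unparsable one."""
    board = chess.Board()
    legal = 0
    for san in proposed_moves_san:
        try:
            board.push_san(san)  # raises a ValueError subclass on illegal/invalid SAN
            legal += 1
        except ValueError:
            break  # first rule violation ends the attempt
    return legal / len(proposed_moves_san) if proposed_moves_san else 0.0

# Example: the first two moves are legal; castling on White's second move is
# not, because the f1 bishop and g1 knight are still in the way.
print(legal_move_rate(["e4", "e5", "O-O"]))  # ~0.67
```

The point being made in the excerpt is that a deterministic checker like this is trivial to write, yet, per the cited data, even reasoning models still produce moves it would reject.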

---

First published:
June 30th, 2025

Source:
https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

---



Narrated by TYPE III AUDIO.
