LessWrong (Curated & Popular) podcast

"You can’t imitation-learn how to continual-learn" by Steven Byrnes

In this post, I’m trying to put forward a narrow, pedagogical point, one that comes up mainly when I’m arguing in favor of LLMs having limitations that human learning does not. (E.g. here, here, here.)

See the bottom of the post for a list of subtexts that you should NOT read into this post, including “…therefore LLMs are dumb”, or “…therefore LLMs can’t possibly scale to superintelligence”.

Some intuitions on how to think about “real” continual learning

Consider an algorithm for training a Reinforcement Learning (RL) agent, like the Atari-playing Deep Q network (2013) or AlphaZero (2017), or think of within-lifetime learning in the human brain, which (I claim) is in the general class of “model-based reinforcement learning”, broadly construed.

These are all real-deal, full-fledged learning algorithms: there's an algorithm for choosing the next action right now, and there are one or more update rules for permanently changing some adjustable parameters (a.k.a. weights) in the model such that its actions and/or predictions will be better in the future. And indeed, the longer you run them, the more competent they get.
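To make those two components concrete, here is a minimal toy sketch (my own illustration, not from the post): a tabular Q-learner on a tiny chain environment. It contains exactly the two pieces described above, a rule for choosing the next action right now, and an update rule that permanently changes adjustable parameters (here, a Q-table) so that future actions are better.

```python
import random

N_STATES = 5          # states 0..4; reaching state 4 yields reward 1
ACTIONS = (-1, +1)    # move left or right along the chain

def step(state, action):
    """Toy environment dynamics: move along the chain, reward at the end."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

def run(n_episodes, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    # Adjustable parameters: one value per (state, action) pair.
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(n_episodes):
        s, done = 0, False
        while not done:
            # (1) Rule for choosing the next action right now (epsilon-greedy).
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2, r, done = step(s, a)
            # (2) Update rule that permanently changes the parameters
            #     (the standard Q-learning temporal-difference update).
            target = r + (0.0 if done else gamma * max(q[(s2, a2)] for a2 in ACTIONS))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

q = run(200)
# The longer it runs, the more competent it gets: after training, the
# learned values prefer moving right (toward the reward) from the start.
assert q[(0, +1)] > q[(0, -1)]
```

The point of the sketch is only that acting and updating are interleaved in one ongoing loop: every experience changes the weights, and the changed weights shape all future behavior.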

When we think of “continual learning”, I suggest that those are good central examples to keep in mind. Here are [...]

---

Outline:

(00:35) Some intuitions on how to think about “real” continual learning

(04:57) Why “real” continual learning can't be copied by an imitation learner

(09:53) Some things that are off-topic for this post

The original text contained 3 footnotes which were omitted from this narration.

---

First published:
March 16th, 2026

Source:
https://www.lesswrong.com/posts/9rCTjbJpZB4KzqhiQ/you-can-t-imitation-learn-how-to-continual-learn

---



Narrated by TYPE III AUDIO.
