LessWrong (Curated & Popular) podcast

"On Goal-Models" by Richard_Ngo

I'd like to reframe our understanding of the goals of intelligent agents to be in terms of goal-models rather than utility functions. By a goal-model I mean the same type of thing as a world-model, only representing how you want the world to be, not how you think the world is. However, note that this is still a fairly inchoate idea, since I don't actually know what a world-model is.

The concept of goal-models is broadly inspired by predictive processing, which treats both beliefs and goals as generative models (the former primarily predicting observations, the latter primarily “predicting” actions). This is a very useful idea, which e.g. allows us to talk about the “distance” between a belief and a goal, and the process of moving “towards” a goal (neither of which makes sense from a reward/utility function perspective).
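As a minimal sketch of this framing (not the author's formalism): suppose both the belief and the goal are probability distributions over a handful of hypothetical world states, and we pick KL divergence as one illustrative choice of "distance" between them. All names here (states, belief, goal, kl_divergence) are invented for the example.

    import numpy as np

    # Hypothetical discrete world states
    states = ["cold_room", "warm_room", "hot_room"]

    belief = np.array([0.7, 0.2, 0.1])    # how the agent thinks the world is
    goal   = np.array([0.05, 0.9, 0.05])  # how the agent wants the world to be

    def kl_divergence(p, q):
        # D_KL(p || q): one possible measure of how far the goal p is from the belief q
        return float(np.sum(p * np.log(p / q)))

    print(kl_divergence(goal, belief))  # shrinks as the agent moves "towards" its goal

Under this toy picture, acting so that the belief distribution shifts toward the goal distribution is what "moving towards a goal" means, something a bare reward or utility function does not directly express.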

However, I’m dissatisfied by the idea of defining a world-model as a generative model over observations. It feels analogous to defining a parliament as a generative model over laws. Yes, technically we can think of parliaments as stochastically outputting laws, but actually the interesting part is in how they do so. In the case of parliaments, you have a process of internal [...]

---

First published:
February 2nd, 2026

Source:
https://www.lesswrong.com/posts/MEkafPJfiSFbwCjET/on-goal-models

---



Narrated by TYPE III AUDIO.
