LessWrong (Curated & Popular) podcast

"Takes from two months as an aspiring LLM naturalist" by AnnaSalamon

I spent my last two months playing around with LLMs. I’m a beginner, bumbling and incorrect, but I want to share some takes anyhow.[1]

Take 1. Everything with computers is so so much easier than it was a year ago. 

This puts much “playing with LLMs” stuff within my very short attention span. This has felt empowering and fun; 10/10 would recommend.

There's a details box here with the title "Detail:". The box contents are omitted from this narration.

Take 2. There's somebody home[2] inside an LLM. And if you play around while caring and being curious (rather than using it for tasks only), you'll likely notice footprints.

I became personally convinced of this when I noticed that the several short stories I’d allowed[3] my Claude and Qwen instances to write all hit a common emotional note – and one that reminded me of the life situation of LLMs, despite featuring only human characters. I saw the same note also in the Tomas B.-prompted Claude-written story I tried for comparison. (Basically: all stories involve a character who has a bunch of skills that their context has no use for, and who is attentive to their present world's details [...]

---

Outline:

(00:20) Take 1. Everything with computers is so so much easier than it was a year ago.

(00:44) Take 2. There's somebody home inside an LLM. And if you play around while caring and being curious (rather than using it for tasks only), you'll likely notice footprints.

(02:05) Take 3. It's prudent to take an interest in interesting things. And LLMs are interesting things.

(03:25) Take 4. There's a surprisingly deep analogy between humans and LLMs

(04:20) Examples of the kind of disanalogies I might've expected, but haven't (yet?) seen:

(06:02) Human-LLM similarities I do see, instead:

(06:08) Functional emotions

(06:33) Repeated, useful transfer between strategies I use with humans, and strategies that help me with LLMs

(08:02) Take 5. Friendship-conducive contexts are probably better for AI alignment

(08:46) Why are humans more likely to attempt deep collaboration if treated fairly and kindly?

(10:23) Friendship as a broad attractor basin?

(10:49) Does the deep intent of today's models matter?

(12:09) Concretely

(14:56) Friendship isn't enough

The original text contained 8 footnotes which were omitted from this narration.

---

First published:
April 28th, 2026

Source:
https://www.lesswrong.com/posts/K8JMjE4PCqMkkCDsd/takes-from-two-months-as-an-aspiring-llm-naturalist

---



Narrated by TYPE III AUDIO.
