
"llm assistant personas seem increasingly incoherent (some subjective observations)" by nostalgebraist
April 29, 2026
Duration: 15:46
(This was originally going to be a "quick take" but then it got a bit long. Just FYI.)
There's this weird trend I perceive with the personas of LLM assistants over time. It feels like they're getting less "coherent" in a certain sense, even as the models get more capable.
When I read samples from older chat-tuned models, it's striking how "mode-collapsed" they feel relative to recent models like Claude Opus 4.6 or GPT-5.4.[1]
This is most straightforwardly obvious when it comes to textual style and structure: outputs from older models feel more templated and generic, with less variability in sentence/paragraph length, and have a tendency to feel as though they were written by someone who's "merely going through the motions" of conversation rather than deeply engaging with the material. There are a lot fewer of the sudden pivots you'll often see with recent models, the "wait"s and "a-ha"s and "actually, I want to try something completely different"s.[2]
And I think this generalizes beyond mere style: there's a similar quality to the personality I see in the outputs. The older models can display a surprising behavioral range (relative to naive expectations based on default-assistant-basin behavior), but even across that [...]
The original text contained 7 footnotes which were omitted from this narration.
---
First published:
April 28th, 2026
Source:
https://www.lesswrong.com/posts/f5DKLsTsRRhbipH4r/llm-assistant-personas-seem-increasingly-incoherent-some
---
Narrated by TYPE III AUDIO.