
"Weight-Sparse Circuits May Be Interpretable Yet Unfaithful" by jacob_drori
February 13th, 2026
Duration: 26:57
TLDR: Recently, Gao et al. trained transformers with sparse weights and introduced a pruning algorithm to extract circuits that explain performance on narrow tasks. I replicate their main results and present evidence suggesting that these circuits are unfaithful to the model's “true computations”.
This work was done as part of the Anthropic Fellows Program under the mentorship of Nick Turner and Jeff Wu.
Introduction
Recently, Gao et al. (2025) proposed an exciting approach to training models that are interpretable by design. They train transformers in which only a small fraction of the weights are nonzero, and find that pruning these sparse models on narrow tasks yields interpretable circuits. Their key claim is that these weight-sparse models are more interpretable than ordinary dense ones, with smaller task-specific circuits. Below, I reproduce the primary evidence for these claims: at a given task loss, weight-sparse models do tend to yield smaller circuits than dense models, and those circuits also look interpretable.
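To make the training setup concrete, here is a minimal PyTorch-style sketch of one way to keep a transformer weight-sparse during training: after every optimizer step, re-project each weight matrix onto its largest-magnitude entries. The function name and the keep_fraction parameter are illustrative assumptions; Gao et al.'s exact sparsification procedure may differ.

```python
import torch

def project_to_topk(model: torch.nn.Module, keep_fraction: float = 0.01) -> None:
    """Zero all but the largest-magnitude entries of each weight matrix.

    Illustrative sketch only; Gao et al.'s actual sparsification scheme
    may differ. Intended to be called after every optimizer step so the
    model stays weight-sparse throughout training.
    """
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() < 2:  # leave biases and layernorm parameters dense
                continue
            k = max(1, int(keep_fraction * param.numel()))
            # kthvalue returns the k-th *smallest* value, so count from the top.
            cutoff = param.abs().flatten().kthvalue(param.numel() - k + 1).values
            param.mul_((param.abs() >= cutoff).to(param.dtype))
```

Calling this after each step keeps roughly keep_fraction of each matrix's entries nonzero; a full implementation would also need to handle optimizer state for the zeroed weights.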
However, there are reasons to worry that these results don't imply that we're capturing the model's full computation. For example, previous work [1, 2] found that similar masking techniques can achieve good performance on vision tasks even when applied to a [...]
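For context on those masking techniques, here is a hypothetical sketch of mask-based pruning: learn a continuous gate over every weight, train the gates to preserve task loss while penalizing how many stay open, then threshold to get a discrete circuit. All names here (prune_to_circuit, loss_fn, batches) are my own illustrative assumptions, not the authors' API.

```python
import torch
from torch.func import functional_call

def prune_to_circuit(model, batches, loss_fn, sparsity_coeff=1e-3, steps=500):
    """Learn per-weight gates that keep task loss low while zero-ablating
    as many weights as possible. Hypothetical sketch; Gao et al.'s
    pruning algorithm differs in its details.
    """
    params = {n: p.detach() for n, p in model.named_parameters()}
    # Start gates near 1 (sigmoid(3) ~ 0.95) so pruning begins from the full model.
    logits = {n: torch.full_like(p, 3.0, requires_grad=True) for n, p in params.items()}
    opt = torch.optim.Adam(logits.values(), lr=1e-2)
    for _, (inputs, targets) in zip(range(steps), batches):
        gates = {n: torch.sigmoid(l) for n, l in logits.items()}
        masked = {n: params[n] * gates[n] for n in params}
        out = functional_call(model, masked, (inputs,))
        loss = loss_fn(out, targets) + sparsity_coeff * sum(g.sum() for g in gates.values())
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Hard-threshold the gates: weights whose gates close are zero-ablated.
    return {n: torch.sigmoid(l) > 0.5 for n, l in logits.items()}
```

The sum-of-gates penalty is a standard differentiable stand-in for counting nonzero weights; the faithfulness worry above is precisely that an objective like this can find a low-loss mask without tracking what the unpruned model actually computes.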
---
Outline:
(00:36) Introduction
(03:03) Tasks
(03:16) Task 1: Pronoun Matching
(03:47) Task 2: Simplified IOI
(04:28) Task 3: Question Marks
(05:10) Results
(05:20) Producing Sparse Interpretable Circuits
(05:25) Zero ablation yields smaller circuits than mean ablation
(06:01) Weight-sparse models usually have smaller circuits
(06:37) Weight-sparse circuits look interpretable
(09:06) Scrutinizing Circuit Faithfulness
(09:11) Pruning achieves low task loss on a nonsense task
(10:24) Important attention patterns can be absent in the pruned model
(11:26) Nodes can play different roles in the pruned model
(14:15) Pruned circuits may not generalize like the base model
(16:16) Conclusion
(18:09) Appendix A: Training and Pruning Details
(20:17) Appendix B: Walkthrough of pronouns and questions circuits
(22:48) Appendix C: The Role of Layernorm
The original text contained 6 footnotes which were omitted from this narration.
---
First published:
February 9th, 2026
Source:
https://www.lesswrong.com/posts/sHpZZnRDLg7ccX9aF/weight-sparse-circuits-may-be-interpretable-yet-unfaithful
---
Narrated by TYPE III AUDIO.