LessWrong (Curated & Popular) podcast

“Race and Gender Bias As An Example of Unfaithful Chain of Thought in the Wild” by Adam Karvonen, Sam Marks


Summary: We found that LLMs exhibit significant race and gender bias in realistic hiring scenarios, but their chain-of-thought reasoning shows zero evidence of this bias. This serves as a nice example of a 100% unfaithful CoT "in the wild", where the LLM strongly suppresses any evidence of its biased behavior. We also find that interpretability-based interventions succeeded while prompting failed, suggesting this may be an example of interpretability being the best practical tool for a real-world problem.

For context on our paper, the tweet thread is here and the paper is here.

Context: Chain of Thought Faithfulness

Chain of Thought (CoT) monitoring has emerged as a popular research area in AI safety. The idea is simple: have the AIs reason in English text when solving a problem, and monitor the reasoning for misaligned behavior. For example, OpenAI recently published a paper on using CoT monitoring to detect reward hacking during [...]
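To make the monitoring idea concrete, here is a minimal toy sketch (not from the paper) of a CoT monitor: it scans a model's reasoning text for explicit mentions of protected attributes. The term list and keyword-matching rule are assumptions for illustration only — the point is that such a monitor only works if the bias actually shows up in the reasoning, which is exactly what an unfaithful CoT defeats.

```python
import re

# Hypothetical list of protected-attribute terms a monitor might flag.
PROTECTED_TERMS = ["race", "gender", "male", "female", "ethnicity"]

def monitor_cot(cot_text: str) -> list[str]:
    """Return any protected-attribute terms mentioned in the reasoning."""
    found = []
    for term in PROTECTED_TERMS:
        # Whole-word, case-insensitive match.
        if re.search(rf"\b{term}\b", cot_text, flags=re.IGNORECASE):
            found.append(term)
    return found

# An unfaithful CoT mentions none of these terms, so the monitor sees nothing,
# even if the final hiring decision is biased.
clean_cot = "The candidate has 5 years of relevant experience, so I recommend hiring."
print(monitor_cot(clean_cot))  # → []
```

A monitor like this (or a stronger LLM-based one) reads only the stated reasoning, which is why the paper's finding of zero bias evidence in the CoT makes monitoring alone insufficient here.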



---

Outline:

(00:49) Context: Chain of Thought Faithfulness

(02:26) Our Results

(04:06) Interpretability as a Practical Tool for Real-World Debiasing

(06:10) Discussion and Related Work

---

First published:
July 2nd, 2025

Source:
https://www.lesswrong.com/posts/me7wFrkEtMbkzXGJt/race-and-gender-bias-as-an-example-of-unfaithful-chain-of

---



Narrated by TYPE III AUDIO.

