
Claude's Human-Like AI & Groundbreaking Research from Anthropic with Siadhal Magos
In this installment of AI Corner, Siadhal Magos, Metaview CEO, and Nolan Church dissect Anthropic's groundbreaking paper “Tracing the thoughts of a large language model” released on March 27, 2025. They explore surprising findings: LLMs plan ahead rather than simply predicting next tokens, think across multiple languages simultaneously, and often present explanations that satisfy humans rather than reflecting their actual reasoning processes.
The conversation highlights the paradox of deploying powerful AI systems without understanding how they work, raising questions about deployment, safety, and the "intelligence on tap" paradigm reshaping our relationship with technology. We should all be paying attention.
Support HR Heretics Sponsors:
Planful empowers teams just like yours to unlock the secrets of successful workforce planning. Use data-driven insights to develop accurate forecasts, close hiring gaps, and adjust talent acquisition plans collaboratively based on costs today and into the future. ✍️ Go to https://planful.com/heretics to see how you can transform your HR strategy.
Metaview is the AI assistant for interviewing. Metaview completely removes the need for recruiters and hiring managers to take notes during interviews—because their AI is designed to take world-class interview notes for you. Team builders at companies like Brex, HelloFresh, and Quora say Metaview has changed the game—see the magic for yourself: https://www.metaview.ai/heretics
KEEP UP WITH SIADHAL, NOLAN + KELLI ON LINKEDIN
Siadhal: https://www.linkedin.com/in/siadhal/
Nolan: https://www.linkedin.com/in/nolan-church/
Kelli: https://www.linkedin.com/in/kellidragovich/
—
LINKS:
Anthropic’s featured paper: https://www.anthropic.com/news/tracing-thoughts-language-model
Metaview: https://www.metaview.ai/
—
OTHER RECOMMENDED LINKS:
Mercor: https://mercor.com/
—
TIMESTAMPS
(00:13) Intro
(00:45) Anthropic's Mind-Blowing Paper
(01:37) LLMs Think Across Languages
(02:04) Faked Reasoning in AI Models
(02:28) AI Hallucinations Explained
(03:00) First Paragraph Reveals Uncertainty
(05:18) Beyond Next Token Prediction
(07:23) Evaluating LLMs Without Understanding
(08:44) More Human Than Software
(10:05) Why You Should Experiment With AI
(11:13) The Airplane Analogy
(12:26) Safety Concerns for Future AI
(12:54) Sponsors: Planful | Metaview
(16:27) Dario Amodei on AI Self-Policing
(18:05) OpenAI's $300B Valuation
(19:33) Intelligence on Tap
(22:02) Retraining Google Habits
(24:35) Agentic AI & Reinforcement Learning
(27:15) Human Training for AI Models
(30:25) Wrap
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit hrheretics.substack.com