The Neuron: AI Explained podcast

This AI Grows a Brain During Training (Pathway’s AI w/ Zuzanna Stamirowska)


Imagine an AI that doesn't just output answers: it remembers, adapts, and reasons over time like a living system. In this episode of The Neuron, Corey Noles and Grant Harvey sit down with Zuzanna Stamirowska, CEO and co-founder of Pathway, to break down the world's first post-Transformer frontier model: BDH, the Dragon Hatchling architecture.


Zuzanna explains why current language models are stuck in a “Groundhog Day” loop — waking up with no memory — and how Pathway’s architecture introduces true temporal reasoning and continual learning.


We explore:

• Why Transformers lack real memory and time awareness

• How BDH uses brain-like neurons, synapses, and emergent structure

• How models can “get bored,” adapt, and strengthen connections

• Why Pathway sees reasoning — not language — as the core of intelligence

• How BDH enables infinite context, live learning, and interpretability

• Why gluing two trained models together actually works in BDH

• The path to AGI through generalization, not scaling

• Real-world early adopters (Formula 1, NATO, French Postal Service)

• Safety, reversibility, checkpointing, and building predictable behavior

• Why this architecture could power the next era of scientific innovation


From brain-inspired message passing to neural structures that emerge during training, this is one of the most ambitious rethinks of AI architecture since the Transformer itself.
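
To make the "strengthen connections" idea above concrete, here is a minimal sketch of a Hebbian-style update, in which a fast synaptic state matrix is reinforced whenever connected neurons fire together and decays otherwise. This is an illustration of the general principle discussed in the episode, not Pathway's actual BDH implementation; the `step` function, the `sigma` state, and the learning-rate and decay values are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch only (an assumption, not Pathway's BDH code):
# a Hebbian-style update where memory lives in a fast synaptic state
# matrix instead of a fixed context window.

rng = np.random.default_rng(0)
n_neurons = 64

# Slow weights, assumed already learned during training.
W = rng.standard_normal((n_neurons, n_neurons)) / np.sqrt(n_neurons)

# Fast synaptic state: starts empty and evolves as the model runs.
sigma = np.zeros((n_neurons, n_neurons))

def step(x, sigma, lr=0.1, decay=0.99):
    """One message-passing step with a Hebbian state update."""
    # Sparse, non-negative activations via ReLU.
    y = np.maximum(W @ x + sigma @ x, 0.0)
    # Hebb's rule: neurons that fire together strengthen their synapse;
    # decay lets unused connections fade over time.
    sigma = decay * sigma + lr * np.outer(y, x)
    return y, sigma

x = np.maximum(rng.standard_normal(n_neurons), 0.0)
for _ in range(5):
    x, sigma = step(x, sigma)
print("nonzero synapses after 5 steps:", np.count_nonzero(sigma))
```

Because the state update here is local and additive, it is straightforward to checkpoint or roll back, which hints at why reversibility and checkpointing come up in the episode's safety discussion.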


If you want a window into what comes after LLMs, this interview is essential.


Subscribe to The Neuron newsletter for more interviews with the leaders shaping the future of work and AI: https://theneuron.ai
