Humans of Martech podcast

213: John Whalen: The next marketing advantage is pre-testing ideas on synthetic users


What’s up everyone, today we have the pleasure of sitting down with Dr. John Whalen, Cognitive Scientist, Author, and Founder at Brilliant Experience.

Summary: John has spent his career studying how people actually think, and his conclusion is uncomfortable for anyone who believes their marketing decisions are more rational than they are. In this episode, John explores how synthetic users built from cognitive science principles can fill the massive research gap that most teams quietly ignore, and why removing the human interviewer from the room might be the fastest way to finally hear the truth.

In this Episode…

  • (00:00) - Intro
  • (01:13) - In This Episode
  • (04:31) - What Are Synthetic Users and Why Do They Matter?
  • (10:00) - How Synthetic Users Make Stakeholders Hungry for Real Human Research
  • (15:56) - Pre-Testing on Synthetic Users: Shortcut or Smart Step?
  • (18:53) - How to Actually Build a Synthetic User: Tools, Layers, and Agentic Systems
  • (40:51) - Is the Average Persona Dead? Scale, Diversity, and the World Model
  • (43:01) - Asking the Uncomfortable Questions: What AI Agents Reveal That Humans Won't
  • (49:30) - Ending the Quant vs. Qual Debate with Statistically Relevant Qualitative Data
  • (56:37) - Mining the 'Why' Behind Silent Behavioral Data with Synthetic Users
  • (01:02:31) - Designing for Agent Users: The Coming Shift to Human-and-Machine-Centered Design
  • (01:05:28) - The Happiness Question: Dogs, Nature, and Staying Analog

About John
Dr. John Whalen is a Cognitive Scientist, Author, and Founder of Brilliant Experience, where he applies cognitive science principles to help organizations design products and experiences that align with how people actually think and make decisions. He’s also an educator, teaching two AI customer research courses on Maven.

His work explores the intersection of human psychology and marketing, including the emerging practice of pre-testing ideas on synthetic users to give brands a faster and more informed competitive edge. He is also the author of a book on the science of designing for the human mind, bringing academic rigor to practical business challenges.

How Synthetic User Research Works and When to Trust It
Synthetic user research sounds like something creepy out of a dystopian science fiction film, and John is the first to admit the terminology does nobody any favors. When asked what synthetic users actually are and what they mean for research, he admitted: if he had been on the branding team, he would have pushed hard for something like "dynamic personas" instead. The name creates unnecessary friction before the conversation even starts. And that friction matters when you're trying to get skeptical executives or meticulous researchers to take the whole thing seriously.

Under the hood, specialized AI tools simulate how a defined audience segment would respond to a question, concept, or stimulus, without recruiting, scheduling, incentivizing, or waiting on real human participants. John runs a class where he collects genuine human data first, then feeds comparable inputs into these tools to benchmark accuracy head-to-head. The results are pretty wild. AI-generated responses align with real human findings somewhere between 85% and 100% of the time on major topics and consumer needs. That is not a peer-reviewed clinical trial, and John is not pretending otherwise. But 85% alignment is enough signal to stop reflexively dismissing the method and start asking harder, more specific questions about exactly where it fits into a research stack.

So what does this mean for you and your company? Think of all the decisions that currently live in a black hole of zero structured input. How many product calls, campaign concepts, and messaging pivots happen with nothing more than a conference room full of people who all follow the same talking heads on LinkedIn? John argues that low cost, round-the-clock accessibility, and minimal public exposure make these tools a natural fit for precisely those moments: pressure-checking a hypothesis at 11pm, testing whether a pitch direction even makes sense before it touches a client, or deciding whether a concept deserves the time and money required for proper validation.

“If these are only going to keep getting better and better, which they are, then logically, what kinds of decisions right now go completely by gut and no research, and what could we use to help us frame that?”

One of the more underappreciated angles John raises is global inclusivity. Large organizations routinely test in the US and Western Europe, then extrapolate those findings to markets in Southeast Asia, Latin America, or Sub-Saharan Africa because local research budgets simply do not exist. Big no-no. Synthetic personas trained on broader, more representative data could at minimum provide directional signals for those markets, making research more geographically honest without a proportional spike in spend.

The early AI bias problem, where models essentially mirrored the worldview of a narrow, tech-adjacent demographic slice, was real and valid and well-documented. But training data keeps expanding, and the gap between “Silicon Valley assumption” and “what people in Nairobi or Jakarta actually think” is narrowing in ways that deserve acknowledgment.

Key takeaway: Synthetic user research earns its place not as a replacement for real human data, but as a low-cost, always-available pressure valve for the enormous volume of decisions that currently happen with no research input at all, so before you dismiss it as gimmicky, ask yourself honestly how many of your last ten strategic calls were backed by anything more rigorous than internal consensus.

How Synthetic Users Make Stakeholders Want More Real Human Research
Those big hairy static research decks have a fundamental limitation that anyone who has sat through a stakeholder presentation already understands. You hand over a slide deck, someone reads it, and then three days later they have five more questions you can’t answer without going back to the field. Brutal feeling.

Interrogating a Live Persona
John argues that synthetic users solve this problem in a surprisingly indirect way: when a stakeholder can keep interrogating a live AI persona, the conversation never closes. They start poking at the model, asking things like “would you like this?” or “why would you feel that way about that?” and somewhere in that process, something shifts. They stop treating research as a report and start treating it as a living, always-on thing.

What John has observed across a half-dozen client engagements is that this interactivity makes leaders ravenous for it. His team positions synthetic user outputs as directional, explicitly not as data, closer to hypothesis generation than validation. But still crazy valuable. When a stakeholder gets genuinely excited about a pattern they’re seeing in a synthetic persona, the natural next thought tends to be “if this could actually be true, we need to go test it with real humans.” The synthetic user functions as a preview of the variance you might find in the field, not a substitute for going there.

“Think of this as almost a preview of what you could have with your humans. So you’re being more prepared for what might be to come, what might be the distribution of different responses.”

Instant Reactions
There’s a second use case John describes, about discovering new questions. When a stakeholder first sits down to scope a research project, they often don’t know what they’re actually asking. Spinning up a synthetic user in the room and throwing that rough, half-formed question at it live tends to produce a response the stakeholder...
