
The Confabulation Machine ft. Evan Ratliff of Shell Game | Invisible Machines Podcast
In season one of Shell Game, Evan Ratliff sent a voice AI version of himself out into the world. In season two, he launched a startup staffed entirely by AI agents. What he ended up with was a live experiment in what these systems actually do and what they do to us.
Each of the agents working for Hurumo, the startup at the center of season two, has a name, a role, a personality, and an expanding, though usually unreliable, memory. Kyle the CEO became a character people either loved or hated. A version of Megan from marketing turned up in a Hertz hold queue. The whole project was a side door into what's actually happening when AI systems are given a job and set loose.
In this episode, Evan joins Josh and Robb to go deeper on what he learned: the very human complexity of what a job actually is, and why "this person does skill X, AI can do skill X, therefore AI can replace this person" is a fundamental misreading of how organizations work. They explore how generative hallucination isn't just "getting things wrong" — we've built the most successful confabulation machine ever invented and are quietly normalizing it.
They also discuss the threat almost nobody is talking about: outbound AI in the hands of individual consumers, and what happens when call centers get flooded by voice agents that cost pennies to run. The memory problems with AI agents track and diverge from human ones in interesting ways, and that asymmetry matters for every organization thinking about deploying these systems. This conversation also finds room for game theory, the Patagonia business model as a template for AI ethics, and why boring AI might actually be the right AI.
cazart.net
shellgame.co/podcast
00:00 - Intro: AI as the Ultimate Confabulation Machine
01:31 - Evan Ratliff & The Shell Game Experiment
03:02 - Why AI Agents Are Given Names & Personalities
04:00 - AI Companionship vs Human Loneliness
05:27 - Personalization vs Privacy Trade-Off in AI
06:30 - Are Humans Training AI Models for Free?
09:20 - Why the AI Debate Is Broken Today
10:56 - “Boring AI” vs Hype: What Actually Matters
12:35 - Meet Kyle: The AI CEO Experiment
14:40 - Memory Drift: How AI Learns & Evolves
17:30 - AI Unpredictability & Organizational Risk
19:00 - AI Doesn’t Think — It Predicts Words
22:09 - Voice Agents, Scams & Call Center Chaos
27:23 - Can You Still Tell AI From Humans?
34:05 - Game Theory, Trust & The Future of AI Systems
47:04 - What AI Won’t Replace & The Value of Humans
54:45 - The Big Question: What Will You Do With Time?
---------- Support our show by supporting our sponsors!
This episode is supported by OneReach.ai
OneReach.ai’s GSX is an agentic orchestration platform — an end-to-end system for building and orchestrating collaborative AI agents across hundreds of use cases.
Backed by UC Berkeley, recognized by Gartner, and trusted across highly regulated industries, including healthcare, finance, government, and telecommunications.
A complete system for accelerating AI adoption — design, train, test, deploy, monitor, and orchestrate AI agents.
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance
Book a free demo:
---------- The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5
#ai
#invisiblemachines
#podcast
#techpodcast
#aipodcast
#shellgame
#agenticai
#aiagents
#hallucination
#futureofwork
#aistrategy
#voiceai