
If the Public Knew This About AI, They'd Panic | Ep 286 with Dr. Roman Yampolskiy
In this episode, Daniel and Kate sit down with Dr. Roman Yampolskiy, one of the world’s leading researchers on AI safety, superintelligence, and the existential risks no one in Silicon Valley wants to talk about. His work has been featured by BBC, MSNBC, New Scientist, and dozens of global outlets — and his message is simple: we are racing toward something we don’t understand.
Roman explains why today’s AI models already outperform top PhDs, why governments are pushing for speed over safety, and why the next generation of AI might quietly outgrow human control long before anyone notices. This is not sci-fi. This is the inside view from someone who has spent two decades studying how intelligent systems break, behave, and escape oversight.
He also shares the personal story behind his obsession with AI risk, how he rose from an immigrant student to a world authority, and why fame has become a “productivity curse” for researchers sounding the alarm.
Key Discussion Points:
Roman opens with the truth that underpins his entire career: the people building AI don’t actually understand how it works — and they’re not slowing down. He explains how the U.S. government conflated “AI safety” with political correctness topics, entirely missing the existential-risk conversation and accelerating the race with no guardrails.
He breaks down why “losing control” won’t look dramatic — the world may appear normal for years as a superintelligence quietly secures resources, learns human behavior, and waits. He explains why AI trained on human data inherits not only our brilliance but our flaws, why Sam Altman understands the risks but can’t slow down, and why AGI is already partially here depending on your definition.
Roman dives into job loss, economic abundance, and whether anyone should still go to college. He shares how AI agents differ from tools, why they’re inherently dangerous, and the real threat behind humanoid robots (hint: it’s not their physical bodies). He explores global competition between the U.S. and China, the inevitability of AGI’s rise, and why cooperation is never as simple as people imagine.
Daniel steers the conversation into Roman’s personal journey — the sci-fi spark that led him into AI, how cybersecurity pulled him into safety research, and why rising fame has actually damaged his productivity. Roman reveals the bizarre messages he gets from conspiracy theorists and explains the ethical nightmare ahead: If AI becomes conscious, do we owe it rights?
Takeaways:
Humanity is racing toward a future it doesn’t fully comprehend. While AI may create abundance, cure disease, and automate nearly every job, it also introduces unprecedented existential risks — ones we are not structurally or politically prepared for. Roman emphasizes that controlling superintelligence remains an unsolved problem, and failing to solve it could make humans “irrelevant by default.” Yet he remains hopeful: with enough time and caution, we can still build systems that elevate humanity instead of replacing it.
Closing Thoughts:
Roman’s wisdom lands as both a warning and a call for clarity. The future of AI isn’t just about innovation — it’s about survival, alignment, and responsibility. And in a world sprinting toward intelligence we can’t undo, voices like his are not optional — they’re essential.
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.