Outcomes Rocket podcast

Why AI Systems Fail When We Assume They Behave Like Software with Steve Wilson, Chief AI & Product Officer for Exabeam

12/18/2025
This podcast is brought to you by Outcomes Rocket, your exclusive healthcare marketing agency. Learn how to accelerate your growth by going to outcomesrocket.com.

AI security is no longer optional; it's the foundation that determines whether innovation in healthcare will thrive or fail. In this episode, Steve Wilson, Chief AI & Product Officer for Exabeam and author, discusses the hidden vulnerabilities inside modern AI systems, why traditional software assumptions break down, and how healthcare must rethink safety, trust, and security from the ground up. He explains the risks of prompt injection and indirect prompt injection, highlights the fragile nature of AI “intuition,” and compares securing AI to training unpredictable employees rather than testing deterministic code. Steve also explores issues such as supply chain integrity, output filtering, trust boundaries, and the growing need for continuous evaluation rather than one-time testing. Finally, he shares stories from his early career at Sun Microsystems, Java’s early days, startup lessons from the 90s, and how modern AI agents are reshaping cybersecurity operations.

Tune in and learn how today’s most advanced AI systems can be both powerful and dangerously gullible, and what it takes to secure them!

Resources

Connect with and follow Steve Wilson on LinkedIn.
Follow Exabeam on LinkedIn and visit their website!
Buy Steve Wilson’s book, The Developer's Playbook for Large Language Model Security, here.

More episodes from "Outcomes Rocket"