
Omar Santos is a Distinguished Engineer directing AI Security at Cisco. He's here for a frank conversation about the realities of security in the agentic era. As more software is created on the fly by AI agents at the request of humans, security has to become an ever-present layer. Security will be built into complete agent runtime environments and will require constant human oversight and intervention, augmented by the ability to simulate outcomes to avoid risk.
Omar is also the Co-Chair of the Coalition for Secure AI, and these are the things he's thinking about on a daily basis. He sits down with Robb and Josh at the end of a travel blitz that included work surrounding OpenAI's Stargate Project, a four-year, $500 billion plan for new AI infrastructure in the United States. The trio discusses how the ongoing training of models and the rising demand for inference continue to push the need for security across burgeoning technology ecosystems.
---------- Support our show by supporting our sponsors!
This episode is supported by OneReach.ai — creators of Generative Studio X (GSX), the first complete AI Agent Runtime Environment (V1 circa 2019).
Forged over a decade of R&D and proven in 10,000+ deployments, GSX lets enterprises design, build, and orchestrate secure, scalable AI agents and systems. Trusted across healthcare, finance, government, and telecom.
- Use any AI models
- Build and deploy intelligent agents fast
- Create guardrails for organizational alignment
- Enterprise-grade security and governance
- Avoid vendor lock-in
Backed by UC Berkeley and recognized by Gartner.
Before you build or buy another AI solution, think about getting an AI system.
---------- The revised and significantly updated second edition of our bestselling book about succeeding with AI agents, Age of Invisible Machines, is available everywhere: Amazon — https://bit.ly/4hwX0a5
Chapters -
00:00 - Intro and episode setup
00:33 - Meet Omar Santos and his role in AI security
01:00 - Security as the new programming
02:20 - Coalition for Secure AI and security as a new language
04:45 - Identity, access, and AI agents
06:09 - Scaling models and mega data centers
09:04 - Training vs inference and the compute explosion
12:54 - Budgets, compute, and hybrid human–AI security teams
15:16 - Checklists, guardrails, and spec-driven development
20:00 - From IDEs to agent swarms and background agents
25:19 - CodeGuard, rules for coding agents, and secure SDLC
32:00 - Why doing nothing is the biggest AI security risk
39:30 - Validating AI, AI safety levels, and open source dilemmas
46:00 - Private networks, insider AI agents, and embedded security
51:00 - Simulation, digital twins, and business-wide risk modeling
#InvisibleMachines
#Podcast
#TechPodcast
#AIPodcast
#AI
#AgenticAI
#AIAgents
#DigitalTransformation
#Cybersecurity
#AIInfrastructure
#AIOrchestration
#AIManagement
#TechLeadership
#Innovation
#ResponsibleAI
#AIStandards
#Cisco
#OpenAI
#StargateProject
#AISecurity
#Technology