Tech Talks Daily podcast

3503: The Next Security Challenge Created by AI Coding Tools


What happens when AI adoption surges inside companies faster than anyone can track, and the data that fuels those systems quietly slips out of sight? That question sat at the front of my mind as I spoke with Cyberhaven CEO Nishant Doshi, fresh from publishing one of the most detailed looks at real-world AI usage I have seen. This wasn't a report built on opinions or surveys. It was built on billions of actual data flows across live enterprise environments, which made our conversation feel urgent from the very first moment.

Nishant explained how AI has moved out of the experimental phase and into everyday workflows at a speed few anticipated. Employees across every department are turning to AI tools not as a novelty but as a core part of how they work. That shift has delivered huge productivity gains, yet it has also created a new breed of hidden risk. Sensitive material isn't just being uploaded through deliberate actions. It is being blended, remixed, and moved in ways that older security models cannot understand. Hearing him describe how this happens in fragments rather than files made me rethink how data exposure works in 2025.

We also dug into one of the most surprising findings in Cyberhaven's research. The biggest AI power users inside companies are not executives or early-career talent but mid-level employees. They know where the friction is, and they are under pressure to deliver quickly, so they experiment freely. That experimentation is driving progress, but it is also widening the gap between how AI is used and how data is meant to be protected. Nishant shared how that trend is now pushing sensitive code, R&D material, health information, and customer data into tools that often lack proper controls.

Another moment that stood out was his explanation of how developers are reshaping their work with AI coding assistants. The growth in platforms like Cursor is extraordinary, yet the risks are just as large. Code that forms the heart of an organisation's competitive strength is frequently pasted into external systems without full awareness of where it might end up. This creates a situation where innovation and exposure rise together, and older security frameworks simply cannot keep pace.

Throughout the conversation, Nishant returned to the importance of visibility. Companies cannot set fair rules or safe boundaries if they cannot see what is happening at the point where data leaves the user's screen. Traditional controls were built for a world of predictable patterns. AI has broken those patterns apart. In his view, modern safeguards need to sit closer to employees, understand how fragments are created, and guide people toward safer workflows without slowing them down.

By the time we reached the end of the interview, it was clear that AI governance is no longer a strategic nice-to-have. It is becoming a daily operational requirement. Nishant believes employers must create a clear path forward that balances freedom with control, and give teams the tools to do their best work without unknowingly putting their organisations at risk. His message wasn't alarmist. It was practical, grounded, and shaped by years working at the intersection of data and security.

So here is the question I would love you to reflect on. If AI is quickly becoming the engine of productivity across every department, what would your organisation need to change today to keep its data safe tomorrow? And how much visibility do you honestly have over where your most sensitive information is going right now? I would love to hear your thoughts.

Useful Links

Tech Talks Daily is Sponsored by NordLayer:

Get the exclusive Black Friday offer: 28% off NordLayer yearly plans with the coupon code: techdaily-28. Valid until December 10th, 2025. Try it risk-free with a 14-day money-back guarantee.
