
We’re diving deep into AI at scale with NetApp experts Bobby Oomen and David Arnette. From NVIDIA SuperPod (think AI factories powering massive LLM training) to FlexPod solutions that bring inference into everyday enterprise workloads, we unpack what’s happening at the cutting edge of AI infrastructure.
You’ll hear how NetApp and NVIDIA are collaborating to solve one of AI’s biggest challenges—data management—with tools like SnapMirror, FlexCache, and FlexClone. We also explore why inference is becoming just as important as (if not more important than) training, and what that shift means for enterprises looking to integrate AI into their operations.
Whether you’re curious about NVIDIA Cloud Partner (NCP) offerings, KV cache innovations, or how Cisco and NetApp are pushing FlexPod into the AI era, this episode is packed with insights you won’t want to miss.
Tune in to learn how enterprises can scale AI securely, efficiently, and flexibly—with NetApp at the core.