
Discover how mixture‑of‑experts (MoE) architecture enables smarter AI models without a proportional increase in compute and cost. Using vivid analogies and real-world examples, NVIDIA’s Ian Buck breaks down MoE models, their hidden complexities, and why extreme co-design across compute, networking, and software is essential to realizing their full potential. Learn more: https://blogs.nvidia.com/blog/mixture-of-experts-frontier-models/
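
The core idea behind that compute saving is sparse routing: a learned router sends each token to only a few of the model's experts, so per-token compute grows with the number of experts actually used, not the total number available. Below is a minimal sketch of top-k MoE routing in plain Python with NumPy; the expert count, top-k value, and the moe_forward helper are illustrative assumptions for this toy example, not NVIDIA's implementation.

import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 8, 4, 2
# Each "expert" here is just a small weight matrix standing in for a
# feed-forward network; in a real model these are full MLP blocks.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))  # learned in practice

def moe_forward(token: np.ndarray) -> np.ndarray:
    """Route one token to its top-k experts and mix their outputs."""
    logits = token @ router                  # score every expert
    top = np.argsort(logits)[-top_k:]        # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the chosen experts
    # Only the selected experts are evaluated; the others cost nothing
    # for this token, which is where the compute savings come from.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(d_model)
print(moe_forward(token).shape)  # (8,)

With top_k fixed at 2, adding more experts grows the model's capacity (and parameter count) while per-token compute stays roughly flat, which is exactly the trade-off the episode explores, along with the networking and software co-design needed to make that routing fast at scale.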