The MonkCast podcast

Introduction to local AI models (with AMD's Anush Elangovan)

19.11.2025

James Governor sits down with Anush Elangovan, VP of AI at AMD, to dig into the fast-evolving world of local LLMs, edge hardware, and the future of AI-powered developer experience.

They cover:

  • Why local LLMs are exploding in adoption: privacy, data sovereignty, and on-device intelligence
  • Running 120B-parameter models on AMD Strix Halo laptops
  • The software stack Anush uses: ROCm, Llama.cpp, Ollama, ComfyUI, PyTorch & more
  • How AMD is approaching AI-driven developer tools, coding agents, and predictive ops
  • The role of custom silicon, inference-optimized chips, and next-gen laptops
  • What ROCm actually is (and why developers should care)

If you're curious about the future of on-device AI, developer workflows, or AMD's AI strategy, this is a must-watch.

This podcast was sponsored by AMD.
