Making AI Work: Fine-Tuning, Inference, Memory | Sharon Zhou, CEO, Lamini
In this episode, we reconnect with Sharon Zhou, co-founder and CEO of Lamini, to dive deep into the ever-evolving world of enterprise AI.
We discuss how the AI hype cycle is evolving and what enterprises are doing to stay ahead, break down the different players in the inference market, explore how Memory Tuning reduces hallucinations in AI models, and dig into the role of agents in enterprise AI and the challenges of making them real-time and reliable.
Lamini
Website - https://www.lamini.ai
Twitter - https://x.com/laminiai
Sharon Zhou
LinkedIn - https://www.linkedin.com/in/zhousharon
Twitter - https://x.com/realsharonzhou
FIRSTMARK
Website - https://firstmark.com
Twitter - https://twitter.com/FirstMarkCap
Matt Turck (Managing Director)
LinkedIn - https://www.linkedin.com/in/turck/
Twitter - https://twitter.com/mattturck
(00:00) Intro
(02:18) The state of the AI market in July 2024
(10:51) What is Lamini?
(11:43) What is Inference?
(15:36) GPU shortage in the enterprise
(18:06) AMD vs Nvidia
(22:10) What is Lamini's final product?
(25:30) What is Memory Tuning?
(29:01) What is LoRA?
(32:39) More on Memory Tuning
(35:51) Sharon's perspective on AI agents
(40:01) What is next for Lamini?
(41:54) Reasoning vs pure compute in AI