
The Future of LLMs: Smaller, Faster, Smarter
Discover the secret to training AI with less data!
On this episode of Approximately Correct, we talk with Amii Fellow and Canada CIFAR AI Chair Lili Mou about the challenges of training large language models and how his research on Flora reduces the memory footprint of training.