
The Future of LLMs: Smaller, Faster, Smarter
Discover the secret to training AI with less data!
On this episode of Approximately Correct, we talk with Amii Fellow and Canada CIFAR AI Chair Lili Mou about the challenges of training large language models and how his research on Flora reduces their memory footprint.