
The Future of LLMs: Smaller, Faster, Smarter
Discover the secret to training AI with less data!
On this episode of Approximately Correct, we talk with Amii Fellow and Canada CIFAR AI Chair Lili Mou about the challenges of training large language models and how his research on Flora reduces their memory footprint during training.