Approximately Correct: An AI Podcast from Amii

The Future of LLMs: Smaller, Faster, Smarter

Discover the secret to training AI with less memory!

On this episode of Approximately Correct, we talk with Amii Fellow and Canada CIFAR AI Chair Lili Mou about the challenges of training large language models and how his research on Flora reduces the memory footprint of training.
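
The episode itself is conversation rather than code, but to make the memory-saving idea concrete, below is a minimal, hypothetical sketch of low-rank gradient compression in the spirit of what Flora builds on: optimizer state (here, a momentum buffer) is kept in a small randomly projected space and only decompressed to apply updates. The dimensions, hyperparameters, resampling interval, and toy objective are all illustrative assumptions, not details from the episode or from the published Flora implementation.

```python
import numpy as np

# Illustrative sketch only (not the published Flora code): momentum SGD
# where the momentum buffer lives in a low-rank space obtained by a
# random projection, shrinking optimizer-state memory from O(d) to O(r).

rng = np.random.default_rng(0)

d, r = 10_000, 64          # parameter dimension and compression rank (assumed)
lr, beta = 0.1, 0.9        # learning rate and momentum coefficient (assumed)
resample_every = 100       # how often to redraw the projection (assumed)

w = rng.normal(size=d)     # model parameters
# Random down-projection; in practice it could be regenerated from a
# saved RNG seed rather than stored, keeping the extra memory small.
A = rng.normal(size=(r, d)) / np.sqrt(r)
m = np.zeros(r)            # momentum stored in the compressed space

def grad(w):
    # Toy quadratic objective 0.5 * ||w||^2, so the gradient is w itself.
    return w

for step in range(1, 1001):
    if step % resample_every == 0:
        # Redraw the projection; re-encode the old momentum so the
        # accumulated state survives the change of basis.
        A_new = rng.normal(size=(r, d)) / np.sqrt(r)
        m = A_new @ (A.T @ m)
        A = A_new
    g = grad(w)
    m = beta * m + (1.0 - beta) * (A @ g)   # accumulate in compressed space
    w -= lr * (A.T @ m)                     # decompress only for the update

print("final ||w|| ≈", np.linalg.norm(w))
```

Because the projection entries have variance 1/r, the composition A.T @ (A @ g) recovers g in expectation, so the compressed momentum still points roughly along the true descent direction while the stored state costs O(r) floats instead of O(d).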

