Topics covered: realizing that foundation models are a big deal, scaling, why Percy founded CRFM, Stanford's position in the field, benchmarking, privacy, and CRFM's first and next 30 years.
Transcript: https://web.stanford.edu/class/cs224u/podcast/liang/
- Percy's website
- Percy on Twitter
- CRFM
- On the opportunities and risks of foundation models
- ELMo: Deep contextualized word representations
- BERT: Pre-training of deep bidirectional Transformers for language understanding
- Sam Bowman
- GPT-2
- Adversarial examples for evaluating reading comprehension systems
- System 1 and System 2
- The Unreasonable Effectiveness of Data
- Chinchilla: Training Compute-Optimal Large Language Models
- GitHub Copilot
- LaMDA: Language models for dialog applications
- AI Test Kitchen
- DALL-E 2
- Richard Socher on the CS224U podcast
- you.com
- Chris Ré
- Fei-Fei Li
- Chris Manning
- HAI
- Rob Reich
- Erik Brynjolfsson
- Dan Ho
- Russ Altman
- Jeff Hancock
- The time is now to develop community norms for the release of foundation models
- Twitter Spaces event
- Best practices for deploying language models
- Model Cards for model reporting
- Datasheets for datasets
- Strathern's law