Towards Data Science podcast

96. Jan Leike - AI alignment at OpenAI


The more powerful our AIs become, the more we’ll have to ensure that they’re doing exactly what we want. If we don’t, we risk building AIs that pursue creative solutions with side effects that are undesirable, or downright dangerous. Even a slight misalignment between the motives of a sufficiently advanced AI and human values could be hazardous.

That’s why leading AI labs like OpenAI are already investing significant resources into AI alignment research. Understanding that research matters if you want to know where advanced AI systems might be headed, and what challenges we might encounter as AI capabilities continue to grow — and that’s what this episode of the podcast is all about. My guest today is Jan Leike, head of AI alignment at OpenAI, and an alumnus of DeepMind and the Future of Humanity Institute. As someone who works directly with some of the world’s largest AI systems (including OpenAI’s GPT-3), Jan has a unique and interesting perspective to offer, both on the current challenges facing alignment researchers and on the most promising future directions the field might take.

---

Intro music:

➞ Artist: Ron Gelinas

➞ Track Title: Daybreak Chill Blend (original mix)

➞ Link to Track: https://youtu.be/d8Y2sKIgFWc

---

Chapters:

0:00 Intro

1:35 Jan’s background

7:10 Timing of scalable solutions

16:30 Recursive reward modeling

24:30 Amplification of misalignment

31:00 Community focus

32:55 Wireheading

41:30 Arguments against the democratization of AIs

49:30 Differences between capabilities and alignment

51:15 Research to focus on

1:01:45 Formalizing an understanding of personal experience

1:04:04 OpenAI hiring

1:05:02 Wrap-up
