ToKCast podcast

Ep 248: AI and Philosophy of Science

10/13/2025

This is the extended "director's cut" of a talk delivered for "RatFest 2025" (next year to be "Conjecture Con"). It also serves as a supplement to my "Doom Debates" interview, which can be found here: https://youtu.be/koubXR0YL4A?si=483M6SPOKwbQYmzb

It is simply assumed that some version of "Bayesian reasoning" is how AI will "create" knowledge. This misconception permeates the https://ai-2027.com paper, Bostrom's and Yudkowsky's work on the topic, that of every other AI "Doomer", and, at the other extreme, the so-called "AI Accelerationists". All of this reveals a deep misconception about how new explanations are generated, which in turn stems from a deep misconception about how science works, because almost no one in the field of AI seems to think the *philosophy of* science is even relevant. I explain what has gone wrong:

00:00 Introduction

09:14 The Big Questions and the new Priesthoods

18:40 Nick Bostrom and Superintelligence

25:10 "If Anyone Builds It, Everyone Dies" and Yudkowsky

33:32 Prophecy, Inevitability, Induction and Bayesianism

41:42 Popper, Kuhn, Feyerabend and Lakatos

49:40 AI researchers ignore the philosophy of science

58:46 A new test for AGI from Sam Altman and David Deutsch?

1:03:35 Accelerationists, Doomers and "Everyone dies"

1:10:21 Conclusions

1:15:35 Audience Questions
