
Ask ChatGPT how to build a bomb, and it will flatly respond that it “can’t help with that.” But users have long played a cat-and-mouse game to try to trick language models into providing forbidden information. Just as quickly as these “jailbreaks” appear, AI companies patch them by simply filtering out forbidden prompts before they ever reach the model itself.
Recently, cryptographers have shown how the defensive filters put around powerful language models can be subverted by well-studied cryptographic tools. In fact, the very nature of this two-tier system, a filter standing guard over a more powerful language model inside it, creates gaps in the defenses that can always be exploited. In this episode, Quanta executive editor Michael Moyer tells Samir Patel about the findings and implications of this new work.
Audio coda courtesy of Banana Breakdown.