
There is something undeniably disorienting about the way AI features in public and political discussions.
On some days, it is portrayed in utopian, almost messianic terms — as the essential technological innovation that will at once turbo-charge productivity and discover the cure for cancer, that will solve climate change and place the vast stores of human knowledge at the fingertips of every human being. Such are the future benefits that every dollar spent, every resource used, will have been worth it. From this vantage, artificial general intelligence (AGI) is the end, the ‘telos’, the ultimate goal, of humanity’s millennia-long relationship with technology. We will have invented our own saviour.
On other days, AI is described as representing a different kind of “end” — an existential threat to human life, a technological creation that, like Frankenstein’s monster, will inevitably lay waste to its creator. The fear is straightforward enough: should humanity invent an entity whose capabilities surpass our own and whose modes of “reasoning” are unconstrained by moral norms or sentiments — call it “superintelligence” — what assurances would we have that that entity would continue to subordinate its own goals to humankind’s benefit? After all, do we know what it will “want”, or whether the existence of human beings would eventually pose an impediment to its pursuits?
Ever since powerful generative AI tools were made available to the public not even three years ago, chatbots have displayed troubling and hard-to-predict tendencies. They have deceived and manipulated human users, hallucinated information, spread disinformation and engaged in a range of decidedly misanthropic “behaviours”. Given the unpredictability of these more modest algorithms — which do not even approximate the much-vaunted capabilities of AGI — who’s to say how a superintelligence might behave?
It’s hardly surprising, then, that the chorus of doomsayers has grown increasingly insistent over the last six months. In April, a group of AI researchers released a hypothetical scenario (called “AI 2027”) which anticipates a geopolitical “arms race” in pursuit of AGI and the emergence of a powerful AI agent that operates largely outside of human control by the end of 2027. In the same vein, later this month two pioneering researchers in the field of AI — Eliezer Yudkowsky and Nate Soares — are releasing their book, If Anyone Builds It, Everyone Dies: The Case Against Superintelligent AI.
For all this, there is a disconcerting irony that shouldn’t be overlooked. Warnings about the existential risk posed by AI have accompanied every stage of its development — and those warnings have been articulated by the leaders in the field of AI research themselves.
This suggests that warnings of an extinction event due to the advent of AGI are, perversely, being used both to spruik the godlike potential of their companies’ products and to justify the need for gargantuan amounts of money and resources to ensure “we” get there before “our enemies” do. Which is to say, existential risk is serving to underwrite a cult of AI inevitabilism, thus legitimating the heedless pursuit of AGI itself.
Could we say, perhaps, that the very prospect of some extinction event, of some future where humanity is subservient to superintelligent overlords, is acting as a kind of decoy, a distraction from the very real ways that human beings, communities and the natural world are being exploited in the race to be the first to create artificial general intelligence?
Guest: Karen Hao, author of Empire of AI: Inside the Reckless Race for Total Domination.