Humans + AI podcast

Nina Begus on artificial humanities, AI archetypes, limiting and productive metaphors, and human extension (AC Ep38)

2026-04-01

“Fiction has this unprecedented power in tech spaces. The more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer.”

–Nina Begus

About Nina Begus

Nina Begus is a researcher at the University of California, Berkeley, leading a research group on artificial humanities, and the founder of InterpretAI. She is author of Artificial Humanities: A Fictional Perspective on Language in AI, which received an Artificiality Institute Award, and First Encounters with AI.

Website:

ninabegus.com

LinkedIn Profile:

Nina Begus

Book:

Artificial Humanities

What you will learn

  • How ancient myths and archetypes influence our understanding and design of AI
  • Why the humanities—literature, philosophy, and the arts—are crucial for developing more thoughtful and innovative AI systems
  • The dangers of limiting AI concepts to human-centered metaphors and the need for new, more expansive imaginaries
  • How metaphors shape our interactions with AI products and the user experiences companies choose to enable
  • The challenges and possibilities of imagining forms of machine intelligence and language beyond human templates
  • Why collaboration between technical experts and humanists opens new frontiers for creativity and responsible technology
  • What makes writing and artistic creation uniquely human, and how AI amplifies—not replaces—these impulses
  • Practical ways artists, engineers, and thinkers can work together to explore new relationships and futures with AI

Episode Resources

Transcript

Ross Dawson: Nina, it is wonderful to have you on the show.

Nina Begus: Thank you for having me.

Ross Dawson: You’ve written this very interesting book, Artificial Humanities, and I think there’s a lot to dig into. But what does that mean? What do you mean by artificial humanities?

Nina Begus: Well, this was really a new framework that I’ve developed while I was working on the relationship between AI and fiction, and I started working on this about 15 years ago when I realized that fiction has this unprecedented power in tech spaces. So this is how it all started, but then the more I started talking to engineers about their technical problems, the more I realized there’s so much more that humanities could offer in this collaborative, generative approach that I’ve developed.

I would say that now, as the field stands, it’s really a way to explore and demonstrate how humanities—as broad as science and technology studies, literary studies, film, philosophy, rhetoric, history of technology—how all of these fields can help us address the most pressing issues in AI development and use. And it’s been important to me that this approach uses traditional humanistic methods, theory, conceptual work, history, ethical approaches, but also that it’s collaborative and exploratory and experimental in this way that you can look back into the past and at the present to make a more informed choice about the future. You can speculate about different possibilities with it.

Ross Dawson: Well, art is an expression of the human psyche, or even more, it is the fullest expression of humanity, and that’s what art tries to do. Also, I’m a deep believer in archetypes, human archetypes, and things which are intrinsic to who we are, and that’s something which you can only really uncover through the arts.

Now we have arguably seen all these archetypes play out in real time, these modern myths being created right now in the stories being told of how AI is being created. So I think it’s extraordinarily relevant to look back at how we have depicted machines through our history and our relationship to them.

Nina Begus: Yes, this is the reason why I started exploring this topic, actually, because there were so many ancient myths, these archetypal narratives that I’ve seen at the same time, both in technological products that were coming to the market and in the way technologists were thinking about it, and also in fictional products and films and novels in the way we imagined AI. I framed my book around the Pygmalion myth, but there are many, many other myths—Prometheus, Narcissus, the Big Brother narrative, and so on—that are very much doing work in the AI space.

The reason why I chose the Pygmalion myth is because it’s so bizarre in many ways: you have this myth where a man creates an artificial woman, and then in the process of creation, falls in love with her. So there’s the creation of the human-like, and there’s also this relationality with the human-like. You would think this would not be a common myth, but quite the opposite—I found it everywhere I looked. It wasn’t called the Pygmalion myth, but the motif was there. I found it on the Silk Road, in ancient folk tales, in Native American folk tales, North Africa, and so on.

So I think this kind of story is actually telling us a lot about how humans are not rational, how we have some very deeply embedded behaviors in us, and one of them is that we anthropomorphize everything, including machines. So I think this was a really important takeaway that we got already from the early days of AI with the first chatbot, Eliza. We’ve learned that that will be a feature of us relating to machines.

Ross Dawson: So Joseph Campbell called the hero’s journey the monomyth, as in, there is a single myth. And I guess what you are doing here is—well, if you agree with that, which I’d be interested in—is that there are facets. The classic hero’s journey is quite simple, but there are facets of that monomyth, or something intrinsic to who we are, that is around this creation. And in this case, as you say, this relation we have with what we have created. Would you relate that at all to Joseph Campbell’s work?

Nina Begus: I haven’t thought about it in this way, because I thought about myth and myths less as a storytelling issue, which here is definitely happening—the hero goes on a task, returns back changed, and maybe changes something in the community. The myths that I was looking into and the metaphors that I was exploring, primarily this huge metaphor of AI as a human mind, as an artificial reason—I think it works differently. It’s less of a narrative; it’s more of an imaginary of how or towards what we are building.

I think this is a big problem, actually, because the imaginary around AI is very poor. What you get is mostly imagining machine intelligence on human terms, and a lot of people are bothered by that in the AI discourse—right, when you say the machine thinks, or the machine learns, or it has a mind, and some people go as far as to say it has consciousness. I think this kind of debate is actually not that productive. I think it’s more important to see how all these different AI products that we’ve created—and mostly when we talk about AI, people think of language models now—are very much designed as a sort of character, almost as an artificial human that, in literature, authors have been creating for a long time.

So I think in that case, we can get back to a hero’s journey. But I think what I was looking at was actually more on the surface level of what kind of shortcuts we are using with these metaphors that we’re employing when building and using AI. I think the book makes a really good case showing that, yes, this is actually a very cultural technology. It’s very much informed by our imaginaries. One surprising part of it was really how hard it was to break out of this human mold. It was pretty much impossible to find examples of machines that are not exclusively human-like. I think Stanislaw Lem is one of the rare writers who can consistently deliver this kind of imaginary.

Even looking at more recent works, like popular films such as Hollywood’s Ex Machina or Her, you can see how the technologists themselves would say, “Oh, we were influenced by this film,” in a way that it affirmed their product development trajectory. You can see it now, at this moment, with OpenAI launching companionship. So in many ways, not a lot has changed.

Ross Dawson: Yeah, there’s a lot to dig into there. I just want to go back—in a sense, Pygmalion is a metaphor, but it’s also a myth. It is a story: creates a woman, and then falls in love with her, and then whatever happens from there. There is this, something happens, and then something else happens. That’s what a story is. I think that can impact the implicit metaphor, but coming back to the metaphor—so George Lakoff wrote the beautiful book Metaphors We Live By. I think the way the brain works is in metaphors and analogies to a very large degree. Some of those are enabling metaphors, and some of those are not very useful metaphors.

I think part of your point is that some of the metaphors that we have for thinking about AI and machines are not useful. There may be, or we could create, some metaphors that are more useful. So, what are some of the most disabling metaphors, and what are some of the ones which could be more constructive?

Nina Begus: Yes. So I think this main metaphor that I’ve mentioned—of AI as a human mind—is very limiting. I think it really limits the machinic potential to actually do something good with it. The fact that we’re still using the criteria that were made for humans, like different criteria developed on human language—the Turing test was one of them, right, a while ago. Now we have stricter ones. I think this tells you a lot about how we actually evaluate AI and how even these benchmarks that are supposed to be quantitative are actually often qualitative, often stories, like mini-narratives.

But yeah, when we look at different metaphors in this space, there are other ones that also emerge from fiction. I mentioned the Big Brother, the AI as an Oracle, and we need to be aware that these ideas inform the very interaction we have with AI. If we think of it as a mirror, we’re going to use it differently—it’s almost as a bouncing board. If we think of it as a teacher, or as a coach, or as an assistant, it would again create a different use. So I think there are a lot of these metaphors that the companies themselves are trying to decide which one they will go with, because it completely changes the user and the interaction.

I think they’re also very cultural, even though you might say, “Oh, it’s a categorical mistake to treat a machine as a human.” I think you can see this kind of treatment across cultures, at least in part, and it doesn’t mean that we consider it human. It just means that we’re engaging with it on our own terms, as if it were human.

Now, what could be productive? I do think metaphors, even if they’re not accurate, can be productive. My goal, really, with the book was to break out of this projection of what the machine could be, to find in this exploratory way other directions, other landscapes where we couldn’t go because we’re being limited by our imaginary, by our ideas. So in this way, I think humanistic approaches can be very helpful to designers, to technology builders, to artists, to explore the novelty that so many of these sectors are after.

Ross Dawson: Yeah, and I guess people latch on to what they know. I think that’s part of the thing where with AI, “Oh, it’s like a human. Let’s treat it like a human, and let’s make it like a human.” It is, amongst other things, a lack of imagination. That’s where the humanities, the arts, can offer us—those who have the imagination to be able to envisage different possibilities or relationships.

But I guess part of it is also that humans relate, and so we have learned to relate to other humans and also to other animals and hopefully to nature as well. But these are all established patterns of relating. So do we need to discover in ourselves new ways of relating to new categories—things which are not humans, not animals, and not nature?

Nina Begus: Exactly, this is the exact problem we’re dealing with, because we’re dealing with a yet unexplored, yet undefined relation, and we’re using old, outdated terms for that relation. This is why we don’t really have a good way of describing it and establishing it. It will take a while for this to develop, which is fine, but we need to realize that there are some concepts that we’re using that we better leave behind and go ahead by building new ones.

This is why I think it’s really important to work in a more interdisciplinary collaboration, so that you can see what you can actually build from the technical perspective, so that you can see what these machines are actually capable of. Because you usually don’t know when you create them, right? Machine learning is sort of exploratory by design.

Ross Dawson: So, just to call it out more explicitly, what are the metaphors you think are the most destructive or most inappropriate, and what are some of the ones which you think are the most promising?

Nina Begus: Well, I’m just writing on the Midas myth, which is sort of the opposite of the Pygmalion myth. With Pygmalion, you lean into that human imitation, but with Midas, you lean into the liminality that Midas presents as this sort of hybrid creature. I think leaning into the boundaries that we draw for ourselves—and now AI is not cooperating with them—this is where the productive part will be in actually creating something that has philosophical dignity, but also a kind of productive trajectory for the machines to go.

I feel like we’re still in this first phase of developing AI, because when you look at it historically, we haven’t really moved from the conceptual and philosophical premises that were established in the 1940s, 50s, and 60s for this technology. We have now gotten the technology that caught up to the ideas from the 60s, but we’re still stuck in the same conceptual space.

Ross Dawson: Yeah, very much so. And, you know, of course, what is AGI, which everyone talks about, is basically—the only way in which people seem to be able to frame it is as relative to humans, which is the only reference point we have. I mean, there’s, of course, animal intelligence, but we still default to the human one. It is, again, that lack of imagination—saying, “Well, intelligence, oh, intelligence is what humans do, so let’s do something which is the same as that,” whereas there’s so much white space in what intelligence could be.

I think this almost comes back to definition. When people say intelligence, the word, when they use the word intelligence, they are referring to what humans do. It’s not a general term, and so it all becomes a language problem as well, because we are so rooted to relating our language to human capabilities, as opposed to a more general potential.

Nina Begus: Yes, I think you’re really on to something here, because I can see it also—because I work with animal communication researchers, and we’re finding things there that we didn’t find because we limited ourselves to thinking language is just a human production, that it needs a human subject. Now, as soon as we got rid of this presumption, we’re finding new things, things that are basically parallel to what we do in our language.

So language is in a space of tension because it’s being attacked both from the animal side and from the machinic side, which is why I really focused on language in this book. It’s not a coincidence that we centered artificial intelligence in language as the interface, because this is how we relate to the world—this is our interface to talk to each other, to understand each other.

I think the fact that language is coming under such pressure as an interface brings with it a lot of other concepts that are being challenged. Are only humans creative? Is there a natural creativity, machinic creativity? Is there a different kind of intelligence that’s maybe solely biological, embodied? How do we think about cognition? How do we think about culture? In AI and in the natural world, there’s so much that comes with it: agency, autonomy, freedom, community, which I think we will be grappling with for the next few decades, at least.

Ross Dawson: I think you alluded before to the potential for AI to have its own languages. 

Nina Begus: It’s happening already. The reason why I like Stanislaw Lem so much is because he can actually think about a machine—back in the 1970s, he’s doing that—about a machine that’s not human-like, that’s not limited to human language. It is trained on human language, but then it goes its own way, where the human linguistic ceiling just cannot go anymore.

We’re already seeing that in the models, in Berkeley’s Biological Artificial Intelligence Lab, in the models that are not large language models, but generative adversarial networks that are based on speech. We see that as they are learning the words, they are encoding some information into silences that we don’t know what it is.

I think what’s really exciting to me are two things about language in machines. The first one is, what is this non-human production of language? We did not think that non-humans can produce language, even though we had parrots who had to crawl their way to us to speak in “humanese,” to show that they have some kind of intelligence—even if it’s just parroting, even if it’s just what we call imitation, which some people consider not to be intelligence.

We’ve had these examples before, but now it’s gotten nuclear—on this scale that LLMs are performing, it’s really challenged a lot of our solely human attributes: creativity, storytelling. A lot of journalists come to me because there’s this existential fear of machines taking over their work and so on. So we’ve been thinking about those things, and now it’s actually happening.

Ross Dawson: One of the other key points here, I think, is that humanity is—the arts—there’s so much, as you mentioned, in terms of fiction, in terms of films, in terms of visual arts, and many other artistic domains. We have reference points that we use, and the amount which people refer to the movie Her in the last years is pretty extraordinary, partly because it’s obviously coming very much true. I think the Ex Machina story is very interesting as well, as are many others in the past.

But there is also this act of imagination. There are people who have written these books, who have crafted these films, who have created these things, and they are the ones who have been not just manifesting our human psyche, but also pushing that out and coming up with ideas which others haven’t had, to give us something. So one thing we can certainly do is mine and dig into what has been created. But is there a way to interface through this to this act of imagining, which can give us new artifacts and ways of thinking and ways of relating?

Nina Begus: Yes, I think imagination and humanities in general are going to become more and more important, because AI will do a lot of technical work, but imaginaries—this is what we really excel at. It’s actually interesting to see how you think fiction is this unbounded landscape where you can imagine anything, and yet it’s really hard to find examples of machines that are beyond the human.

Even these writers, like the screenwriters for Her and Ex Machina, create these completely Pygmalion-esque films, where you have an artificial woman leading a relationship with a human man, and so on. For the whole film, you have her act as a human-like entity. But then at the end of each of those films—well, particularly in Her—Spike Jonze really tried to break out of this and show her AI side. Basically, there was no language to describe it, so he resorted to a metaphor—the metaphor of a book, where Samantha, the operating system, explains that her world is falling apart, like the way words are floating further and further apart in a book. That’s how she’s able to describe it; that’s the closest she gets.

And then in Ex Machina, Alex Garland really wanted to portray the world from the social robot Ava’s perspective in a visual way. He wrote down a scene, but he said, “I failed to execute it visually. I just couldn’t do it well.” So instead, he gave us a different scene that’s shot from afar, where Ava boards a helicopter and has to undergo her Turing test—the helicopter pilot cannot recognize her as a robot; he needs to think she’s a human woman.

There have been attempts, I think even in Garland’s next film Annihilation, they’re trying to set the grounds for something that’s entirely new and hard to imagine. I think a big takeaway for us is this is very hard to do.

Ross Dawson: Yes, well, given that context, I do want to—as in the human plus AI framing—given all of this, what is it that we can do or should be doing in order to amplify our humanity, our capabilities, the positive aspects of what it is to be human? How can we relate to or use AI in order to amplify the best of us?

Nina Begus: Yeah, I actually had, while I was writing the book Artificial Humanities, this other dream project to work with writers—professional writers, creatives, people who live in a world of words—to see what they make of AI. I waited a little bit for the public’s polarized reactions to calm down a bit and gathered 16 writers, some of whom already made a space for themselves in the field, like Sheila Heti and Ken Liu and Ted Chiang, and then some of the more junior writers who I knew were thinking about that—a Netflix screenwriter, and so on.

I gathered them to see—I think the creative people are really the answer here—I gathered them to see how they approach this very human part of the new human and AI collaboration zone. What was common across a lot of essays that are coming out in October under the title “First Encounters with AI” is this argument that, well, AI doesn’t have subjectivity, it doesn’t have emotions, it doesn’t have a body, it doesn’t have experience, it doesn’t have meaning—all of these things that really make us human, all of these parts that actually make art compelling and literature compelling.

So Ken Liu’s argument, for example, was, let’s leave machines what they’re good at—they’re good at imitating and copying—and we’re good at interpreting, we’re good at creating and imagining. I think this is really a way to go with this. This catastrophizing that’s very present in the public discourse, I think, is a bit misleading. I wish we had a more nuanced approach to what’s actually happening, particularly in the space of writing.

Obviously, AI is a groundbreaking technology that affects pretty much every one of us and all the sectors, but when it comes to writing, we just don’t think it’s killable. We think that there’s this perennial impulse that humans have to play with language, and that is not going to go away with AI. We’re just going to amplify it through AI, through this new possibility that has now opened in many ways.

I like to think about AI as—you know, we’ve figured out how to fly. As soon as we figured out the physics of flight, we had planes and helicopters and drones and kites, and these are the new possibilities for human activities. In the same way, we figured out the machine learning principles, and now we have large language models and diffusion models, and we have GANs and so on, and there will be more. These are the new spaces of possibility that have opened for our activities, for our spirit to work on, but they do not replace the human in a meaningful way. It’s more about extension than it is about automation.

Ross Dawson: Yeah, that’s a wonderful way of framing it. So where can people go to find out more about your work?

Nina Begus: I have a pretty populated website with my name, ninabegus.com, where I write about my books, I write about my public work. I have videos on there, podcasts, links, and so on. I also have a pretty lively lab with a lot of collaborators and students, where a lot of what I imagined when writing Artificial Humanities happens—a lot of collaborative projects. We have artists, we have engineers, we have philosophers that work on the same question, but come at it from very different backgrounds and with very different skills. I think this is becoming more and more important in the world of AI.

Ross Dawson: Yes, yes, bringing all of those disciplines and frames and thinking together. That’s wonderful. I love what you’re doing—very important. I hope the messages ripple through, and obviously wonderful to be able to share this with the Humans Plus AI audience. Thank you so much.

Nina Begus: Thank you, Ross, and thank you all for listening.

