Humans + AI podcast

Cornelia C. Walther on AI for Inspired Action, return on values, prosocial AI, and the hybrid tipping zone (AC Ep35)

12/3/2026

“You and I, we’re part of this last analog generation. We had the opportunity to grow up in a time and age where our brains had to evolve against friction.”

–Cornelia C. Walther

About Cornelia C. Walther

Cornelia C. Walther is a Senior Fellow at the Wharton School, a Visiting Research Fellow at Harvard University, and the Director of POZE, a global alliance for systemic change. She is the author of many books; her latest, Artificial Intelligence for Inspired Action (AI4IA), is due out shortly. She previously spent over 20 years as a humanitarian leader at the United Nations, driving social change globally.

Website:

pozebeingchange

LinkedIn Profile:

Cornelia C. Walther

University Profile:

knowledge.wharton

What you will learn

  • How the ‘hybrid tipping zone’ between humans and AI shapes society’s future
  • The dangers and consequences of ‘agency decay’ as individuals delegate critical thinking and action to AI
  • The four accelerating phenomena influencing humanity: agency decay, AI mainstreaming, AI supremacy, and planetary deterioration
  • Actionable frameworks, including ‘double literacy’ and the ‘A frame’, to balance human and algorithmic intelligence
  • What defines ‘prosocial AI’ and strategies to design, measure, and advocate for AI systems that benefit people and the planet
  • The need to move beyond traditional ethics toward values-driven AI development and organizational ‘return on values’
  • Leadership principles for creating humane technology and building unique, purpose-led organizations in the age of AI
  • Global contrasts in AI development (US, Europe, China, and the Global South) and emerging examples of prosocial AI initiatives

Episode Resources

Transcript

Ross Dawson: Cornelia, it is fantastic to have you on the show.

Cornelia Walther: Thank you for having me Ross.

Ross: So your work is wonderfully humans plus AI, in looking at humans and humanity and how we can amplify the best of ourselves. One really interesting starting point is your idea of the hybrid tipping zone. Could you share with us what that is?

Cornelia: Yes, happy to. I would argue that we’re currently navigating a very dangerous transition where we have four disconnected yet mutually accelerating phenomena happening. At the micro level, we have agency decay, and I’m sure we’ll talk more about that later, but individuals are gradually delegating ever more of their thinking, feeling, and doing to AI.

We’re losing not only control, but also the appetite and ability to take on all of these aspects, which are part of being ourselves. At the meso level, we have AI mainstreaming, where institutions—public, private, academic—are rushing to jump on the AI train, even though there is no medium- or long-term evidence about how the consequences will play out.

Then at the macro level, we have the race towards AI supremacy, which, if we’re honest, is not just something that the tech giants are engaged in, but also governments, because this is not just about money, it’s also about power and geopolitical rivalry. And finally, at the meta level, we have the deterioration of the planet, with seven out of nine boundaries now crossed, some with partially irreversible damages.

Now, you have these four phenomena happening in parallel, simultaneously, and mutually accelerating each other. So the time to do something—and I would argue that the human level is the one where we have the most leeway, at least for now, to act—is now. You and I, we’re part of this last analog generation.

We had the opportunity to grow up in a time and age where our brains had to evolve against friction. I don’t know about you, but I didn’t have a cell phone when I was a child, so I still remember my grandmother’s phone number from when I was five years old. Today, I barely remember my own. Same thing with Google Maps—when was the last time you went to a city and explored with a paper map?

Now, these are isolated functions in the brain, but with ChatGPT, there’s this general offloading opportunity, which is very convenient. But for a human being, I would argue, it’s a very dangerous luxury to have.

Ross: I want to dig into quite a lot of that, but first I want to come back to this. So, just that phrase—the hybrid tipping zone. The hybrid is the humans plus AI: humans and AI are essentially, whatever words we use, now working in tandem. The tipping zone suggests that it could tip in more than one way.

So I suppose the issue then is, what are those futures? Which way could it tip, and what are the things we can do to push it in one way or another—obviously towards the more desirable outcome?

Cornelia: Thank you. I think you’re pointing towards a very important aspect, which is that tipping points can be positive or negative, but the essential thing is that we can do something to influence which way it goes. Right now, we consider AI like this big phenomenon that is happening to us. It is not—it is happening with, amongst, and because of us.

I think that is the big change that needs to happen in our minds, which is that AI is neutral at the end of the day. It’s a means to an end, not an end in itself.

We have an opportunity to shift from the old saying—which I think still holds true—garbage in, garbage out, towards values in, values out. But for that, we need to start offline and think: what are the values that we stand for? What is the world that we want to live in and leave behind?

As you know, I’m a big defender of prosocial AI, which refers to AI systems that are deliberately tailored, trained, tested, and targeted to bring out the best in and for people and planet.

Ross: So again, lots of angles to dig into, but I just want to come back to that agency decay. I created a framework around the cognitive impact of AI, going from, at the bottom, cognitive corruption and cognitive erosion, through to neutral aspects, to the potential for cognitive augmentation. There are some individuals, of course, who are getting their thinking corrupted or eroded, as you’ve suggested; others are using it well and in ways which are potentially enhancing their cognition.

So, there is what individuals can do to achieve that. There’s also what institutions, including education and employers, can do to provide the conditions where people are more likely to see a positive impact on cognition. But more broadly, the question is, again, how can we tip that more in the positive direction? Because it is not just the potential but the reality of cognitive erosion—or agency decay, as you describe it, which I think is a great phrase—that concerns me. So are there things we can do to move away from the widespread agency decay we are in danger of?

Cornelia: Yeah, I think maybe we could marry our two frameworks, because the scale of agency decay that I have developed looks at experience, experimentation, integration, reliance, and addiction. I would say we have now passed the stage of experimentation, and most of us are very deeply into the field of integration. That means we’re just half a step away from reliance, where all of a sudden it becomes nearly unthinkable to write that email yourself, to do that calendar scheduling yourself, or to write that report from scratch. But that means we’re just one step away from full-blown addiction.

At least now, we still have the possibility to compare the before and after, which comes back to us as an analog generation. Now is the time to invest in what I would call double literacy—a holistic understanding of our NI, our natural intelligence, but also our algorithmic intelligence, our AI. That requires a double literacy—not just AI literacy or digital literacy, but the complementarity of these two intelligences and their mutual influence, because neither of them happens in a vacuum anymore.

Ross: Absolutely. So what you described—experiment, integration, reliance, addiction—sounds like a slippery slope. So, what are the things we can do to mitigate or push back against that, to use AI without being over-reliant, so that experimentation leads to integration in a positive way? What can we do, either as individuals or as employers or institutions, to stop that negative slide and potentially push back to a more positive use and frame?

Cornelia: A very useful tool that I have found resonates with many people is the A frame, which looks at awareness, appreciation, acceptance, and accountability. I have an alliteration affinity, as you can see. The awareness stage looks at the mindset itself and really disciplines us not to slip down that slope, but to be aware of the steps we’re taking.

The appreciation is about what makes us, in our own NI, unique, and the appreciation of where, in combination with certain external tools, it can be better. We all have gaps, we all have weaknesses, and that’s what we have to accept. The human being, even though it’s sometimes now put in opposition to AI as the better one, is not perfect either. Probably you and most of the listeners have read Thinking, Fast and Slow by Daniel Kahneman and many others—there are libraries about human heuristics, human fallacies, our limited capacity for actual rational thinking. But the fact that you have read a book does not mean that you are immune to that. We need to accept that this is part of our modus operandi, and in the same way as we are imperfect, AI, in many different ways, is also imperfect.

And finally, the accountability. Because at the end of the day, no matter how powerful our tools are going to be, we as the human decision makers should consider ourselves accountable for the outcomes.

Ross: Absolutely, that’s one of the points I make. We obviously can’t make machines accountable—ultimately, the accountability resides in humans. So we have to design systems, which I think provides a bit of a transition to prosocial AI. So what is prosocial AI, how do we build it, how do we deploy it, and how do we make it the center of AI development?

Cornelia: Thank you for that. Prosocial AI, in a way, is very simple. It’s the intent that matters, but it starts from scratch, so you have the regenerative intent embedded into the algorithmic architecture. It has four key elements that can be measured, tracked, and can also serve to sensitize those who use it and those who design it—tailored, trained, tested, targeted.

The prosocial AI index that I’ve been working on over the past months combines that with the quadruple bottom line: purpose, people, profit, planet. Now all of a sudden, rather than talking in an airy-fairy way about ethical AI—which is great and necessary, but I would argue is not enough—we need to systematically think about how we can harness AI as a catalyst of positive transformation, one that upholds environmental dignity and seeks planetary health. How can we measure that?

Ross: And so, what are we measuring? Are we measuring an AI system, or what is the assessment tool? What is it that is being assessed?

Cornelia: It’s the how and the what for. For example, what data has been used? Is the data really representative? We know that the majority of AI tools are biased. And the other question is: even if it is used for efficiency and effectiveness, to what end?

Ross: Yes, as we are seeing in current conversations around the use of models at Anthropic and OpenAI, there are tools, and there are questions around how they are used, not just what the tools are.

Cornelia: Yes, so again, it comes back to the need for awareness and for hybrid intelligence, because at the end of the day, we can’t rely on companies whose purpose is to make money to give us systems that serve people and planet first and foremost.

Ross: This goes on to another one of your wonderful framings, which is AI for IA—AI for inspired action—around this idea of how do we amplify humans and humanity. Of course, this goes on to everything we’ve been discussing so far. But I think one of the things which is very useful there is AI, in a way, leading to humans taking action which is inspired around envisaging what is possible. So, how can we inspire positive action by people in the framing we’ve discussed?

Cornelia: AI for IA is the title of the new book that’s coming out next month. But also, as with most of the things I’m saying, it’s not about the technology—it’s about the human being. We can’t expect the technology of tomorrow to be better than the humans of today.

As I said before, garbage in, garbage out, or values in, values out—it’s so simple and it’s so uncomfortable, it’s so cumbersome, right? Because we like quick fixes. But unfortunately, AI or technology in general is not going to save us from ourselves, and as it stands, we’re headed straight toward repeating the mistakes made during the first, second, and third industrial revolutions, where technology and innovation were driven primarily by commercial intent.

Now, I would argue that this time around, we can’t leave it at that, because this fourth industrial revolution has such a strong impact on the way we think, feel, and interact, that we need to start in our very own little courtyard to think: what kind of me do I want to see amplified?

Ross: Yes, yes. I’ve always thought that if AI amplifies us, or technology generally amplifies us, we will discover who we are, because the more we are amplified, the more we see ourselves writ large. But we have choices around, as you say, what aspects of who we are as individuals and as a society we can amplify. That’s the critical choice. So the question is, how do we bring awareness to your word around what it is about us that we want to amplify, and how do we then selectively amplify that, rather than also amplify the negative aspects of humanity?

Cornelia: The first thing, and that’s a simple one, is the A frame. I would argue that’s something everyone can integrate in their daily routine in a very simple way, to remind us of the four A’s: awareness, appreciation, acceptance, accountability. The other one, at the institutional level, is the integration of double literacy.

Right now, there’s a lot of hype in schools and at the governmental level about AI literacy and digital literacy. I think that’s only half of the equation. This is now an opportunity to take a step back and finally address this gap that has characterized education systems for many decades, where thinking and thinking about thinking—metacognition—is not taught in schools. Systems thinking, understanding cognitive biases, understanding interplays—now is the time to learn about that. If the future will be populated by humans that interact with artificial counterparts configured to address and exploit every single one of our human Achilles heels, then we would be better advised to know those Achilles heels. So, I think these are two relatively simple ways moving forward that could take us to a better place.

Ross: So this goes to one of your other books, on human leadership for humane technology. Leadership, of course—everyone is a leader to those they touch. We also have more formal leaders of organizations, nations, political parties, NGOs, and so on. But just taking this into a business context, there are many leaders now trying to transform their organizations because they understand that the world is different, and they need to be a different organization. They still need to make money to pay their staff and develop the organization, but they have multiple purposes and multiple stakeholders. So, thinking from an organizational leader’s perspective, what does human leadership for humane technology mean? What does that look like? What are the behaviors? What are the ways we could see it shown?

Cornelia: I think first, it’s a reframing away from this very narrow scope of return on investment, which has characterized the business scene for many decades, and looking at return on values. What is the bigger picture that we are actually part of and shaping here? What’s the why at the end of the day? I think that matters for leaders who are in their place to guide others, and guidance is not just telling people what they have to do, but also inspiring them to want to do it.

Inspiration, at the end of the day, is something that comes from the inside out, because you see in the other person something that you would like in yourself. Power and money are not it—it’s vision. I think this is maybe the one thing that is right now missing. We all tend to see the opportunity, but then we go with what everybody else is doing, because we don’t really take the time to step back and think, well, there is the path of everyone, and there’s another one—how should I explore that one?

Especially amidst AI, where just upscaling your company with additional tools is not really going to set you apart, it matters twice as much to not just think about how do I do more of the same with less investment and faster, but what makes me unique, and how can I now use the artificial treasure chests to amplify that?

Ross: Yes, yes. I think purpose is now well recognized beyond the business agenda. One of the critical aspects is that it attracts the most talented people, but also, over the years, we’ve had more and more opportunities to be different as an organization. Back in the late ’90s and so on, organizations looked more and more the same. Now there are more and more opportunities to be different. The way in which AI and other technologies are brought into organizations gives an extraordinary array of possibilities to be unique, as you’ve described, and distinctive, which gives you a competitive position as well as being able to attract people who are aligned with your purpose.

Cornelia: Yes, exactly. But for that, you need to know your purpose first.

Ross: From everything we’ve just been talking about, or anything else, are there any examples of organizations or initiatives that you think are exemplars or support the way in which, or show how, we could be approaching this well?

Cornelia: I think—this will now sound very biased—but I’m currently working with Sunway University, and I think they are the kind of academic institution that is showing a different path, seeking to leverage technology to be more sustainable, bringing in dimensions such as planetary health, like the Sunway Centre for Planetary Health, and thinking about business in a re-envisioned way, with the Institute for Global Strategy and Competitiveness.

I think there are examples at the institutional level, there are examples at the individual level, and sometimes the most inspiring individuals are not those that make the headlines. That’s maybe, sorry, just on that, for me the most important takeaway: no matter where one sits in the social food chain, the essential thing is, who are you and how can you inspire the person next to you to make it a better day, to make it a better future.

Ross: Yes, in fact, that word “inspired,” as you mentioned before. So that’s Sunway University in Malaysia?

Cornelia: I think they are definitely a very, very good illustration of that.

Ross: Just pulling this back to the global frame—and this gets quite macro, but I think it is very important. It pulls together some of the things we’ve pointed to: the difference between the approaches of the United States, China, and Europe, which are essentially the leaders in AI, and how they’re going about it. But in the global south more generally, I think there are some interesting things. Arguably, there’s a far more positive attitude generally in the populations, a sense of the opportunity to transform themselves, but of course a very different orientation in how they want to use and apply AI and in creating value for individuals, nations, and society. So how would you frame those four—the US, China, Europe, and the global south—and how they are, or could be, approaching the development of AI?

Cornelia: Thank you for that. I think right now there are three mainstream patterns—I’m overly simplifying and aware of that: the US path, which is business above all; the European model, which is regulation above all; and the Chinese model, which is state dominance. I would argue there’s a fourth path, and I think that’s where leaders in the global south can step in.

You might know I’m working, on the one hand, in Malaysia and, on the other hand, in Morocco, on the development of a sort of national blueprint of what prosocial AI can look like. I think now is the time—again, coming back to leadership—to think about how countries can walk a different path and be pioneers in a field that, yes, has been around for several decades, but the latest wave that has been engulfing society since November 2022 is still relatively new. So why not have nations in the global south that are very different from the West chart their own path and make it pro social, pro people, pro planet, and pro potential—that potential that they have themselves, which sets them apart and makes them unique.

Ross: Absolutely. Again, you mentioned Malaysia, Morocco. Looking around the world, of course, India is prominent. There are some African nations which have done some very interesting things. Just trying to think, where are other examples of these kinds of domestically born prosocial initiatives happening? Of course, the Middle East—it’s quite different, because they’re wealthy, though they’re not among the major leaders, but there’s a whole array of different examples. Where would you point to as things which show how we could be using prosocial AI at a national or regional level?

Cornelia: Unfortunately, right now, there is not one country where one could say they have taken it from A to Z, but I think there are very inspiring or positive examples. For example, Vietnam was the first country in ASEAN to endorse a law on AI ethics and regulation—I think that’s a very good one. Also, ASEAN has guidelines on ethics. All of these are points of departure. Switzerland provided a very nice example of what public AI can look like. So there are a lot of very good examples.

The question is not so much about what to do, I think, but how to do it, and why. At the end of the day, it’s really that simple. What’s the intent behind it? What do we want the post-2030 agenda to look like? We know that the SDGs—the Sustainable Development Goals—are not going to be fulfilled between now and 2030. So are we learning from these lessons, or are we following the same pattern of doing more of the same and maybe throwing in a couple of additional indicators? Or can we really take a step back, look ourselves and the world in the face, and think, what have we missed? Now, frame it however you want, but think about hybrid development goals and ways in which means and ends—society and business—come together into a more holistic equation that respects planetary health. Because at the end of the day, our survival still depends on the survival and flourishing of planet Earth, and some might cherish the idea of emigrating to Mars, but I still think that overall the majority of us would prefer to stay here.

Ross: Yes, planet Earth is beautiful, and it’d be nice to keep it that way. How can people find more about your work? Could you just tell people about your new book and any resources where people can find out more?

Cornelia: Thank you so much. They are very welcome to reach out via LinkedIn. Also, I’m writing regularly on Psychology Today, on Knowledge at Wharton, and various other platforms. The new book that you mentioned is coming out next month, and there will be another one, hopefully by the end of the year. Overall, feel free to reach out. I really feel that the more people get into this different trend of thinking, the better. But thank you so much for the opportunity.

Ross: Thanks so much for all of your work, Cornelia. It’s very important.

