
Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation (AC Ep36)
“Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through.”
–Joanna Michalska
About Dr Joanna Michalska
Dr Joanna Michalska is Founder of Ethica Group Ltd., Co-Founder of The Strategic Centre, and an advisor to boards on AI risk, ethics, and governance. She holds a PhD in Strategic Enterprise Risk Management and has twenty years' experience leading enterprise risk, strategy, and transformation at J.P. Morgan and HSBC.
Website:
LinkedIn Profile:
What you will learn
- How boards and executives can rethink governance and accountability in the age of AI
- The importance of embedding governance into organizational ecosystems for agile, responsible AI adoption
- How to map and assign human accountability for both automated and hybrid AI-human decisions
- The decision architecture needed for scalable oversight, intervention, and escalation pathways
- Practical examples of effective AI oversight in areas like fraud detection and exception handling
- Steps for complying with new regulations like the EU AI Act, including inventorying AI systems and risk tiering
- Why human qualities like emotional intelligence, psychological safety, and honest communication are critical in AI-driven organizations
- How leaders can foster organizational resilience and help teams adapt by building AI literacy, retraining, and supporting personal growth
Episode Resources
Transcript
Ross Dawson: Joanna, it’s a delight to have you on the show.
Joanna Michalska: Well, thank you for having me, Ross.
Ross Dawson: So, AI is wonderful, but it also brings us into a whole lot of new territory where we have to be careful in various ways. I’d love to just hear, first of all, the big framing around how boards and executive teams need to be thinking about governance and accountability as AI is incorporated more and more into work and organizations.
Joanna Michalska: I think we’re all very excited about the capability that exists today to help us enhance our performance and the way we think about strategic execution for our organizations. It has multidimensional consequences for how we adapt it. What’s very important right now is, as executives and boards think about accelerating their ambitions and growth plans, there needs to be awareness of two components.
First, how do we as leaders, as humans, need to adapt to that new environment? There are new conditions, or perhaps existing conditions that really need to be enhanced. They’re very important to exist in order to be able to adapt and to scale. Second, do we actually have the right systems in place to enable that scale?
I think it’s important to recognize that, yes, governance has always existed, but the way it existed was more as external supporting scaffolding, rather than being built into an organizational ecosystem. We also need to have the right leadership in place to ensure that decisions are made in the right way and the organization is designed in a much more robust, agile way. These two conditions are critical for not only increasing adoption, but also doing so in a safe and responsible way, especially as we expand our ambitions for the future. It’s exciting, but there’s also a lot of caution and a lot of questions being asked by executives at this time.
Ross Dawson: Yes and I guess the more we can address those concerns upfront, the more it enables us to do. I have this idea of minimum viable governance—at least having some governance in place so we don’t go too badly astray. But I always think of governance for transformation as: how do you set governance not as a brake to slow you, but in fact to accelerate you, because you have confidence in how you’re going about it?
Joanna Michalska: Absolutely! I think the mindset shift is very important, because governance, to your point, has always been seen as a compliance-driven thing that we must do because regulators require us to, and we need to demonstrate we have these policies and procedures in place and the right people in the right positions. Now, what the new environment is requiring of us—as executives, even board members—is a different set of responsibilities that really cannot be assumed as pre-existing.
In this accelerated environment—let’s call it that, rather than just “AI,” because it’s so overused and can mean so many different things—where the automation rate is fast and overtaking everything, governance needs to change. It can’t be an afterthought or something we designed at one point in the past and now just try to fit into what’s happening. It really needs to become a well-designed, living organism. It needs to organically evolve. It needs to have the right people with the right accountability that is well understood. Accountability that was designed in the past needs to be looked at, discussed, and understood by all executives and across the organization, cross-functionally, to really work.
Another important thing is to make sure executives have the right level of ownership and responsibility to ensure the conditions exist to enable that system to work. That’s a very difficult thing to do, because now you’re talking about having designed human oversight that doesn’t just become a “human in the loop,” but the right human in the right loop. By “right,” I mean: does this person, or these people, understand exactly what the output of the automated system is? How has this decision been made? Is there the right level of executive oversight when that decision is already made? How confident are we that we can say, with a level of certainty, “I’m comfortable with this, and this is not going to create negative consequences I’m not willing to accept”? That’s not an easy thing to do—to create those conditions of trust and safety.
Ross Dawson: Particularly when there are so many decisions and outputs throughout the organization. Let’s go into decision making. I’ve built a little framework around going from humans-only through to AI-only decisions. Hopefully, there are no purely human decisions anymore; at least you can ask an AI, “Am I crazy or not?” even if it’s a human decision. Some decisions are already fully automated, but they still need oversight.
You can bring in exceptions, conditional things, humans in the loop for approval, humans in the process, or build an explainability layer. There’s a whole array of different things. For every decision, you need to create the right way to implement it. In an organization with that profusion of different decisions and possible approaches, how can you actually make that happen?
Joanna Michalska: Yeah, it’s a great question. Decisions are at the center of everything, and the quality of those decisions—and the whole architecture, how it’s designed for decisions to be made—is really important. It doesn’t stay static; it evolves as the organizational structure evolves. Questions like accountability—what does it look like, and what is the governance around accountability—are critical.
Intervention capability is also very important, because with this level of automation, the whole design of how automated decisions are made raises multiple questions. Are these decisions made by old algorithms that are very simple, where the risk is determined by a set of rules? Is there clarity around who actually has the decision intervention rights in the organization, and how does that roll up to an executive layer?
Determining accountability, the ability to intervene, the time to intervention, the time to stop, pause, change, alter—there are so many different layers that need to be thought through. The quality of human decision-making, and determining when a human is able to review decisions made by complex systems—whether agentic or whatever structure the organization has—is critical at any level, whether it’s middle management, executive management, or board. There are different layers of how the architecture requires design and measurement.
Escalation pathways are another one. People will not naturally escalate if they fear negative consequences, retaliation, or any type of fear created because there isn’t psychological safety or trust within the organization. Even if there is an escalation protocol in place within the decision architecture, how do we know that people will raise the problem?
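The layers described above, accountability, intervention rights, and escalation pathways, can be made concrete as a simple decision register. This is an illustrative sketch only, not a framework from the conversation; all class and field names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """One entry in a hypothetical decision-architecture register."""
    name: str                 # the decision being automated
    automation_level: str     # "human", "hybrid", or "automated"
    accountable_owner: str    # the named human accountable for outcomes
    intervention_right: str   # who may stop, pause, or alter the system
    escalation_path: list[str] = field(default_factory=list)  # ordered roles

    def escalate(self, from_role: str) -> str:
        """Return the next role in the escalation pathway after from_role."""
        idx = self.escalation_path.index(from_role)
        if idx + 1 >= len(self.escalation_path):
            raise ValueError(f"{from_role} is the final escalation point")
        return self.escalation_path[idx + 1]

# Example: even a fully automated decision keeps a named accountable human.
record = DecisionRecord(
    name="transaction sanction screening",
    automation_level="automated",
    accountable_owner="Head of Financial Crime",
    intervention_right="Head of Financial Crime",
    escalation_path=["Analyst", "Team Lead", "Head of Financial Crime", "CRO"],
)
print(record.escalate("Team Lead"))  # -> Head of Financial Crime
```

The point of the register is that escalation is designed in advance: every automated decision carries an explicit owner and a known next step, so people do not have to guess who to raise a problem with.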
Ross Dawson: The accountability. Of course, only humans are accountable. Ultimately, the board and their executives are accountable. But what you’re suggesting, it sounds like, is that for every decision, there is somebody where you can say, “That person is accountable.” Obviously, it cascades up to who they’re reporting to, but there is human accountability for every decision made, even if it’s a thousand decisions where somebody has oversight and responsibility that those are the right decisions.
I want to talk about escalation and how that might happen, but perhaps we can ground this with a couple of examples. What are some examples of decisions made in organizations—hopefully well-designed, or perhaps not so well-designed and haven’t worked out?
Joanna Michalska: Yes, I have a couple of good examples where an automated system allows review of multiple false positives, where a human would spend months or weeks looking at exceptions. From an optimization perspective, that’s really valuable. For example, in fraud detection or sanction screening, you can design a process where your algorithm applies rules very quickly with specific risk tiering. You know which decisions need an additional level of checks—let’s say, automated checks. With a confidence of over 90%, your queue for checking and looking at exceptions—what would otherwise have to be done by a human—is not really necessary; it’s done by the algorithm.
In terms of decisioning, from a human oversight perspective, you’re really looking at things that are very high risk and require additional human review, or exceptions to the usual flow that break the rule designed for the algorithm to execute. Then, somebody picks it up and looks at it. These are powerful examples where there’s potentially a high human risk of misinterpreting something, but if the algorithm is designed appropriately and has the right governance in place, it can really speed things up and make space for a human who otherwise would be involved in that process to actually develop and do something different.
In the example I’m talking about, we’ve focused on retraining people to expand their roles and do something else, rather than just being involved in checking decisions or reviewing boring exceptions that were really false positives.
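The exception-handling pattern in this example, auto-clearing high-confidence screening results while routing genuine exceptions and high-risk cases to a human queue, can be sketched roughly as follows. The 0.9 threshold, the field names, and the routing labels are illustrative assumptions, not details from the episode.

```python
CONFIDENCE_THRESHOLD = 0.90  # illustrative: auto-clear above this confidence

def route_alert(alert: dict) -> str:
    """Decide whether a screening alert is handled automatically or by a human.

    alert: {"risk_tier": "low" | "medium" | "high", "confidence": float}
    Returns "auto_clear" or "human_review" (hypothetical labels).
    """
    # High-risk cases always require human review, regardless of confidence.
    if alert["risk_tier"] == "high":
        return "human_review"
    # Confident false-positive dismissals are automated out of the human queue.
    if alert["confidence"] >= CONFIDENCE_THRESHOLD:
        return "auto_clear"
    # Everything else breaks the rules the algorithm was designed for,
    # so a human picks it up.
    return "human_review"

alerts = [
    {"risk_tier": "low", "confidence": 0.97},
    {"risk_tier": "high", "confidence": 0.99},
    {"risk_tier": "medium", "confidence": 0.61},
]
print([route_alert(a) for a in alerts])
# -> ['auto_clear', 'human_review', 'human_review']
```

Note that the high-risk branch comes first: no level of model confidence bypasses human oversight for decisions the organization has designated as high risk.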
Ross Dawson: What’s another different example?
Joanna Michalska: Fraud detection is another really good one where—
Ross Dawson: Because one of the things about fraud detection is there is an answer—as in, it is fraud or it isn’t fraud. You can get false positives and false negatives, but that’s kind of reductionist. There’s a whole array of decisions where you can’t necessarily say before the fact whether it’s a good decision or not. It’s interesting to look at these very different types of decisions, not just ones that can be very algorithmic because they’re data-based and there is a true or false. Many decisions don’t fit those parameters.
Joanna Michalska: No, that’s very true. Actually, what I’ve seen, especially recently, is that there are a lot of questions being asked by the board or executives when they get to the point where a decision isn’t easy or clear. They look at sets of metrics that do not make a lot of sense, and then the question becomes, “Who can explain to me how this decision was made, that this metric shows me X? Who in the organization can I go to, and how quickly, for them to explain that to me?” In my experience, especially recently, that’s a very difficult and uncomfortable question to ask and answer, because it’s not clear—especially when it comes to things that don’t have a clear accountability pathway, because more than one person is accountable. So the question is, is this a Risk Officer question, or is it a Data Officer, CTO, CSO—who actually is responsible?
In these instances, it’s particularly important to have the right accountability that is understood at that level—who is accountable for what part of the process? It’s not easy, because it is quite complex and creates a lot of challenging discussions. Very often, it depends on the organizational maturity and the level of AI adoption. What systems do we have? Do we understand what is an AI system in the first place? That part is not easy, it’s complicated, and it creates quite a lot of challenging discussions.
Ross Dawson: Well, it is difficult and complicated and challenging, but that’s not very useful. Let’s map the pathway. Give me a roadmap for an organization: we’re going to assess our decisions, rank them in order of priority or risk or uncertainty, apply AI, and put accountability in place for all of these. This might take us sixteen years, but we’re going to start somewhere.
Joanna Michalska: I think a really good example is the new EU AI Act, where there is a very clear starting point and clear requirements. For many organizations, that’s step one: what are we actually required by regulators to do? We look at what processes, systems, and outcomes we’ve got. That’s step one. Then, we look at which of our applications carry what level of risk, and we tier them. We assign the right executives for the right processes. First, we identify where we are today, then work with our compliance or risk officers to understand where we think we are versus where we are according to the regulation.
That regulation—the high-risk identification of those systems—is going live in August this year, so it’s a very pressured point for people to address. Once that’s done, there’s a clear inventory of the current state, a clear inventory of where we need to be, gap identification for which high-risk systems require transformation and to what extent, and then the right people need to be in the right places so the transformation roadmap is defined. There’s accountability for that transformation to occur, but often a lot of external advisors are invited to help. In that case, my work usually starts with an exposure review, where I speak to everyone accountable and get a view of where the organization is, maturity-wise, versus where it needs to be for the implementation deadline.
Then, there’s a very clear prioritization roadmap: what’s the impact and consequences for these processes or systems not to comply, and what are the next steps for compliance? Who needs to be in the process? What metrics need to exist? What’s the gap to the right level of maturity to demonstrate that we are compliant and can confidently talk to regulators, our organization, stakeholders, whoever, to demonstrate, “Here’s where we were or thought we were, these are the steps we’re taking, these are the people who are accountable, these are the decisions we’re going to make, and we’re going to demonstrate that we’ve taken them this way.” That’s how we design our journey.
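The roadmap described above (inventory the systems, tier them by risk, identify gaps, prioritize remediation) can be sketched as a toy classifier over an AI-system inventory. The tiering rules here are a loose simplification of the EU AI Act's categories, and the system attributes are invented for illustration; real classification requires legal analysis of the Act itself, not a lookup table.

```python
def risk_tier(system: dict) -> str:
    """Assign a rough EU-AI-Act-style tier to an inventoried AI system.

    The attribute names ("use_case", "affects_individuals") are hypothetical,
    and the tier logic is a simplification for illustration, not legal advice.
    """
    prohibited_uses = {"social_scoring", "subliminal_manipulation"}
    high_risk_uses = {"credit_scoring", "recruitment", "biometric_identification"}
    if system["use_case"] in prohibited_uses:
        return "prohibited"
    if system["use_case"] in high_risk_uses:
        return "high"
    if system.get("affects_individuals", False):
        return "limited"  # e.g. transparency obligations such as disclosure
    return "minimal"

inventory = [
    {"name": "CV screening model", "use_case": "recruitment"},
    {"name": "spam filter", "use_case": "email_filtering",
     "affects_individuals": False},
]

# Prioritize the transformation roadmap: high-risk systems first.
priorities = sorted(inventory, key=lambda s: risk_tier(s) != "high")
print([(s["name"], risk_tier(s)) for s in priorities])
# -> [('CV screening model', 'high'), ('spam filter', 'minimal')]
```

Even in this toy form, the shape of the exercise matches the roadmap: a current-state inventory, a tier per system, and an ordering that tells you which transformations must happen before the compliance deadline.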
All of this sounds very simple, but the initial assessment is always very complicated. When everyone goes through the process, in what I call a deep dive into documentation and governance structures, the outcomes very often show that governance is not mature. It often needs to change; even the metrics and the thresholds in place are immature and legacy. Ultimately, they apply to the old world, but not to the new world.
Ross Dawson: Well, every organization needs to evolve, and potentially rapidly. Do you look at strategic decisions, or the role of AI in strategic decisions, or other very complex, high-level decisions?
Joanna Michalska: Yes, and it’s interesting to see that there is what I would call a cognitive dissonance between where people would like to apply AI for strategic decisions versus the reality. AI is actually used much more for operationalization or speeding up optimization—very performative. How can we quickly improve performance? There’s a lot of discussion about it, and I see that people want to think about it, especially as boards are putting a lot of pressure on improving strategic ambition and creating a competitive advantage, which goes well beyond regulatory compliance. But it’s not very mature—let’s just put it that way. It’s much more integrated within improving operational performance.
Ross Dawson: Yeah, well, we might loop back to that. Part of your background is neuroscience, and we’ve been talking a lot about decision making. There’s been a lot of wonderful work over the last seven or eight decades on the role of human cognition in decision making. Tell me about the way you see this understanding of neuroscience being applied to cognition, work, and decision making in a world where we have AI as part of those processes.
Joanna Michalska: I think it’s a fascinating area of science, and we as humans, especially in this fast-paced environment, as leaders, really need to evolve our capability of not only managing organizations, but also leading the people side of things. Historically, the human brain and mindset take time to change; it’s not something that changes overnight, and it typically requires a trigger, which is usually not very pleasant for someone to take responsibility or ownership of that change.
Now, when you think about the acceleration of decisions and what happens around the organization—because when we think about business, evolving organizations, or being accountable to shareholders, regulators, or society—we can’t just remain the same. That means our mindset and our ability to become more human really matter. Emotional intelligence, relationship-building skills, recognizing the importance of trust, building psychological safety so people can take responsibility at every level of the organization, and having the courage to say, “This is not working, I’ve seen this problem, something doesn’t feel right, I’m going to escalate to the right person because I know who the right person is”—all of that really comes from personal qualities and owning those qualities that just cannot be replaced by machines.
There’s a gap between what machines are taking over in terms of processes and things that can be done easily, and, to my earlier example, that almost brings a higher weight and pressure onto us to become better—much more resilient, agile, responsible, and accountable. Those qualities, maybe we weren’t really owning to the same extent in the past because we were focused on performative activities much more. That will be required of us through all the external pressures, but also because we want to achieve better outcomes not just for organizations, but for broader society. That sense of responsibility for an impact that is much deeper and more long-lasting is very important. From a leadership perspective, it’s almost like role modeling becomes even more important for everyone else in the organization, and it creates a higher level of satisfaction, engagement, and level of happiness for everyone.
Ross Dawson: So, this is about personal growth in a way—becoming more, as you say, building these human qualities that we need so much in this world. But at the same time, very few people are not experiencing pressure or stress, not least from the pace of change through AI and many other things. There are countervailing forces: we’re being called to be more human, to bring out more of ourselves, but the context is extraordinary challenge. What are specific things that leaders or organizations can do to help people draw out those capabilities?
Joanna Michalska: I think there are a couple of things to start with. I would always say awareness is the first step, and leadership awareness of where the gap is and what’s actually required is very important. Integrity and honesty follow right after, because people know what’s happening—or even if they don’t know, they’ll fill in the blanks with probably not very positive things. Once fear starts to creep in, it erodes trust and confidence, and it also takes away from participation. No matter what transformation you’re trying to achieve in your organization, you need your people engaged.
Executives need to be honest about what’s happening. I’ve seen a lot of examples where leadership is not honest—they say, “Don’t worry, you’re not going to lose your job, it’s going to be fine,” while everyone knows their job will either be lost or significantly changed. That level of honesty is important: having a uniform communication strategy to communicate honestly to people and say, “Things will change. Things are already changing, but we will take care of you, and this is how it’s going to look.” I’m not saying to lie; you need to be appropriately honest and say, “Yes, there will probably be a reduction in roles, but this is the plan.” How do we communicate honestly to people and make sure they understand that if their job will be eliminated, how will they be supported to develop capabilities and skills to go into another role or do something else somewhere else? That’s a human thing to do—be honest and help people develop that capability.
The second part of that plan is to have, whether it’s AI literacy or AI retraining, whatever the organization decides to do to help people develop the skill set they don’t have. Organizationally, but also, as someone said at an event I attended recently, “What used to be a soft skill now really becomes a hard skill,” because that’s at the heart of everything. As more process-driven tasks are taken away by machines, those human skills will become very, very important and already are.
Ross Dawson: Yeah, and arguably, that’s one of the possible benefits of AI—it helps us to become more human, or develop our intrinsically and distinctly human capabilities. So, Joanna, where can people go to find out more about your work?
Joanna Michalska: They can reach out to me on LinkedIn, and I also have a website, ethicagroup.ai. I’m happy to connect on any topic related to what we’ve just discussed, especially executive authority, how we become more human, and how we can be at the center of what we can actually do within this very fast-moving environment. How do we, as executives and leaders, have more impact on changing this reality? Because, to your point earlier, if each of us doesn’t take that responsibility, nothing’s really going to change.
Ross Dawson: Indeed. Thank you so much for your time and your insights, Joanna.
Joanna Michalska: Thank you, Ross. Thank you for having me.
The post Joanna Michalska on AI governance, decision architectures, accountability pathways, and neuroscience in organizational transformation (AC Ep36) appeared first on Humans + AI.