
Felipe Csaszar on AI in strategy, AI evaluations of startups, improving foresight, and distributed representations of strategy (AC Ep32)
“You can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with.”
– Felipe Csaszar
About Felipe Csaszar
Felipe Csaszar is the Alexander M. Nick Professor and chair of the Strategy Area at the University of Michigan’s Ross School of Business. He has published and held senior editorial roles in top academic journals including Strategy Science, Management Science, and Organization Science, and is co-editor of the upcoming Handbook of AI and Strategy.
Website:
LinkedIn Profile:
University Profile:
What you will learn
- How AI transforms the three core cognitive operations in strategic decision making: search, representation, and aggregation.
- The powerful ways large language models (LLMs) can enhance and speed up strategic search beyond human capabilities.
- The concept and importance of different types of representations—internal, external, and distributed—in strategy formulation.
- How AI assists in both visualizing strategists’ mental models and expanding the complexity of strategic frameworks.
- Experimental findings showing AI’s ability to generate and evaluate business strategies, often matching or outperforming humans.
- Emerging best practices and challenges in human-AI collaboration for more effective strategy processes.
- The anticipated growth in framework complexity as AI removes traditional human memory constraints in strategic planning.
- Why explainability and prediction quality in AI-driven strategy will become central, shaping the future of strategic foresight and decision-making.
Episode Resources
Transcript
Ross Dawson: Felipe, it’s a delight to have you on the show.
Felipe Csaszar: Oh, the pleasure is mine, Ross. Thank you very much for inviting me.
Ross Dawson: So many, many interesting things for us to dive into. But one of the themes that you’ve been doing a lot of research and work on recently is the role of AI in strategic decision making. Of course, humans have been traditionally the ones responsible for strategy, and presumably will continue to be for some time.
However, AI can play a role. Perhaps set the scene a little bit first in how you see this evolving.
Felipe Csaszar: Yeah, yeah. So, as you say, strategic decision making so far has always been a human task. People have been in charge of picking the strategy of a firm, of a startup, of anything, and AI opens a possibility that now you could have humans helped by AI, and maybe at some point, AI is designing the strategies of companies.
One way of thinking about why this may be the case is to think about the cognitive operations that are involved in strategic decision making. Before AI, that was my research—how people came up with strategies. There are three main cognitive operations. One is to search: you try different things, you try different ideas, until you find one which is good enough—that is searching.
The other is representing: you think about the world from a given perspective, and from that perspective, there’s a clear solution, at least for you. That’s another way of coming up with strategies. And then another one is aggregating: you have different opinions of different people, and you have to combine them. This can be done in different ways, but a typical one is to use the majority rule or unanimity rule sometimes. In reality, the way in which you combine ideas is much more complicated than that—you take parts of ideas, you pick and choose, and you combine something.
So there are these three operations: search, representation, and aggregation. And it turns out that AI can change each one of those. Let’s go one by one. So, search: now AIs, the current LLMs, they know much more about any domain than most people.
There’s no one who has read as much as an LLM, and they are quite fast, and you can have multiple LLMs doing things at the same time. So LLMs can search faster than humans and farther away, because you can only search things which you are familiar with, while an LLM is familiar with many, many things that we are not familiar with. So they can search faster and farther than humans—a big effect on search.
Then, representation: a typical example before AI about the value of representations is the story of Merrill Lynch. The big idea of Merrill Lynch was how good a bank would look if it were like a supermarket. That’s a shift in representations. You know what a bank looks like, but now you’re thinking of the bank from the perspective of a supermarket, and that leads to a number of changes in how you organize the bank, and that was the big idea of Charles Merrill, and the rest is history.
That’s very difficult for a human—to change representations. People don’t like changing; it’s very difficult for them, while for an AI, it’s automatic, it’s free. You change the prompt, and immediately you will have the problem looked at from a different representation.
And then the last one was aggregating. You can aggregate with AI virtual personas. For example, you can create a virtual board of directors that will have different expertises and that will come up with ideas that a given person may not come up with. And now you can aggregate those. Those are just examples, because there are different ways of changing search, representation, and aggregation, but it’s very clear that AI, at least the current version of AI, has the potential to change these three cognitive operations of strategy.
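The virtual-board idea can be sketched as a small aggregation loop. This is only an illustration of the pattern, not the guest's implementation: each persona is a system prompt framing the same question from a different expertise, `ask_persona` is a hypothetical stand-in for a real LLM call (canned answers keep the sketch runnable), and the aggregation here is a simple majority rule.

```python
from collections import Counter

# Hypothetical personas: each is a system prompt that frames the same
# question from a different expertise.
PERSONAS = {
    "CFO": "You are a cautious chief financial officer.",
    "CMO": "You are a growth-focused chief marketing officer.",
    "CTO": "You are a technology-risk-aware chief technology officer.",
}

def ask_persona(persona_prompt: str, question: str) -> str:
    """Stand-in for an LLM call with persona_prompt as the system message.
    Canned answers are used here so the sketch runs without an API."""
    canned = {
        "You are a cautious chief financial officer.": "delay",
        "You are a growth-focused chief marketing officer.": "launch",
        "You are a technology-risk-aware chief technology officer.": "launch",
    }
    return canned[persona_prompt]

def board_vote(question: str) -> str:
    """Aggregate the personas' opinions with a simple majority rule."""
    votes = [ask_persona(p, question) for p in PERSONAS.values()]
    return Counter(votes).most_common(1)[0][0]

print(board_vote("Should we launch the new product this quarter?"))
```

As Felipe notes, real aggregation is richer than majority voting—you pick and combine parts of ideas—but even this minimal version shows how cheaply AI lets you multiply the number of perspectives in the room.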
Ross Dawson: That’s fantastic. It’s a novel framing—search, representation, aggregation. Many ways of framing strategy and the strategy process, and that is, I think, quite distinctive and very, very insightful, because it goes to the cognitive aspect of strategy.
There’s a lot to dig into there, but I’d like to start with the representation. I think of it as the mental models, and you can have implicit mental models and explicit mental models, and also individual mental models and collective mental models, which goes to the aggregation piece. But when you talk about representation, to what degree—I mean, you mentioned a metaphor there, which, of course, is a form of representing a strategic space. There are, of course, classic two by twos. There are also the mental models which were classically used in investment strategy. So what are the ways in which we can think about representation from a human cognitive perspective, before we look at how AI can complement it?
Felipe Csaszar: I think it’s important to distinguish—again, it’s three different things. There are three different types of representations. There are the internal representations: how people think in their minds about a given problem, and that usually people learn through experience, by doing things many times, by working at a given company—you start looking at the world from a given perspective.
Part of the internal representations you can learn at school, also, like the typical frameworks.
Then there are external representations—things that are outside our mind that help us make decisions. In strategy, essentially everything that we teach are external representations. The most famous one is called Porter’s Five Forces, and it’s a way of thinking about what affects the attractiveness of an industry in terms of five different things. This is useful to have as an external representation; it has many benefits, because you can write it down, you can externalize it, and once it’s outside of your mind, you free up space in your mind to think about other things, to consider other dimensions apart from those five.
External representations help you to expand the memory, the working memory that you have to think about strategy. Visuals in general, in strategy, are typical external representations. They play a very important role also because strategy usually involves multiple people, so you want everybody to be on the same page. A great way of doing that is by having a visual so that we all see the same.
So we have internal—what’s in your mind; external—what you can draw, essentially, in strategy. And then there are distributed representations, where multiple people—and now with AI, artifacts and software—among all of them, they share the whole representation, so they have parts of the representation. Then you need to aggregate those parts—partial representations; some of them can be internal, some of them are external, but they are aggregated in a given way. So representations are really core in strategic decision making. All strategic decisions come from a given set of representations.
Ross Dawson: Yeah, that’s fantastic. So looking at—so again, so much to dive into—but thinking about the visual representations, again, this is a core interest of mine. Can you talk a little bit about how AI can assist? There’s an iterative process. Of course, visualization can be quite simple—a simple framework—or visuals can provide metaphors. There are wonderful strategy roadmaps which are laid out visually, and so on.
So what are the ways in which you see AI being able to assist in that, both in the two-way process of the human being able to make their mental model explicit in a visualization, and the visualization being able to inform the internal representation of the strategist? Are there any particular ways you’ve seen AI be useful in that context?
Felipe Csaszar: So I was very intrigued—as soon as LLMs became popular, were launched—yeah, ChatGPT, that was in November 2022—I started thinking, there are so many ways in which this could be used. So myself and two co-authors, Hyunjin Kim and Harsh Ketkar, we wrote a paper, one of the initial papers on how AI can be used in strategy. It’s published in Strategy Science, and in that paper, we explore many ways in which AI could be used in strategy.
Of course, you can ask AI about coming up with answers to questions that you may have. You can also use AI to use any of these frameworks that have been developed in strategy. It was very clear to us that it was usable. Then the question was, how good are those uses? What’s the quality of current AI doing this type of task?
So what we did is an experiment where we compared the performance of AI to the performance of humans. In strategy, there are two types of tasks: one is to generate alternatives, and the other is to select alternatives. You have a problem—the first thing you want to do is have possible solutions, and then you want to be able to pick the best out of those. So we had two experiments: one where we measured the ability of AI to generate alternatives, another to select.
For generation, what we did is we got data from a business plan competition where people were applying with business plans that all had the same format. The important thing is that the first paragraph of that application had the problem—a problem that they thought was important. So we took all of those applications and removed everything except for the problem, and then we gave that problem to an AI and asked the AI, “Hey, complete the rest of the business plan.” So now we have business plans that are real, and the AI twins of those—business plans created by an AI that try to solve the same problem. Then we put both in a kind of business plan competition, where we had people with experience in investments ranking all of these business plans, and they didn’t know which ones were created by humans and which ones were created by AIs.
We looked at their evaluations at the end of the day, and on average, the ones that were generated by the AI were ranked a little bit higher—7% higher—than the ones that were generated by humans. So at least in this very specific context of business plan competitions, there’s potential. We’re saying, hey, AI could generate things at a level that is comparable to the people applying to this type of business plan competition. That has a lot of potential. We could use it in different ways.
The other part of this study was to measure the ability of AI to select strategies among strategies. There, what we did is use data from another business plan competition, where all of the business plans had been evaluated by venture capitalists according to 10 dimensions: how strong is the idea, how strong is the team, how strong is the technology, etc. Then we gave an AI the same rubric that the venture capitalists received and asked the AI to rank or grade each one of these startups according to these 10 dimensions. Then we compared how similar the evaluations of the LLM were to the evaluations of the venture capitalists, and we showed that they are quite similar—there’s a correlation of 52%. This, again, tells us that there is potential here.
An AI could do things that are quite similar to an experienced human evaluating this type of startup. A very interesting result there is that the correlation between two venture capitalists is lower than that 52%. So if you want to predict what a venture capitalist is going to say about your business, you’re better off asking an LLM than asking another venture capitalist.
Ross Dawson: Yes, which perhaps shows the broad distribution of VC opinions. So obviously, LLMs can play valuable roles in many aspects of the strategy process, but this brings us back to the humans plus AI role. There are many—again, a big topic—but rather than looking at them, comparing what humans and AI did, where do you see the primary opportunities for humans and AI to collaborate in the strategy process?
Felipe Csaszar: Yeah, yeah. So I think that’s a fascinating question, and my guess is that the study of the strategy process will completely change in the next 10 to 20 years. So far, all of the strategy process has been to study what happens when you have multiple people making strategy decisions. In the past, we studied things like devil’s advocate, or we have studied the role of changing the size of the group of people making decisions, or the consensus level required. But in the future, there will be AIs in this process that will have completely different bounds or capacities than humans. So we will need to learn what’s the best way of collaborating with them and including them into the strategic decision making process.
Today, we don’t know much about it. We are beginning to learn things, like the study I mentioned—hey, in this task, it seems to be better—but there’s so much that we need to learn. I am working on some things, but it’s still early.
Ross Dawson: Going back to the distributed representation—this is something where, of course, distributed representation can be in multiple people. Arguably, it can include human and AI agents as each having different representations. But this goes, of course, to the aggregation piece, where the aggregation is—you have a board of directors, group of executives, potentially a participative strategy process bringing more people into the organization.
What are the specific roles of AI in assisting or facilitating effective aggregation to form a cohesive strategy?
Felipe Csaszar: Yeah, so the truth is, we don’t yet know. There’s not enough research. We’re starting to think about it. We can see many uses, and I think what people should be doing now is running experiments to see when those add value and when they don’t. It will be different for different companies in different industries, so probably there’s no one solution that’s the same for everybody.
For example, one possible use in strategic decision making is predicting what your competitors would do. If I do this, what would be the most likely reaction of my competitor? That’s one. Another one is predicting consumers: if I launch this product with this set of characteristics, what would be the most likely response of my consumers? In strategy, something that has been very popular for the last 20 years is something called the Blue Ocean Strategy, which is a method to come up with new offerings, with new value propositions, but that requires a lot of creativity. With AI, you can automate part of that. At the end of the day, it’s a search process. You have to think about what would happen if I add this, or if I add this other thing, or if I increase this. Part of that can be automated—that would be another use.
Or if you have different proposals—in this other study, we show, hey, AI is good at evaluating, so if you have the right rubric, this can automate the evaluation, or can automate the first part of that evaluation so that you only have to spend your time among the really complicated, more sophisticated decisions or alternatives. There are many, many things that can be done at this point.
Ross Dawson: Which goes to, I think, one of the interesting points in your work—representational complexity. Some strategies are arguably simple; other strategies, you can call them more sophisticated, but they are more complex. The representation of complexity is greater.
There are two things that are required for that. One is, of course, sophisticated thinking, but also, because strategy in any organization involves multiple people, it requires that there is an ability for a number of people together to hold a hopefully similar or very similar representation of a quite complex topic. What are ways in which AI can be used to enhance that development of more sophisticated or nuanced or complex representations that can support a better strategy?
Felipe Csaszar: So that’s a great point. I have a paper from before this new round of AI called exactly that—representational complexity. There has been a long-standing discussion in strategy about when it’s better to use a simple representation, a complex representation, or something in between. We tried to clarify when each one of these applies.
But then came this new round of AI, and I think it changes things a lot. I talk a little bit about this in a chapter I uploaded recently—it’s called “Unbounding Rationality.” The key thing there is that humans—we have our own computer here, it’s the brain, and the brain has some constraints. One very important for strategy is the capacity of our working memory. There’s this famous paper from the 1950s called “The Magical Number Seven,” that we can hold in our working memory seven plus or minus two items—so between five and nine things we can keep at the same time in our mind. That’s why, for example, I think all strategy frameworks are very simple. There’s the five forces—fits within our working memory—or these typical two by twos, they have four quadrants—fits within our working memory.
But AIs don’t have that bound. They are not constrained by the same working memory constraint that we have. So I would expect that future frameworks will be much more complex, that representational complexity will increase because of AI. Of course, frameworks of the future won’t have a million things, because when you put too many things, you’re overfitting—it works well with things that happened in the past, but not in the future—but they will probably have more than five things.
Also, another reason for not having a million things inside a framework is that at the end of the day, you will still need to communicate frameworks. You will need to convince the other people in the organization, the ones that are implementing the strategy, that this is the right strategy. You will need to convince them, so you don’t want to have something that’s extremely complex. But my guess would be that the complexity of frameworks and of strategies will increase with AI.
Ross Dawson: So looking forward—you talked about 10 or 20 years. If we see the current pace of capability development of LLMs on a similar trajectory, where do you see the remaining role of humans as a complement to AI in shaping strategy? I think you mentioned this possibility of essentially AI forming strategy, but I think for a wide array of reasons, it will be human plus AI—humans will play a role as final decision maker or other things. So where do you see those fundamental human capabilities still being retained for the foreseeable future, as a complement to AI in strategy?
Felipe Csaszar: So I think that for the next 10, 20, maybe 30 years, humans will be really busy coming up with how to use AI—all of these experiments that we mentioned, people will be running all of those things in all different industries, and that takes a while. That will require human ingenuity and trying things and really understanding strategy and understanding the capabilities of AI. So I don’t see AI replacing human strategists in the very short term. On the contrary, because of AI, strategists will be more busy finding what are the best ways of using AI in their businesses.
I think 10, 20, or 30 years is very reasonable. If you think about the previous technological revolution, which I could say was the Internet—the technology for the Internet, we could say, existed since around ’94: the World Wide Web, the browsers, enough bandwidth to send email. Essentially all of the technology that supports internet business today was mostly in place in the mid to late ’90s. But the businesses, or people, ended up using all of those things 10 or 20 years after that, because it takes a long time for people, for strategists, to come up with the idea—for someone to come up with the idea of, let’s say, Netflix or eBay or PayPal or Facebook—all of those things, they take time for people to understand this is doable. Then it takes time to implement. Then it takes time for users to say, “Hey, this is useful.” There’s a lot of adaptation, and then there will be regulation. So the whole process takes a long time. I don’t think that businesses will change from one day to the next. It will be a relatively slow process that will take decades. When we look back in 20 years from now, we will see, “Hey, everything changed,” but every year we will see just a little bit of change, like what happened with the Internet. So I imagine that people designing strategies, implementing strategies, they will be very busy in the next 20 years.
Ross Dawson: So to round out, I won’t ask you to make predictions, but maybe some hypotheses. What do you think are some interesting hypotheses that will inform your research—not just next year, but in the years beyond? Where do you think are the interesting avenues that we should be not just exploring and researching, but where there is a valid and useful hypothesis?
Felipe Csaszar: Yeah, so many things, but one very important—I think that strategy will be more about making the right predictions. The role of foresight. It turns out that when you want to train a machine learning algorithm, you need to have some signal that informs how you train the system. It’s called the gradient, or the objective function. So in strategy, we will need to make that more central, and then think, what are the best ways in which you can use AI to make the right predictions? That requires measuring the quality of predictions. So you change this in the business, and this ends up happening. We want an AI to be able to do that. So coming up with ways in which you can measure the quality of decisions will become more important, so that we can train those AIs.
That’s one. And very related to that is, well, the thing that’s generating the predictions are representations, and then it’s coming up with those more complex representations that are better at making decisions or are better at discovering things that are hard for humans to discover. Those are the two main things. I think the future of strategy will be about finding ways of improving foresight and finding ways of improving the thing that creates that foresight, which are the representations. All of that will change what has been called the strategy process—how we make decisions in strategy.
Ross Dawson: So I just need to pick up on that point around prediction. One of the challenges with external predictions is that, then, as a strategist, you have to say, either I will build my strategy based on that prediction, or I question that prediction. I think there are alternatives or attribute probabilities to it. So even if a prediction machine gets better, it’s still very challenging, particularly cognitively, in terms of accountability for the strategist to incorporate a prediction where you don’t necessarily have all of the logic behind the prediction as a machine learning model to incorporate. So how can a strategist incorporate what may be a relatively black box prediction into an effective strategy?
Felipe Csaszar: Yeah, well, and here we are in the conjecture part of this interview. So my answer is in that spirit. I think there are two ways out of this. One is that we will ask for explainable predictions. There’s a whole area of AI called Explainable AI, which is exactly trying to do this—not just say what’s the best prediction, but why the AI is saying that’s the right prediction. So that could develop, and probably that will develop, because humans will question whatever the AI will predict. That’s one way.
The other is, imagine that the AI becomes very, very, very good at making predictions. Then at some point, it doesn’t matter if it can explain it or not—it’s just making very good predictions. It’s like, imagine you want to win at chess and you have this machine that can play chess very well. This machine wins at chess. You don’t need to exactly understand how that machine is making each one of those decisions. But if the machine is very good at it, and it’s consistently good at it, people will use it. In a sense, the market will decide. If this works better than a machine that provides an explanation for each one of the steps, people will just go with the one that’s making the right prediction.
Ross Dawson: I think there’s all sorts of other places we can go to from there, but that’s fascinating. So where can people go to find out more about your work?
Felipe Csaszar: Well, I upload all of my stuff to SSRN. So if you Google my name and SSRN, you will find all of my papers. In the near future, like in the next three months or so, I’ll have two things coming out. One is a Handbook of AI, written also with my co-editor Nan Jia from USC, that will have 20 chapters that will explore different ways in which AI will be affecting strategies—the Handbook of AI and Strategy, published by Elgar. And then around that same time, there will be a special issue of the Strategy Science journal where I’m one of the co-editors, which will be exactly about the same—about AI and strategic decision making. We already have accepted several of the papers for that special issue. Those papers will be pushing the frontier of what we know about AI and strategic decision making.
Ross Dawson: That’s fantastic. I will certainly be following your work—very highly aligned with the humans plus AI movement. And thank you for all of the wonderful research and work you’re doing.
Felipe Csaszar: Thank you so much, Ross. It’s been a pleasure.