Humans + AI podcast

Jeremy Korst on the state of AI adoption, accountable acceleration, changing business models, and synthetic personas (AC Ep30)

1/30/2026

“What we’re seeing now is that when we think about some of the friction and challenges of adoption, this isn’t a technology issue, per se. This is a people opportunity.”

–Jeremy Korst

About Jeremy Korst

Jeremy Korst is Founder & CEO of Mindspan Labs and Partner and former President of GBK Collective. He lectures at Columbia Business School, The Wharton School, and USC, and is co-author of the Wharton + GBK annual Enterprise AI Adoption Study, one of the most cited sources on how businesses are actually using AI. Jeremy also publishes widely in outlets such as Harvard Business Review on strategy and innovation.

Website:

mindspanlabs.ai

Accountable Acceleration:

LinkedIn Profile:

Jeremy Korst

What you will learn

  • How enterprise AI adoption has shifted from experimentation to ‘accountable acceleration’
  • The key role of leadership in translating business strategy into an actionable AI vision
  • Why human factors and change management are as crucial as technology for successful AI implementation
  • How organizations are balancing augmentation, replacement, and skill erosion as AI changes the workforce
  • The importance of intentional experimentation and creating case studies to drive value from AI initiatives
  • Early evidence, challenges, and promise of digital twins and synthetic personas in market research
  • Why a culture of risk tolerance, alignment across leadership layers, and clear communication are essential for AI-driven transformation
  • The emerging shift from general productivity gains to domain-specific AI applications and the increasing focus on ROI measurement

Episode Resources

Transcript

Ross Dawson: Jeremy, it’s wonderful to have you on the show.

Jeremy Korst: Yeah, hey, thanks for having me.

Ross Dawson: So you, I think it’s pretty fair to say, are across enterprise AI adoption, being the recent co-author of a report with Wharton and GBK Collective on where we are with enterprise AI adoption. So what’s the big picture?

Jeremy Korst: Yeah, let me start—now that I’ve reached this stage in life, in my career, and I look back over what I’ve done the last couple decades, it’s actually been at the intersection of technology adoption and innovation. I spent a couple of careers at Microsoft, most recently leading the launch of Windows 10 globally. I worked at T-Mobile, led several businesses there, and more recently, have been spending time really with three things.

One is through my consulting company, GBK Collective, working with some of the world’s largest brands on market research and strategies for consumers and products, working with academic partners who are core to the work we do at GBK—leading professors from Harvard and Wharton and Kellogg, you name it—but then also being very active in the early-stage community, where I’m an advisor and board member of several startups.

And so I’ve had this bit of a triangle to be able to watch technology adoption unfold both inside and outside the organization—whether it’s inside the organization, how people are using it effectively, or outside, how it’s being taken to market. So fast forward to where we’re at with Gen AI. It’s been fascinating to me, because all of those things are happening in all of those communities. Where we started with the Wharton report was three years ago. Stefano and Sonny, two of the co-authors, and I were literally just having a conversation right after the launch of ChatGPT. And of course, there were all the headlines and all these predictions about what was going to happen and what could happen. And we said, well, wait a minute, why don’t we actually track what actually happens?

And so therein started the three-year program. It’s now an annual program sponsored by the Wharton School, conducted by GBK—my research company—that looks specifically at US enterprise business leader adoption. We decided to focus on that audience because we believe they were going to be some of the most influential decision makers around budgets and strategies as this unfolded, so that’s been our focus. We’re now in our third year, and there’s lots to dig into.

Ross Dawson: So the headline for this year’s report was “accountable acceleration,” and I’ve got to say that that phrase sounds a lot more positive than what a lot of other people are describing with Gen AI adoption. “Accountable” sounds good. “Acceleration” sounds good. So is that an accurate reflection?

Jeremy Korst: I think it is. And I’ll say that, yeah, the Wharton School, with three co-authors—Sonny, Stefano, and myself—we all have a relatively positive perspective and perception of what is and could be the impact of Gen AI. Now, we don’t try to dismiss some of the concerns and challenges. They’re there, they’re realistic, and should be considered, but we have a generally positive perspective going into this. As we’ve looked at the three years that we’re at now, we’ve moved from the first couple of years, which were more around experimentation and maybe hype, to where we started seeing accountability—businesses really looking at this as a potential tool, not only to drive efficiencies across their businesses, but also perhaps new ways of growth.

For example, one of the things that we added this year, because we expected to find more of this accountability starting to unfold, is ROI as a measure. And we were frankly surprised at the level we saw of organizations reporting both that they were tracking ROI and that they were seeing indications of early positive ROI in that work. That’s one of the areas that lends itself to the title—when we started to see some of that accountability come into play.

Ross Dawson: So one of the stats being, I think, 72% formally measure Gen AI ROI, and 74% report positive ROI, which is a bit higher than some other things.

Jeremy Korst: That’s right. I’m glad you clearly read the report, thank you. We intentionally decided to take a broad measure of ROI at this stage of the adoption cycle. While we were sponsored by Wharton—I’m a Wharton grad, and I’m on the board at the Wharton School—we very much would love to have hard measures of ROI, and so we yearn for that. But at this stage of the adoption cycle, what’s maybe even more important is the perception of business leaders on the returns and progress they’re seeing on their initial investments, because that’s how they’re going to evaluate this next stage of investment as we start scaling across the enterprise.

Ross Dawson: So, one of those three themes, I guess, from the report—one was that usage is now mainstream, another is this idea of measuring value, and the third was digging into the human capital piece, where I think there are a number of interesting aspects. One is, I suppose, how leadership use of AI correlates with how businesses are positioned. But first, let’s dig in a little bit more to some of the other aspects of that. At a high level, this is a Gen AI technology, but it’s implemented in organizations by people. So it is more about people than technology, ultimately. What are some of the things that were highlighted for you in looking at the people aspect of change?

Jeremy Korst: Yeah, the people aspect has always been core to this work, and some of the work I do advising companies in this space. One of our co-authors, one of my HBR co-authors, Stefano Puntoni, is a social scientist who comes from a psychology background and has studied for his entire career the intersection of people and technology. I’ve been in the trenches, watching and learning about the intersection of people and technology from my roles. So this has been near and dear to our hearts.

As we suspected from the early days, and what has definitely unfolded, what we’re seeing now is that when we think about some of the friction and challenges of adoption, this isn’t a technology issue, per se. This is a people opportunity—from whether strategies are being translated effectively throughout the ranks into a vision, to some of the challenges middle managers are having. We’ll talk about that here, because we found some of that in our study—as well as some of the real concerns that others, like the Pew organization, have studied around the workforce, of course. So we’ve got this really interesting mix of hype and concern that translates itself across the adoption friction.

That’s definitely been a lens that we’ve been trying to look at through our purview, to understand, particularly from a leadership perspective, what those perceptions and issues may be. For instance, one of the things that we’ve looked at for three years is how business leaders report that they believe Gen AI will either enhance or replace their employees’ skills, and we’re seeing a mix of both. But we’re happy to see that consistently over the course of our three-year studies, now almost 90% of leaders are saying that they believe AI does and will enhance their employees’ skills, while about 70% consistently have raised concerns—or not necessarily concerns, but say—that it will replace some employee skills.

This year, we had another question about skill atrophy. It’s like, okay, so we understand that you have perceptions that this is going to enhance employee skills but maybe replace others. What’s your worry about skill atrophy, about your employees’ skill proficiency? And 43%, just under half of leaders, reported they were concerned about declines in employee proficiency. The takeaway there, and this becomes to me like a strategy and leadership opportunity, is this Goldilocks issue. Yes, it can enhance and replace certain skills, and leadership for a given organization needs to be really crystal clear about what skills it should replace versus which skills it won’t. That’s one of the things we talk about a lot in our advisory and other work as well because of this type of dynamic.

Ross Dawson: Yeah. Well, if you take those stats—89% seeing the potential for augmentation, 70-odd percent saying it can replace skills, and 43% saying that it can erode skills—well, there’s obviously quite a few that answered yes to all three of them, which makes sense, because all three can be true, right?

Jeremy Korst: Exactly. That’s getting it right. And this is where, fast forwarding, it comes back to the need for leadership. We’re seeing a rising need in this age of AI—not only for people with AI skill sets, per se, but also for change management and strategic leadership skills that can help evaluate these situations and make choices, be choiceful: “Hey, AI could perhaps replace some of these skills within my team or within my workforce, but because of my strategy, because of the assets I need, I’m not going to let it,” just as an example.

Ross Dawson: So, have you been able to see any AI implementation or adoption journeys or approaches or roadmap styles that have been more effective than others, or anything you’d be able to pick out around what has really been correlated to a positive adoption, flowing through to value—to, obviously, financial value, but also to the organization?

Jeremy Korst: So I think what we’ve seen, or I’ve seen in the work in this space, is really a combination of top-down support with bottom-up action. There needs to be a vision. The first thing to me is the organizations that have been able to at least translate their strategy into an AI vision. I’m carefully selecting my words there because I don’t believe organizations need an AI strategy—hopefully they already have a strategy, and what AI is, is a really powerful tool to help them achieve that.

Once they’ve translated that such that leaders within the organization can look at that vision, understand the company strategy, say, “Okay, now I can translate that to my team,” and a lot of teams we work with—in the marketing organization, marketing analytics, marketing research—can see, “Okay, I understand that now, and I also have leadership support for experimentation.” Now, I’m going to work within my team, particularly early adopters on my team, and what I’ve seen being really successful is letting them loose—not just necessarily going out and experimenting with everything and just throwing stuff against the wall and seeing what sticks, but more intentional experimentation. Saying, “Hey, based on strategy, based on that vision, based on what we need to get done in our organization, where do we think there are some quick wins that we can go out and execute?” That seems to work really well.

Of course, not all those things work, but when they do, you get excitement within that team—you start getting lighthouse examples of wins that spread like wildfire. I’ll give you an example. I had a person in one of the startups I work with who’s on the sales side—he’s a sales engineer, and he’s always been an early adopter, but we also had the right strategy and vision and leadership support in place. Henry—I actually wrote about Henry in my second-to-last HBR article—was an early adopter of Google’s NotebookLM. Henry went out and built notebooks for the personas the company was focused on—this happened to be a cybersecurity startup, so enterprise cybersecurity personas—and then started using those personas to rehearse his sales pitches and get feedback on how he should change his language, how he should promote certain things, and so on. He got really good at it, and he started demonstrating it to the rest of the team. It caught on like wildfire, because of that very basic experimentation in line with strategy and vision, and then showing early success. That’s an example.

Ross Dawson: So you’re familiar, I think, with the broad “humans plus AI” thesis—as in, human-AI to augment humans, human first, but AI to be able to create a complement. Part of that obviously flows through that it’s not just individuals using AI to augment them, but it changes essentially the way work is done inside organizations, where some tasks or skills are being replaced by AI, others are being augmented. Then it requires a new landscape of what may be described as redesigning work. That requires vision, in terms of being able to envision what that future stage for the organization might be, and the leadership to be able to guide that. What proportion, or where are you seeing the difference with those organizations that are deliberately and intentionally redesigning work, as opposed to just applying AI to their current organization?

Jeremy Korst: Yeah, we talk about this. I collaborate with another of my HBR co-authors, Olivier Toubia at Columbia. He has an AI for business class that I lecture in, and this is a topic we discuss in that class. The way you hit on it is that the first phase, which we’re frankly still deep in, is bringing AI to work. So, how can we, within our existing processes and flows, use copilots and other types of augmentation capabilities to make us more efficient and effective in our current workflows? We’re still in the early stage of that adoption. The OpenAI report data came out and showed more of that type of adoption. Our report showed a lot of that productivity adoption, and frankly, there’s a lot more to do there.

But what you hit on is, I think, some of the BCG and McKinsey work that shows that if we look at AI laggards versus AI leaders, it takes all kinds of different combinations of things. But one thing it definitely takes is around changing workflows and maybe even changing organizational design. This is more of the thinking—how to put AI to work for me, and what does it mean to have autonomous agents, what’s the management and governance structure for that within a team? In our world, you and I talked previously about one of the recent papers we’d written about using digital twins and synthetic personas in the marketing research world, in the product development world. These approximations of human input not only have the promise to make existing workflows more efficient, but also dramatically change the workflows overall and add new workflows to the organization.

Frankly, the organizations that we’re working with are really at early stages of that, and it’s still much more in the experimentation phase, from what we’ve seen, versus scaled adoption of brand new workflows to take advantage of these technologies. But I will say, one of the other data points we saw this year is that 30% of the overall technology budget for AI for these organizations that we surveyed—over 800 business leaders—was in internal R&D. That’s just an indication of the level of investment that organizations are starting to put to this notion of more specialized workflows, maybe more vertical solutions, but internally specialized systems is kind of what we’re talking about.

Ross Dawson: So, I mean, you probably don’t have the data on it yet, but do you have any hypotheses around what supports or is correlated with that redesign of work inside organizations? What are the front runners, what is enabling them to do that effectively? Because this is experimentation, because you are redesigning work, these are experiments. You can’t know that it’s going to work, and so it does require that mentality of experimentation, but you have to do it in order to be able to find the future structure and architecture of the organization. So is there anything which you have been seeing which gives any hints or clues as to what are promising pathways there?

Jeremy Korst: Well, I think part of it is, I’m a big believer in the technology adoption curve, which has been around since the mid-50s and was popularized by Geoffrey Moore in “Crossing the Chasm.” I’ve seen it both in internal organizational technology adoption and most definitely on the go-to-market side, and I think it plays out at the organizational level too. Those organizations that tend to be leaders, that have a more entrepreneurial, experimental, intentionally risk-taking type of culture, are going to be ahead of the game in this, because they’re willing to take that risk—and by the way, that’s just natural, that’s what we should expect. Other companies will take their lead, see the case studies, and those with lower risk tolerance will begin to adopt.

I think that’s one of the reasons culture is so important—having a culture of early adoption, a little bit higher risk tolerance, and experimentation is super important. But also, of course, the other thing we already talked about: having a clear strategy and an AI vision of how AI fits in that strategy is imperative. But in between, we’ve got the middle management layer, and this is where some findings came out of our study this year around the disconnect between executive leadership and middle management. Things like perception of ROI, where middle managers did not have as rosy a perception of current ROI as executive VP-plus leaders. They also didn’t have as optimistic a view of how their company stood compared to other companies on AI adoption—meaning how far ahead they were. Executives tend to have a rosier picture of that; middle managers tended not to.

This came across in several other factors. To me, you’re not going to be able to have that organization that can be adaptive to the level that you and I were just talking about—about truly changing workflows, truly changing business models, being more innovative—if you don’t have all of those components. The strategy-vision connection is clearly communicated, you already have a culture of experimentation, intentional experimentation, adoption, and then, most definitely, that middle leadership level that is all aligned and focused, empowered, and in line with all of those things.

Ross Dawson: So, you mentioned it a moment ago. You have this wonderful paper just out recently in Harvard Business Review on how Gen AI is transforming market research—basically saying, how does it change what’s currently done, and how does it create new ways of doing things? One of the interesting things there is, you did a study in which more than 40% are saying they’re already experimenting with digital twins. One of the questions is, of course, it’s a lot easier to ask a digital twin than a real person, but it may not be exactly the same.

Jeremy Korst: Currently, that’s what the survey would say. I do aspire to the day when I have my digital twin that can answer surveys on my behalf so I don’t have to, but we’re not there yet. We’ve been watching this area. Stefano and I, along with Olivier Toubia from Columbia—Olivier has been, in part, leading an initiative at Columbia that’s an open source, public project that Columbia is invested in around digital twins. They have impaneled a couple thousand actual humans, and they’ve had a course of survey waves where they’ve collected a lot of real data—behavioral, all kinds of other survey data—and then been using those data to try to construct digital twins and understand their effectiveness, etc. It’s been part of the work that we’ve been doing. He’s been doing this and brought it into our HBR work.

We decided to look at the world of market research because several people predicted this would be one of the earliest and most impacted areas, where AI can bring a combination of efficiency and value. Companies spend hundreds of millions of dollars—and venture firms like Lightspeed and Andreessen invest billions of dollars—in traditional market research. I’ve been a consumer and a provider of market research; it can be slow, expensive, and frustrating, because organizations want more real-time insight. The promise of AI to deliver that is compelling.

I would say we’re not there yet. What we’re seeing in the Columbia work—we talk to a bunch of startups, talk to a bunch of venture capitalists, and of course practitioners on the brand side who are using this—it’s all very, very early days. But the promise is there. What I hope, and I think I can speak for my co-authors as well, is that rather than replacing a bunch of market research—because if you’re going to make a bet-the-business type decision, I know I would want to talk to some real people. I may hope my competitors don’t, but I sure want to talk to some real people. But then I also know there’s all kinds of decisions that are made within organizations today that, because of budget, timing, or otherwise, don’t get the benefit of outside-in feedback. The decisions are made without having some of this feedback.

Where I’m excited about things like digital twins and synthetic personas is in how we can inform the longer tail of product design, user experience, and other strategy decisions that would otherwise have had no outside feedback, or at least limited outside feedback. This can perhaps now provide all kinds of additional insight into that process, versus replacing the strategic work that companies really need to do, need to invest in, and need to take the time to do thoroughly.

Ross Dawson: Yeah, I think that’s—you know, it’s not just market research. Across the board, Gen AI is able to enable things that would not otherwise have happened. That’s very much the case with coding—think of how many software projects are happening now that never would have happened without Gen AI.

Jeremy Korst: Right now, spending the weekend, I’ve got Antigravity, I’ve got Codex, I’ve got Claude Code. I have never in my life coded—outside of writing BASIC programs on my Commodore 64, and I’m going to age myself there. I’ve been in technology most of my life, and I’ve never coded. I am now building prototypes, and if I would have had this capability throughout my career—it’s mind-boggling. So what you just touched on is, to me, so exciting: the ability for a non-engineer to actually demonstrate their ideas and come up with new ideas is extremely powerful.

Ross Dawson: Yeah, yeah. I think that, you know—so just, you know, talk about Microsoft, talk about coding, but that’s across the board. Another great example is healthcare advice in disadvantaged countries and regions, you know, where there’s less in terms of possibilities.

Jeremy Korst: Healthcare advice in developed countries, where the GDP and cost is going up. The opportunity there is tremendous. So, yes.

Ross Dawson: So, just going back to the market research—I noticed that 40% currently use digital twins, and 42% intend to soon. There was a study last year from a group at Stanford where they had two-hour interviews with people, and they managed to get over 90%, if I recall, correlation of responses with actual humans. But coming back to what you said—okay, they’re not there yet. And yeah, we also will never get to the point where we don’t bother to talk to people anymore. But what’s it going to take to make these digital twins more valuable for this market research?

Jeremy Korst: Yeah, on that evidence—I think the Stanford study showed about 88%, and by the way, that’s similar to what some of the Columbia work has shown. Now, averages can be dangerous, so when we start dissecting that into what types of questions, from which audiences, etc., the numbers start declining in accuracy on many fronts. That’s the work—like I said, it’s entirely open source at Columbia. So the Columbia Digital Twins Project—I’d suggest anybody go out and read the research. The data are all available; you can play with it yourself, because Olivier and team are trying to advance this topic and the thinking broadly, versus building proprietary models.

One of the challenges is around variance. While we may be able to get point estimates for a population that tend to be reasonably accurate, when we start looking—particularly in marketing and product strategy—variance is extremely important: trying to understand different populations and not only the average estimate, but the variance within that sample. This is where we’re seeing, just as an example, digital twins having more of a challenge in trying to recreate actual human response.

Olivier is the statistician, so I’d have him speak much more eloquently about the true statistical science behind that, but from a decision-making perspective, for those things where perhaps, if we want to have an aggregate point estimate for a population, like we write about in the article, there is some that may be useful today to be able to use like that. But if we start trying to get into certain subsegments and trying to understand variation, that’s where today, and the current technology, at least, needs some additional work. And by the way, that’s where a lot of the decisions are made in the strategic world.

So we’re optimistic about this and where it’s going. I just also want to caution—with the clients we’re working with on this, it’s about experimentation. We very much recommend that organizations in the marketing research, product, UX world, etc., should be experimenting with these technologies: run them side by side with existing programs and existing surveys to test, map, and understand where they work and where they don’t for that company’s own individual needs. But in terms of scaling broadly, you really need to pick the right scenario, and you find that right scenario through exactly that type of experimentation and evaluation.
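[Editor’s note: the side-by-side validation Jeremy recommends can be sketched in a few lines of code. This is a hypothetical illustration, not from the study—the response data, segment, and the `compare_panels` function are invented for the example—but it shows the pattern he describes: synthetic panels can match a population’s mean while compressing its variance.]

```python
from statistics import mean, stdev

def compare_panels(human, twin):
    """Compare a human panel and its digital-twin counterpart on a
    1-5 survey scale: how far apart are the means, and how much of
    the human spread does the twin panel reproduce?"""
    return {
        "mean_gap": abs(mean(human) - mean(twin)),
        # spread_ratio < 1.0 means the twins are more homogeneous
        # than the real respondents
        "spread_ratio": stdev(twin) / stdev(human),
    }

# Hypothetical purchase-intent responses (1-5) from matched panels.
human_panel = [4, 5, 3, 4, 2, 5, 3, 4]
twin_panel = [4, 4, 4, 4, 3, 4, 4, 4]

result = compare_panels(human_panel, twin_panel)
print(result)  # the means nearly agree; the twin spread is far narrower
```

Under these made-up numbers the mean gap is small (about 0.13 on a 5-point scale) while the twins reproduce only about a third of the human standard deviation—the variance compression Jeremy identifies as the current weak spot for subsegment analysis.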

Ross Dawson: So just to round out, coming back to the beginning about enterprise AI adoption—this report, which was on 2025, was “accountable acceleration.” So where does it go from here? What do you see? What do you hope for? What do you think is the pathway in how AI is used in organizations to create value in the coming year or two? That’s already a long way out.

Jeremy Korst: Yeah, right. Well, to me, it’s a call to action to leadership today—leadership to make sure that there is that alignment of strategy and vision, that they’re getting the middle management team on board, and moreover, being excited about this. I think organizations—and we’re already seeing this happen with different CEOs announcing different things about, “Am I going to be an efficiency-focused organization, or am I going to be a growth-focused organization?”—help their organization understand which path they’re taking, so that the rest of the organization can get behind them.

Then, as we look forward, one of the things I’m excited to evaluate in the study and watch is the move from horizontal productivity and efficiency applications and use cases to more domain-specific and function-specific use cases. We’re starting to see evidence of that in the study, and we’re expecting to see more of it next year. Particularly, like I said, we’ve seen evidence of investments in internal R&D growing substantially. And like we all yearn—at least, I can’t speak for everybody, but a lot of us yearn—for true ROI measurement and evidence, I think we’ll see that in the coming year and years. And overall, hopefully we’ll see evidence of companies—particularly perhaps smaller, nimble companies—doing amazing new things they wouldn’t have been able to do before with this potentially fantastic technology, competing in new ways, delivering in new ways, etc.

Ross Dawson: Fantastic. Well, exciting times. It’s nice that you’re keeping on the pulse for all of us.

Jeremy Korst: I don’t think any of us are on the full pulse, but I’ve at least got a little bit of a finger on it.

Ross Dawson: Thanks so much for your time and your insights, Jeremy. I’ll be continuing to follow your reports and your work.

Jeremy Korst: All right. Thanks, Ross. Pleasure being on.
