The Innovators Studio with Phil McKinney podcast

R&D Spending Is the Most Misleading Number in Business

Every public company's R&D number is a lie hiding in plain sight.

Not because anyone falsified it. Because the number was never built to tell the truth. It was built to satisfy an accounting standard written in 1974. And for fifty years, boards, analysts, and CEOs have been making billion-dollar innovation decisions based on a number designed by accountants to solve a different problem entirely.

Here's what makes this genuinely strange. The real number exists. The government has been collecting it from every major US company for decades. It would answer the question every innovation leader and investor actually needs answered. And it is locked away by federal law. Confidential. Never published. Never seen by the people who need it most.

It's sitting in a federal database right now. And there's a way to estimate it for any public company, without asking anyone's permission.

I know it exists because I spent years building it from the inside.

Why the R&D Signal Was Blurry

When I was running innovation at HP, we discovered this problem firsthand. We had found a correlation between R&D investment and gross margin that held up across decades of HP history. Better than anything Wall Street was using. But the signal was blurry. None of us could figure out why.

The answer came from a question someone on the team asked almost as an aside.

What if R&D isn't one thing?

Research and Development Are Not the Same Thing

Think about what actually lives inside a typical R&D budget.

There's a team somewhere investigating whether a new approach could enable a capability that doesn't exist yet. No product defined. No spec written. Asking whether something is even possible.

And there's a team building the next version of a product that ships in eighteen months. Spec locked. Timeline set. Engineering executing against a defined target.

Both show up on the same line in the budget. Both get called R&D. Both count equally toward the number that gets reviewed every quarter.

They are not the same thing.

One is Research. The other is Development.

Research is the work you do when you don't yet know what you're building. The output is understanding. New knowledge that might enable future products nobody has designed yet. You can't know exactly what you'll find. If you already knew, it wouldn't be research.

Development is the work you do when you know exactly what you're building. The spec exists. The product is defined. The question isn't what to make. It's whether it can be made, on time, at cost, at quality.

One creates the future. The other delivers the present. And for fifty years, every public company in America has been required to report them as one indistinguishable number.

When we split the HP data along that line, Research on one side and Development on the other, the signal sharpened immediately. Research spend, measured against gross margin three to five years later, was a meaningfully stronger predictor than the combined number had ever been.

The blur hadn't been in the gross margin data. It had been in the R&D number itself. Two fundamentally different things, averaged together, producing a number that looked precise and predicted almost nothing.
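To make that concrete, here is a minimal sketch of a lagged-correlation check like the one described above. The figures are invented placeholders rather than HP data, and pandas' built-in correlation stands in for whatever model the team actually used; only the three-to-five-year lag comes from the episode.

```python
# Minimal sketch of the lagged-correlation idea, with invented numbers.
# A real analysis would use audited per-year Research and Development spend.
import pandas as pd

df = pd.DataFrame({
    "year":         range(1995, 2005),
    "research":     [120, 135, 150, 140, 160, 175, 170, 185, 190, 200],  # $M
    "development":  [480, 500, 520, 530, 560, 580, 600, 610, 640, 660],  # $M
    "gross_margin": [0.31, 0.32, 0.33, 0.32, 0.34, 0.35, 0.35, 0.36, 0.37, 0.38],
})
df["rd_total"] = df["research"] + df["development"]

# Compare each spend series against gross margin 3-5 years later.
# The claim being tested: Research alone predicts better than the blend.
for lag in (3, 4, 5):
    future_margin = df["gross_margin"].shift(-lag)
    print(f"lag={lag}y  "
          f"research r={df['research'].corr(future_margin):+.2f}  "
          f"combined r={df['rd_total'].corr(future_margin):+.2f}")
```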

But splitting R from D at the company level was only the beginning. The model was still lying to us. Just more quietly.

Why Company-Level R&D Splits Still Mislead

Even with the split, something was still soft. HP wasn't one business. It was dozens. Printers, PCs, servers, software, each running on different timelines, different technology cycles, different competitive dynamics.

What if the R/D split meant something different depending on where it was applied?

We pushed it to the product line level. Then further, to the platform level within product lines.

Printers were the clearest example.

HP's printer business wasn't one story. There were platforms built on established technology. Mature ink systems, proven print head chemistry, products that had been shipping for years. And there were platforms built on genuinely new core technology. New chemistry. New mechanisms. New approaches to fundamental problems that nobody had solved yet.

Research investment by platform told a completely different story than Research investment by product category. The Research going into new technology platforms had a completely different relationship to future margin than Research going into mature platforms. Different time horizons. Different risk profiles. Different margin implications years down the road.

Laptops told the same story. A traditional consumer laptop line and a high-performance portable workstation weren't the same investment. One was Development-heavy. Defined product, known market, engineering executing against spec. The other had genuine Research behind it. Unsolved thermal problems, new form factor constraints, and materials questions that hadn't been answered yet.

When a single R&D assumption is applied across all of that, treating every dollar the same regardless of what it actually does, the signal disappears into the average. Peanut butter across the portfolio.

The model only got honest when it got specific. Research by platform and Development by platform, matched against the margin performance of those specific platforms years later. Which platforms were building future margin? Which ones were running on margin that past Research had already bought?
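A rough sketch of what getting specific looks like at that grain, under invented platform names and numbers. The point is the shape of the table: one row per platform per year, Research and Development tracked separately, matched against that platform's own margin years later.

```python
# Hypothetical platform-level table: the grain the episode argues for.
import pandas as pd

rows = [
    # platform,       year, research, development, margin_3y_later
    ("mature_ink",    2006,       10,         220,            0.28),
    ("mature_ink",    2007,        8,         230,            0.27),
    ("new_chemistry", 2006,       90,          60,            0.41),
    ("new_chemistry", 2007,      110,          70,            0.44),
]
df = pd.DataFrame(rows, columns=[
    "platform", "year", "research", "development", "margin_3y_later"])

# Research share of total R&D, per platform per year.
df["research_share"] = df["research"] / (df["research"] + df["development"])

# Which platforms are building future margin, and which are coasting
# on margin that past Research already bought?
print(df.groupby("platform")[["research_share", "margin_3y_later"]].mean())
```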

We could see it because we were inside the company. The question is whether anyone on the outside could ever see the same thing.

The R&D Data the Government Collects and Won't Release

Outside the internal budget process, everyone sees the same thing: a single line on the income statement.

The US government recognized decades ago that the combined R&D number was analytically useless. So they built a system to collect the real one.

The National Science Foundation runs a survey called the Business Enterprise Research and Development survey. The BERD survey. Every year, roughly 47,500 US companies are required to report their R&D spending broken into three categories: basic research, applied research, and experimental development. The split that every board and every investor needs to see. Mandatory. Collected. Verified.

And then locked away.

The firm-level data is confidential under federal law. The NSF publishes only industry-level aggregates. So every company fills out this survey and reports its real R/D split to the government. That data sits in a federal database. And the boards, investors, and analysts who need it most cannot access it.

Researchers at Northwestern and Boston University were given rare access to that confidential data. What they found is striking. When companies face financial pressure and cut R&D, they don't cut Development. They cut Research. Almost entirely. Development barely moves.

Every earnings squeeze. Every activist campaign. Every cost optimization program. Systematically targeting the one part of R&D that builds future margin. And because the combined number barely moves, nobody on the outside sees it happening.

That's not a coincidence. That's the accounting standard doing exactly what it was designed to do: produce one clean number for the income statement. It was never asked to protect the future.

How to Estimate the Research-to-Development Split Without Inside Access

So what can actually be done without access to the locked data?

More than most people realize.

Step 1. Find the industry baseline. The aggregate BERD data is public at the sector level. Ask an AI tool for the Research-to-Development ratio for the relevant industry. That's the benchmark. Everything else gets measured against it. A company spending 8% of its R&D on Research in an industry where the average is 25% is telling you something the combined number never would.

Step 2. Look at the gross margin trend compared to peers. Gross margin over time is the most honest external signal of Research health. A company with a declining margin relative to peers, while reporting flat or growing R&D spend, is almost certainly shifting the mix toward Development. The math works in the other direction, too. An AI tool can pull this comparison for any public company in minutes. This is exactly the signal that was invisible at HP until it was too late.

Step 3. Look at patent trends compared to peers over time. Patents are an imperfect but useful directional indicator. Not because more patents always means more Research. It doesn't. But a sustained decline in patent output relative to peers, alongside flat R&D spend, suggests the investment is maintaining existing products rather than creating new knowledge. Combined with the gross margin trend, it starts to triangulate where the split actually sits.

None of these three steps requires access to an internal budget. All of them can be done in an afternoon with public data and an AI tool. Together, they produce a working picture of the R/D split that the income statement was never designed to reveal.
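Here is one way the three checks might combine into a single rough screen. Every input below is a placeholder: the baseline, the trends, and the thresholds are assumptions you would replace with the public BERD aggregates, the filings, and the patent counts the steps describe.

```python
# Combining the three external signals into a rough directional read.
# All inputs are placeholders for publicly gathered data.

industry_research_share = 0.25   # Step 1: sector-level BERD baseline
company_research_share  = 0.08   # estimated share of R&D going to Research

margin_trend_vs_peers = -0.020   # Step 2: annual gross-margin drift vs peers
patent_trend_vs_peers = -0.10    # Step 3: annual patent-output drift vs peers

signals = {
    "research share well below industry baseline":
        company_research_share < 0.5 * industry_research_share,
    "gross margin eroding relative to peers":
        margin_trend_vs_peers < 0,
    "patent output declining relative to peers":
        patent_trend_vs_peers < 0,
}

flags = [name for name, fired in signals.items() if fired]
print(f"{len(flags)}/3 signals suggest the mix is shifting toward Development:")
for name in flags:
    print(" -", name)
```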

What the R&D Split Revealed at HP That No One Outside Could See

When Mark Hurd took over as CEO in 2005, HP was spending $3.5 billion on R&D. Roughly 4% of revenue. By 2009, his last full year as CEO, that had dropped to $2.8 billion. Revenue had grown significantly over that period, so the percentage had fallen further still, to under 2.5%. Both the dollar amount and the ratio were falling even as the company got larger.
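Those percentages can be back-checked from the figures in this paragraph alone, without any outside data:

```python
# Back-checking the implied revenue from the figures quoted above.
rd_2005, pct_2005 = 3.5e9, 0.04    # ~4% of revenue
rd_2009, pct_2009 = 2.8e9, 0.025   # under 2.5% of revenue

print(f"implied 2005 revenue ~ ${rd_2005 / pct_2005 / 1e9:.0f}B")   # ~ $88B
print(f"implied 2009 revenue >  ${rd_2009 / pct_2009 / 1e9:.0f}B")  # > $112B
# R&D dollars fell ~20% while revenue grew ~30%: both the level and the
# ratio declined as the company got larger, exactly as described above.
```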

Wall Street tracked the combined number. The board reviewed it. Nobody raised a structural alarm.

The Research component within that total was well below the industry average for comparable technology companies. Not slightly. Significantly.

The margin consequences arrived years later. They always do.

What Happens When the Definition of Research Doesn't Exist

The R/D split gave us a real predictive signal. We ran with it. The conversations were sharper. But the team kept pulling on a thread that nobody expected.

When we looked closely at what was actually being called Research, project by project and budget line by budget line, things that didn't feel the same kept appearing. Work aimed at fundamental discovery. Work aimed at solving a specific defined problem using entirely new methods. Both labeled Research. Up close, they behaved differently, predicted different things, and when budgets got tight, got treated very differently.

So we went looking for the agreed definition. The official standard that would tell exactly where to draw the lines inside Research.

It didn't exist. Not the way we needed it to. And without it, everything we'd built was sitting on sand.

How do you build a predictive model on a definition that doesn't exist?

That's the next episode.

If this helped you see something you might have missed, subscribe wherever you listen to podcasts. On YouTube, hit subscribe and the bell so you don't miss the next episode. And if you want to go deeper every Monday, join us at Studio Notes — free, at philmckinney.com.

Until next time. See the pattern. Make the call.

 
