Humans of Martech podcast

200: Matthew Castino: How Canva measures marketing


What’s up everyone, today we have the pleasure of sitting down with Matthew Castino, Marketing Measurement Science Lead @ Canva.

  • (00:00) - Intro
  • (01:10) - In This Episode
  • (03:50) - Canva’s Prioritization System for Marketing Experiments
  • (11:26) - What Happened When Canva Turned Off Branded Search
  • (18:48) - Structuring Global Measurement Teams for Local Decision Making
  • (24:32) - How Canva Integrates Marketing Measurement Into Company Forecasting
  • (31:58) - Using MMM Scenario Tools To Align Finance And Marketing
  • (37:05) - Why Multi Touch Attribution Still Matters at Canva
  • (42:42) - How Canva Builds Feedback Loops Between MMM and Experiments
  • (46:44) - Canva’s AI Workflow Automation for Geo Experiments
  • (51:31) - Why Strong Coworker Relationships Improve Career Satisfaction

Summary: Canva operates at a scale where every marketing decision carries huge weight, and Matt leads the measurement function that keeps those decisions grounded in science. He leans on experiments to challenge assumptions that models inflate. As the company grew, he reshaped measurement so centralized models stayed steady while embedded data scientists guided decisions locally, and he built one forecasting engine that finance and marketing can trust together. He keeps multi touch attribution in play because user behavior exposes patterns MMM misses, and he treats disagreements between methods as signals worth examining. AI removes the bottlenecks around geo tests, data questions, and creative tagging, giving his team space to focus on evidence instead of logistics.

About Matthew

Matthew Castino blends psychology, statistics, and marketing intuition in a way that feels almost unfair. With a PhD in Psychology and a career spent building measurement systems that actually work, he’s now the Marketing Measurement Science Lead at Canva, where he turns sprawling datasets and ambitious growth questions into evidence that teams can trust.

His path winds through academia, health research, and the high-tempo world of sports trading. At UNSW, Matt taught psychology and statistics while contributing to research at CHETRE. At Tabcorp, he moved through roles in customer profiling, risk systems, and US/domestic sports trading: spaces where every model, every assumption, and every decision meets real consequences fast. Those years sharpened his sense for what signal looks like in a messy environment.

Matt lives in Australia and remains endlessly curious about how people think, how markets behave, and why measurement keeps getting harder, and more fun.

Canva’s Prioritization System for Marketing Experiments

Canva’s marketing experiments run in conditions that rarely resemble the clean, product-controlled environment that most tech companies love to romanticize. Matthew works in markets filled with messy signals, country-level quirks, channel-specific behaviors, and creative that behaves differently depending on the audience. Canva built a world-class experimentation platform for product, but none of that machinery helps when teams need to run geo tests or channel experiments across markets that function on completely different rhythms. Marketing had to build its own tooling, and Matthew treats that reality with a mix of respect and practicality.

His team relies on a prioritization system grounded in two concrete variables:

  • Spend
  • Uncertainty
Large budgets demand measurement rigor because wasted dollars compound across millions of impressions. Matthew cares about placing the most reliable experiments behind the markets and channels with the biggest financial commitments. He pairs that with a very sober evaluation of uncertainty. His team pulls signals from MMM models, platform lift tests, creative engagement, and confidence intervals. They pay special attention to MMM intervals that expand beyond comfortable ranges, especially when historical spend has not varied enough for the model to learn. He reads weak creative engagement as a warning sign because poor engagement usually drags efficiency down even before the attribution questions show up.

“We try to figure out where the most money is spent in the most uncertain way.”
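A rough way to picture that framing is a simple spend-times-uncertainty score per market/channel cell, where uncertainty is proxied by the relative width of the MMM ROI interval. This is only an illustrative sketch with hypothetical numbers and field names, not Canva's actual tooling or scoring logic.

```python
# Illustrative sketch: rank market/channel cells by "most money spent in the
# most uncertain way". Spend figures and ROI intervals below are made up.
from dataclasses import dataclass

@dataclass
class Cell:
    market: str
    channel: str
    quarterly_spend: float  # committed spend for the period
    roi_lower: float        # lower bound of the MMM ROI interval
    roi_upper: float        # upper bound of the MMM ROI interval

def uncertainty(cell: Cell) -> float:
    """Relative interval width: wide bounds around a small midpoint score high."""
    midpoint = (cell.roi_lower + cell.roi_upper) / 2
    return (cell.roi_upper - cell.roi_lower) / max(midpoint, 1e-9)

def priority(cell: Cell) -> float:
    """Spend x uncertainty: big budgets with fuzzy ROI estimates rise to the top."""
    return cell.quarterly_spend * uncertainty(cell)

cells = [
    Cell("US", "branded search", 4_000_000, 2.8, 3.4),
    Cell("DE", "paid social",    1_500_000, 0.4, 2.9),
    Cell("BR", "online video",     900_000, 0.1, 3.8),
]

# Cells with the highest scores are the first candidates for a proper experiment.
for c in sorted(cells, key=priority, reverse=True):
    print(f"{c.market:>2} {c.channel:<15} priority={priority(c):,.0f}")
```

In this toy ranking, a smaller budget with a very wide ROI interval can outrank a larger budget whose MMM estimate is already tight, which matches the spirit of the quote above.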

The next challenge sits in the structure of the team. Matthew ran experimentation globally from a centralized group for years, and that model made sense when the company footprint was narrower. Canva now operates in regions where creative norms differ sharply, and local teams want more authority to respond to market dynamics in real time. Matthew sees that centralization slows everything once the company reaches global scale. He pushes for embedded data scientists who sit inside each region, work directly with marketers, and build market-specific experimentation roadmaps that reflect local context. That way experimentation becomes a partner to strategy instead of a bottleneck.

Matthew avoids building a tower of approvals because heavy process often suffocates marketing momentum. He prefers a model where teams follow shared principles, run experiments responsibly, and adjust budgets quickly. He wants measurement to operate in the background while marketers focus on creative and channel strategies with confidence that the numbers can keep up with the pace of execution.

Key takeaway: Run experiments where they matter most by combining the biggest budgets with the widest uncertainty. Use triangulated signals like MMM bounds, lift tests, and creative engagement to identify channels that deserve deeper testing. Give regional teams embedded data scientists so they can respond to real conditions without waiting for central approval queues. Build light guardrails, not heavy process, so experimentation strengthens day-to-day marketing decisions with speed and confidence.

What Happened When Canva Turned Off Branded Search

Geographic holdout tests gave Matt a practical way to challenge long-standing spend patterns at Canva without turning measurement into a philosophical debate. He described how many new team members arrived from environments shaped by attribution dashboards, and he needed something concrete that demonstrated why experiments belong in the measurement toolkit. Experiments produced clearer decisions because they created evidence that anyone could understand, which helped the organization expand its comfort with more advanced measurement methods.

The turning point started with a direct question from Canva’s CEO. She wanted to understand why the company kept investing heavily in bidding on the keyword “Canva,” even though the brand was already dominant in organic search. The company had global awareness, strong default rankings, and a product that people searched for by name. Attribution platforms treated branded search as a powerhouse channel because those clicks converted at extremely high rates. Matt knew attribution would reinforce the spend by design, so he recommended a controlled experiment that tested actual incrementality.

"We just turned it off or down in a couple of regions and watched what happened."

The team created several regional holdouts across the United States. They reduced bids in those regions, monitored downstream behavior, and let natural demand play out. The performance barely moved. Growth held steady and revenue held steady. The spend did not create additional value at the level the dashboards suggested. High intent users continued converting, which showed how easily attribution can exaggerate impact when a channel serves people who already made their decision.
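One simple way to read a holdout like that is a difference-in-differences on a downstream metric such as signups. The sketch below uses hypothetical weekly numbers and a deliberately basic estimator; real geo tests typically lean on more sophisticated synthetic-control methods, and nothing here reflects Canva's actual data or analysis.

```python
# Minimal diff-in-diff read of a geo holdout (hypothetical numbers).
import statistics

# Weekly signups before and after bids were reduced in the holdout regions.
holdout_pre  = [10_200, 10_050, 10_400, 10_150]
holdout_post = [10_100, 10_300, 10_250, 10_180]
control_pre  = [9_800, 9_900, 9_750, 9_850]
control_post = [9_820, 9_950, 9_880, 9_900]

holdout_delta = statistics.fmean(holdout_post) - statistics.fmean(holdout_pre)
control_delta = statistics.fmean(control_post) - statistics.fmean(control_pre)
incremental_effect = holdout_delta - control_delta  # diff-in-diff estimate

print(f"Holdout change:        {holdout_delta:+.0f} signups/week")
print(f"Control change:        {control_delta:+.0f} signups/week")
print(f"Diff-in-diff estimate: {incremental_effect:+.0f} signups/week")
```

An estimate near zero, as in the made-up numbers here, is the "performance barely moved" story: the spend was capturing demand that would have converted anyway.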

The outcome saved Canva millions of dollars, and the savings were immediately reallocated to areas with better leverage. The win carried emotional weight inside the company because it replaced speculation with evidence.
