Prior & Effect

Measurement Fundamentals · Issue 01

Did Your Marketing Actually Work?

The case for incrementality and causal inference — and why correlation is never enough.

HEDI MOUSSAVI · MARCH 2026 · 25 MIN READ
01

The Boardroom Problem

Imagine the slides go up. The room is full: CMO, CFO, a couple of VPs, someone from the board. The campaign numbers for the new phone launch look good. Click-through rates are up 34%. Engagement is climbing. The retargeting ads show a 6x return on ad spend. Someone says "great work" before the deck is even finished.

Now imagine you are the measurement leader in that room. You built the framework. You ran the numbers. And you know that what is on that slide is, at best, incomplete and at worst, actively misleading. The 6x ROAS from retargeting? That is last-click attribution talking. It is giving full credit for the sale to the final ad a customer clicked before purchasing: the retargeting banner they saw after already being reached by a YouTube pre-roll ad, a paid social post, a display ad on a tech review site, two weeks of their own research, Reddit threads, and four visits to the product page. The click was not the cause. It was the last witness to a decision that had already been made.

"The question isn't whether marketing drove outcomes. It's whether it drove outcomes that wouldn't have happened otherwise."

You leave the meeting with your case unmade. The room was not ready, and you know it. But you also know something the slide deck does not show: there is a better way to measure what actually happened, and once you see it, you cannot unsee it. The rest of this piece is that way forward.

02

Correlation Is Not Causation, and It Never Was

Most traditional measurement systems are built on correlation. Last-click attribution, multi-touch attribution, even many early Marketing Mix Models observe that marketing happened and sales followed, and conclude that one caused the other. This is intuitively appealing and analytically dangerous.

Consider a simple example. You send a promotional email to your highest-value customers — the ones who buy from you regularly, who search for your brand by name, who were probably going to purchase this week regardless. Conversions spike. Your email platform reports a 400% ROAS. But how much of that was your email, and how much was the inevitable behavior of a loyal customer base you would have captured anyway?

This is what measurement scientists call the counterfactual problem. To truly know whether your marketing worked, you need to know what would have happened in its absence. And you cannot observe a world that did not happen, at least not directly. Donald Rubin developed the potential outcomes model in the 1970s as a rigorous basis for causal inference, and Paul Holland's landmark 1986 paper formalized the framework, naming it the Rubin causal model. Judea Pearl extended this line of work into a comprehensive causal framework in his book Causality, first published in 2000. This is precisely the problem causal inference is designed to solve.

03

Multi-Touch Attribution: A Bridge and Its Limits

To be fair, the industry recognized the limits of last-click attribution and tried to fix them. Multi-touch attribution (MTA) was the answer: a more sophisticated framework that distributes credit across every touchpoint in the customer journey rather than handing it all to the last click. It was a genuine improvement. But it introduced its own set of problems.

MTA relies on individual-level tracking: cookies, device IDs, and cross-platform identity resolution, all of which have become increasingly difficult to obtain as privacy regulations tighten and third-party identifiers disappear. More fundamentally, even a perfect MTA model still cannot answer the counterfactual question. It can tell you which touchpoints were present on the path to purchase. It cannot tell you which ones actually caused the purchase.

04

What Causal Inference Actually Means

Causal inference is a framework for reasoning about cause and effect in the presence of imperfect information. Rather than simply measuring what happened, it attempts to estimate what would have happened under a different set of conditions. In marketing, this almost always means answering one question: What would sales, signups, or revenue have looked like if we had not run this campaign?

The gold standard for causal inference is the randomized controlled trial, what marketers call an incrementality test or lift study. The logic is identical to a clinical trial. You randomly split your audience into two groups: one that is exposed to the marketing stimulus (the treatment group) and one that is not (the control group). You hold everything else constant and measure the difference in outcomes. That difference, net of what the control group did on their own, is your true incremental lift.

AVERAGE TREATMENT EFFECT (ATE)

τ = E[Y(1) - Y(0)]

Y(1) = outcome with marketing exposure. Y(0) = counterfactual outcome without it. The difference is the true causal effect.

INCREMENTAL LIFT

Lift = (Conversions_Treatment - Conversions_Control) / Conversions_Control

Treatment group receives marketing exposure; control group does not. The lift ratio captures net new behavior attributable to the marketing activity.
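The two formulas above can be sketched with a simulated holdout test. Every number here (audience size, 2.0% baseline conversion rate, 0.4-point true effect) is made up for illustration:

```python
import random

random.seed(7)

# Hypothetical holdout test: 50,000 users split evenly at random.
# Assumed rates: 2.0% baseline conversion, marketing adds 0.4 points.
BASE_RATE, TRUE_LIFT = 0.020, 0.004

treatment = [random.random() < BASE_RATE + TRUE_LIFT for _ in range(25_000)]
control = [random.random() < BASE_RATE for _ in range(25_000)]

conv_t, conv_c = sum(treatment), sum(control)
rate_t, rate_c = conv_t / len(treatment), conv_c / len(control)

# Average treatment effect: difference in conversion rates.
ate = rate_t - rate_c

# Incremental lift: net new conversions relative to the control baseline.
lift = (conv_t - conv_c) / conv_c

print(f"ATE  = {ate:.4f} (true effect was {TRUE_LIFT})")
print(f"Lift = {lift:.1%}")
```

With real data the estimate comes with sampling noise, which is why a proper lift study also reports a confidence or credible interval before anyone acts on the point estimate.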

"Incrementality is not a campaign metric. It is a strategic lens. It reframes the entire question from 'did people convert?' to 'did we cause the conversion?'"

05

Bayesian MMM: When You Cannot Run a Test

Not every channel, market, or time period can be cleanly tested with a randomized holdout. Running simultaneous experiments across channels risks contamination between test and control groups. Holding back spend for a control group carries real business risk, especially during a high-stakes launch window. And even when experiments are designed well, they are slow. A single well-structured incrementality test can take weeks or months to reach statistical significance. In a world where marketing decisions happen continuously, across dozens of channels and markets at once, you cannot wait for an experiment to tell you what is working. You need an always-on, cross-channel view of performance. That is precisely what Bayesian MMM is built to provide.

BAYES' THEOREM

P(θ | data) ∝ P(data | θ) × P(θ)

The posterior distribution is proportional to the likelihood of the observed data multiplied by the prior distribution. This is how the model learns from evidence without starting from scratch.
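As a minimal numerical sketch of that update, here is a prior belief about a channel's conversion rate combined with new evidence on a grid. The Beta(4, 196) prior and the 38-of-1,500 evidence are assumed numbers chosen for illustration, not benchmarks:

```python
import numpy as np

# Hypothetical: infer a channel's true conversion rate theta on a grid,
# starting from a prior centered near 2% (e.g. from earlier MMM runs).
theta = np.linspace(0.001, 0.10, 1000)

# Prior: kernel of a Beta(4, 196), whose mean is 2% -- an assumed choice.
prior = theta**3 * (1 - theta)**195
prior /= prior.sum()

# New evidence: 38 conversions out of 1,500 exposed users.
conversions, n = 38, 1500
likelihood = theta**conversions * (1 - theta)**(n - conversions)

# Bayes' theorem: posterior is proportional to likelihood times prior.
posterior = likelihood * prior
posterior /= posterior.sum()

print(f"prior mean     = {np.sum(theta * prior):.4f}")
print(f"posterior mean = {np.sum(theta * posterior):.4f}")
```

The posterior mean lands between the prior's 2% and the raw data's 2.5%, pulled toward the evidence in proportion to how much data arrived. Production MMMs do this update over dozens of parameters at once with MCMC samplers rather than a grid, but the logic is the same.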

06

The Fool's Gold Problem

The case for causal measurement has never been stronger or more urgent. Three forces are converging to make traditional attribution increasingly unreliable.

First, signal loss. The deprecation of third-party cookies, restrictions on mobile identifiers, and growing privacy regulation have eroded the data infrastructure that multi-touch attribution depended on. The models are not getting better; they are getting noisier.

Second, channel complexity. Modern consumers touch six to eight channels before converting. The customer journey is non-linear, cross-device, and increasingly cross-platform. Attributing credit across that landscape with deterministic rules is not just difficult. It is mathematically arbitrary.

Third, executive scrutiny. Marketing budgets are under more pressure than at any point in recent memory. CFOs are asking harder questions. "Our ROAS looks strong" is no longer a sufficient answer. What leaders want to know is: if we cut this channel by 30%, what would we actually lose? That is a causal question, and it requires a causal answer.

"Platform ROAS tells you who converted near your ads. Incrementality tells you who converted because of them. These are very different questions."

07

The Causal Measurement Triangle

No single measurement approach is sufficient on its own. The most defensible investment decisions emerge when multiple methods converge, and when they diverge, that divergence is itself a valuable signal worth investigating.

The key is understanding what each method is actually built to do. Bayesian MMM provides the macro view: how each channel contributes to revenue at the aggregate level, controlling for seasonality, pricing, distribution, and competitor activity. Incrementality experiments provide the causal ground truth: clean estimates of what specific campaigns actually caused. Attribution provides the granular signal: user-level data useful for in-channel optimization and real-time checks, even if not causal in isolation.

But these three are not equal legs of a stool. The MMM is the engine. Incrementality experiments calibrate and validate it. Attribution informs it at the tactical level. Everything ultimately flows into the MMM as the cross-channel decision framework for budget allocation.

08

Adstock: The Time Dimension of Advertising

Advertising does not work the way a light switch does: on when you spend, off when you stop. Its effects linger, accumulate, and decay at rates that vary significantly by channel, creative format, and purchase cycle. This is the phenomenon that Adstock was designed to capture.

The concept was first introduced by Simon Broadbent in 1979 and has since become a foundational component of virtually every modern MMM. At its core, Adstock transforms a raw media spend variable into a cumulative exposure variable that reflects how advertising effects build and fade over time.

GEOMETRIC ADSTOCK DECAY

Adstock(t) = Spend(t) + α × Adstock(t-1)

Alpha (0 < α < 1) is the decay parameter. Low alpha (0.2–0.3): fast decay, with most carryover gone within a week or two, typical for branded search. High alpha (0.8–0.9): slow decay stretching over many weeks, typical for TV, OOH, and podcasts.
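The recursion above takes only a few lines to implement. A minimal sketch, with illustrative spend figures:

```python
def geometric_adstock(spend, alpha):
    """Carry over a fraction alpha of last period's adstock into this one:
    Adstock(t) = Spend(t) + alpha * Adstock(t-1)."""
    adstock, carryover = [], 0.0
    for s in spend:
        carryover = s + alpha * carryover
        adstock.append(carryover)
    return adstock

# One burst of spend in week 0, then nothing: the effect decays rather
# than vanishing the moment spend stops.
weekly_spend = [100, 0, 0, 0, 0]
print(geometric_adstock(weekly_spend, alpha=0.3))  # fast decay, e.g. branded search
print(geometric_adstock(weekly_spend, alpha=0.8))  # slow decay, e.g. TV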

09

Diminishing Returns and Saturation

Every channel has a ceiling. The question is where you currently sit on the curve. One of the most actionable outputs of a well-built MMM is the saturation curve: a channel-by-channel mapping of how incremental returns change as spend increases.

HILL TRANSFORMATION (SATURATION)

f(x) = β × xᵅ / (γᵅ + xᵅ)

x = spend; β = maximum response; α = steepness of curve; γ = half-saturation point. Applied after Adstock to model diminishing returns on cumulative exposure.

Reallocation from channels operating near saturation to channels with room to grow improves total portfolio ROI without increasing total spend. This is perhaps the most powerful and underutilized insight that a well-calibrated Bayesian MMM produces.
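One way to make that reallocation logic concrete is to compare the approximate marginal return of the next dollar on each channel's Hill curve. All parameter values below are hypothetical, chosen only to contrast a channel past its half-saturation point with one that still has headroom:

```python
def hill(x, beta, alpha, gamma):
    """Hill saturation: response approaches beta as spend grows past gamma."""
    return beta * x**alpha / (gamma**alpha + x**alpha)

def marginal_return(x, beta, alpha, gamma, eps=1.0):
    """Approximate revenue from the next eps dollars of (adstocked) spend."""
    return (hill(x + eps, beta, alpha, gamma) - hill(x, beta, alpha, gamma)) / eps

# Hypothetical channels: parameters are assumptions, not benchmarks.
# Channel A spends well past its half-saturation point gamma.
a = marginal_return(90_000, beta=500_000, alpha=1.2, gamma=30_000)
# Channel B spends well below its half-saturation point.
b = marginal_return(20_000, beta=400_000, alpha=1.2, gamma=60_000)

print(f"Channel A: next dollar returns about ${a:.2f}")
print(f"Channel B: next dollar returns about ${b:.2f}")
```

When the marginal dollar in B returns more than the marginal dollar in A, shifting budget from A to B raises total return at constant spend. Equalizing marginal returns across channels is the optimality condition a budget optimizer built on the MMM is solving for.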

10

From Model to Strategy: The MMM Lifecycle

All of the methodology we have covered — incrementality, Bayesian priors, Adstock transformations, saturation curves, and causal triangulation — converges into a single living system. Understanding how that system works across its lifecycle is what separates a measurement function that produces reports from one that drives strategy.

The initial model build is the most consequential moment. This is where the reference frame is set. Adstock parameters are specified by channel. Saturation curves are calibrated. Priors are defined from previous MMM runs, incrementality experiments, industry benchmarks, and expert judgment. Getting this build right is not about speed. It is about credibility.

11

The Strategic Payoff

Over the course of more than a decade in marketing measurement, I have seen the transformation that happens when organizations move from correlation-based measurement to causal frameworks. Budget reallocations that felt risky become defensible. Channels that looked essential turn out to be largely redundant. Investments that appeared modest in platform dashboards prove to be driving outsized real-world lift.

The numbers matter, of course. But the deeper shift is cultural. When marketing teams trust their measurement, they make bolder, better-calibrated decisions. When finance teams trust the methodology, they engage as partners rather than skeptics. The measurement system becomes a shared language for growth strategy, not a point of friction between functions.

"AI can run the model. It cannot decide when the model is wrong. That judgment is the measurement leader's edge."

Incrementality and causal inference are not niche methodological concerns for data scientists. They are the foundation of every trustworthy answer to the question every marketing leader is ultimately responsible for: Did our marketing actually work? You now have the framework, the math, and the mental models to find out. The rest is the work.

About the Author

Hedi Moussavi, PhD

Measurement and analytics leader with 12+ years of experience building and scaling high-impact advertising and marketing measurement systems. Deep expertise in Bayesian MMM, incrementality experimentation, cross-channel measurement, and forecasting frameworks that drive accountable growth.


© 2026 Hedi Moussavi