
What Is Incrementality in Advertising and Why Does It Matter?

Incrementality in advertising measures the true, causal impact of marketing efforts, moving beyond correlation to prove an ad campaign's real value. It's essential for optimizing budgets and making data-driven decisions in today's complex media landscape.

Leo Hartmann

April 4, 2026 · 8 min read


Incrementality in advertising answers how many customers would have purchased even without seeing an ad, moving marketers beyond correlation to causal proof of an ad campaign’s true value. In a media environment defined by signal loss and opaque data platforms, understanding this metric is essential for justifying and optimizing marketing budgets.

For years, advertisers relied on attribution models—like last-click or multi-touch—to assign credit for conversions across various marketing channels. However, these methods face significant challenges as the digital landscape shifts due to fragmented consumer journeys, the disappearance of third-party cookies, and "walled gardens" like Meta and Google limiting user-level data. Traditional attribution methods are becoming less viable, leaving marketers without a reliable way to understand which media dollars are actually working. This necessitates a more scientifically rigorous approach to measurement, one that proves an ad didn't just get credit for a sale, but actually caused it.

What Is Incrementality in Advertising?

Incrementality in advertising is the science of measuring the true, causal impact of marketing efforts by isolating the lift in business outcomes—such as sales, sign-ups, or app installs—that is directly attributable to a specific campaign or channel. In essence, it answers the fundamental question: "What would have happened anyway?" By separating outcomes caused by marketing from those that would have occurred organically, incrementality reveals the genuine value added by advertising spend. This stands in stark contrast to traditional attribution, which often measures correlation rather than causation.

An effective analogy is a clinical trial for a new pharmaceutical drug. Researchers don't just give the drug to a group of patients and measure the results. To prove the drug’s effectiveness, they must compare that group to a similar group that received a placebo. The difference in health outcomes between the two groups is the drug's incremental effect. Incrementality testing in advertising applies the same scientific principle.

  • Test Group (or Exposed Group): This is the segment of the audience that is exposed to the advertising campaign being measured.
  • Control Group (or Holdout Group): This is a statistically similar segment of the audience that is intentionally withheld from seeing the advertising campaign.
  • Incremental Lift: This is the difference in conversion rates or other key performance indicators (KPIs) between the test group and the control group. This lift represents the true impact of the ad.

Establishing a baseline of behavior with the control group allows marketers to determine how much the advertising message influenced the test group's behavior. This data-driven approach provides a clear, evidence-based understanding of an ad's causal impact, moving beyond assumptions about ad performance.
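One practical detail the test/control split depends on is that group assignment be both random and stable: a user placed in the holdout must never later be served the campaign. A common way to achieve this is deterministic bucketing by hashed user ID. The sketch below is illustrative, not any platform's actual mechanism; the function name and the 10% holdout are assumptions.

```python
import hashlib

def assign_group(user_id: str, holdout_pct: float = 0.1) -> str:
    """Deterministically assign a user to test or control by hashing their ID.

    Hashing (rather than calling random()) keeps the assignment stable
    across sessions, so a control user never accidentally sees the campaign.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return "control" if bucket < holdout_pct * 10_000 else "test"

groups = {uid: assign_group(uid) for uid in (f"user-{i}" for i in range(100_000))}
control_share = sum(g == "control" for g in groups.values()) / len(groups)
print(f"control share ≈ {control_share:.3f}")  # should land close to 0.10
```

Because the hash output is effectively uniform, the realized control share converges on the configured holdout percentage as the audience grows.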

Why Is Incrementality Crucial for Proving Ad Effectiveness?

Measuring advertising incrementality is a critical task for financial planning and optimal budget allocation. As the media ecosystem grows more complex, simply knowing the last touchpoint before a conversion is no longer sufficient. Incrementality allows marketers to make smarter, more profitable decisions by understanding the causal relationship between their spending and business results.

A key factor to consider is the reliability of platform-reported metrics. According to analysis from the measurement platform Measured, ad platforms may over-report or under-report actual campaign performance. This discrepancy can occur because platform algorithms often target users who already have a high propensity to convert, making the ads appear more effective than they truly are. They may also struggle with signal loss from anti-tracking measures, further muddying the data. Without incrementality testing, a marketer might attribute a sale to a retargeting ad, when in fact that customer was already on the verge of purchasing.

Focusing on incremental lift enables a more accurate calculation of Return on Ad Spend (ROAS), often referred to as iROAS. As noted by the marketing analytics platform Improvado, this allows brands to avoid wasting budget on campaigns that are not generating a true, additional impact. For example, a brand might discover that its expensive search ads targeting its own brand name have very low incrementality, as most of those users would have navigated to the site directly anyway. Reallocating that budget to a channel with higher proven incrementality, such as a top-of-funnel video campaign, could drive significantly more net-new business. This strategic reallocation is only possible with the causal insights that incrementality provides.
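The distinction between ROAS and iROAS can be made concrete with a small calculation. The sketch below reuses the article's 14% vs. 10% conversion rates; the group size, average order value, and ad spend are made-up numbers for illustration only.

```python
def incremental_roas(test_conv_rate: float, control_conv_rate: float,
                     test_group_size: int, avg_order_value: float,
                     ad_spend: float) -> float:
    """iROAS: revenue from conversions the campaign actually caused,
    divided by what was spent to cause them."""
    # Only the conversions above the control baseline count as incremental.
    incremental_conversions = (test_conv_rate - control_conv_rate) * test_group_size
    incremental_revenue = incremental_conversions * avg_order_value
    return incremental_revenue / ad_spend

# Hypothetical economics layered onto the article's 14% vs. 10% example:
iroas = incremental_roas(0.14, 0.10, test_group_size=90_000,
                         avg_order_value=60.0, ad_spend=100_000.0)
print(f"iROAS = {iroas:.2f}")  # 2.16: $2.16 of incremental revenue per $1 spent
```

Note that a naive ROAS on the same numbers would credit all 14% of conversions to the campaign (yielding 7.56), overstating its contribution by more than threefold.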

How Do Marketers Measure Advertising Incrementality?

The gold standard for measuring incrementality is the randomized controlled trial (RCT), often executed as a simple A/B test. This involves randomly splitting a target audience into statistically equivalent test and control groups. The test group sees the ads, while the control group does not. The subsequent difference in conversion rates reveals the incremental impact.

For example, a brand might create a 10% holdout group for a retargeting campaign. Over the campaign period, data reveals:

  • The control group (who saw no ads) had a repurchase rate of 10%.
  • The test group (who saw the ads) had a repurchase rate of 14%.

From this, two key metrics can be calculated. The incremental lift is the percentage increase in conversions in the test group over the control group, which in this case is 40% ( (14% - 10%) / 10% ). This means users who saw the ads converted at a rate 40% higher than those who did not. The incrementality rate, which measures what percentage of total conversions were incremental, is 28.6% ( (14% - 10%) / 14% ). This indicates that nearly 29 out of every 100 conversions from the ad-exposed group would not have happened without the ad campaign.
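The two metrics above can be sketched in a few lines of code, using the article's 14% and 10% repurchase rates:

```python
def incremental_lift(test_rate: float, control_rate: float) -> float:
    """Relative lift of the test group over the control-group baseline."""
    return (test_rate - control_rate) / control_rate

def incrementality_rate(test_rate: float, control_rate: float) -> float:
    """Share of the test group's conversions that the ads actually caused."""
    return (test_rate - control_rate) / test_rate

lift = incremental_lift(0.14, 0.10)         # 0.40 -> 40% lift
inc_rate = incrementality_rate(0.14, 0.10)  # ~0.286 -> 28.6% incremental
print(f"lift = {lift:.1%}, incrementality rate = {inc_rate:.1%}")
```

The difference in denominators is the whole story: lift is measured against the baseline (control) rate, while the incrementality rate is measured against everything the test group did, including conversions that would have happened anyway.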

However, implementing user-level RCTs is not always feasible. Research presented at the 2020 AdKDD workshop points out that closed ad networks, or "walled gardens," often prevent advertisers from having direct access to user-level traffic, making true user randomization difficult. To overcome this, an alternative method has gained prominence: the controlled geo-experiment. In this design, randomization occurs at the market level rather than the user level. An advertiser might increase or decrease ad spend in a set of designated test markets while holding spend constant in a set of control markets. By using advanced statistical models to compare the outcomes, they can estimate the incremental lift of their spend interventions.
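A much-simplified version of the geo-experiment comparison is a difference-in-differences estimate: the change observed in test markets minus the change that occurred anyway in control markets. Real geo-lift studies use considerably more sophisticated models (e.g., synthetic controls) to account for market-level variation; the market names and conversion counts below are entirely hypothetical.

```python
# Hypothetical (pre-period, post-period) conversion counts per market.
test_markets = {"austin": (950, 1180), "denver": (1020, 1240)}
control_markets = {"tucson": (900, 930), "omaha": (1100, 1120)}

def avg_change(markets: dict) -> float:
    """Average relative change from the pre-period to the post-period."""
    return sum((post - pre) / pre for pre, post in markets.values()) / len(markets)

# Difference-in-differences: growth in test markets minus the growth that
# happened anyway in control markets approximates the spend's incremental effect.
did_lift = avg_change(test_markets) - avg_change(control_markets)
print(f"estimated incremental lift ≈ {did_lift:.1%}")
```

Here the control markets grew modestly on their own, and subtracting that organic growth is exactly what keeps seasonal trends or market-wide demand shifts from being misread as ad impact.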

| Testing Method | How It Works | Pros | Cons |
| --- | --- | --- | --- |
| User-Level RCT (A/B Test) | Randomly splits individual users into test and control groups. | Considered the most accurate and direct method of measurement. | Often not feasible within walled gardens; requires large audience sizes. |
| Geo-Experiment (Geo-Lift) | Splits geographic markets (cities, states) into test and control groups. | Possible on platforms that don't allow user-level splits; good for measuring broadcast media. | Less precise than user-level tests; requires advanced statistical modeling to control for market-level variations. |

Why Incrementality Matters in Practice

Real-world advertising campaigns demonstrate incrementality's concrete benefits, allowing brands to validate strategies and optimize their media mix with high confidence. A field experiment detailed in the AdKDD paper provides a compelling case: When Google Universal App Campaign (UAC) spend was suspended in test markets, researchers measured a statistically significant 6.57% decrease in conversions compared to control markets. This figure represents the true, incremental contribution of the UAC campaign, a hard number that justifies its existence in the marketing budget.

Consistently running incrementality experiments helps advertisers understand how channels, campaigns, and tactics contribute to their bottom line, answering specific strategic questions: Does social media advertising drive new customers or primarily reach existing ones? What is the point of diminishing returns for video ad spend? Is lower-funnel retargeting cannibalizing sales that would have happened anyway?

However, it is also important to acknowledge that incrementality testing is not a silver bullet. According to the marketing measurement firm Keen, for some marketers, the drawbacks of running constant tests—such as the cost, complexity, and the opportunity cost of holding out a control group—may outweigh the benefits. For this reason, some may need to supplement or integrate incrementality test results with broader cross-channel analysis, potentially using a Media Mix Modeling (MMM) platform to gain a more holistic view of how all marketing efforts work together to generate lift.

Frequently Asked Questions

What is the difference between attribution and incrementality?

Attribution models assign credit for a conversion to various marketing touchpoints based on a set of rules, such as last-click or multi-touch. It answers "which channels were involved in the conversion journey?" Incrementality uses controlled experiments to measure the causal lift from a marketing activity, answering "how many conversions would not have happened without this ad?" Attribution measures correlation, while incrementality measures causation.

How is incremental lift calculated?

Incremental lift is calculated by comparing the conversion rate of a test group (exposed to ads) to a control group (not exposed to ads). The formula is: (Test Group Conversion Rate - Control Group Conversion Rate) / Control Group Conversion Rate. The result is expressed as a percentage, representing how much more likely the test group was to convert.

Why can't I just use the metrics from my ad platform?

Ad platforms often report on correlational metrics and can be prone to crediting themselves for conversions that would have occurred organically, especially when targeting users with high purchase intent. Platform-reported conversions do not isolate the true, causal effect of the ads. Incrementality testing provides an independent, more accurate measure of an ad campaign's actual impact on business outcomes.

What is an example of an incrementality test?

A simple example is a holdout test for an email campaign. A company could send a promotional email to 90% of its customer list (the test group) while withholding it from a random 10% (the control group). By comparing the purchase rate between the two groups over the next week, the company can measure the exact number of additional sales generated by the email.
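The arithmetic behind that email holdout test is straightforward: project the control group's baseline purchase rate onto the test group, and count anything above it as sales the email caused. The buyer counts in this sketch are invented for illustration.

```python
def additional_sales(test_buyers: int, test_size: int,
                     control_buyers: int, control_size: int) -> float:
    """Sales the email caused: actual test-group sales minus what the
    control group's baseline rate predicts for a group that size."""
    baseline_rate = control_buyers / control_size
    expected_without_email = baseline_rate * test_size
    return test_buyers - expected_without_email

# Hypothetical week: 90,000 emailed customers, 10,000 held out.
extra = additional_sales(test_buyers=4_500, test_size=90_000,
                         control_buyers=300, control_size=10_000)
print(f"≈ {extra:.0f} additional sales attributable to the email")  # ≈ 1800
```

In this made-up scenario the control baseline is 3%, so 2,700 of the test group's 4,500 purchases would have happened anyway; only the remaining 1,800 are incremental.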

The Bottom Line

Incrementality measurement shifts advertising analysis from correlation to causation, providing marketers with a scientific framework to prove the true financial impact of their campaigns, moving beyond often-inflated ad platform metrics. In an era of increasing data privacy and signal degradation, adopting an experimental mindset and leveraging incrementality testing is becoming a non-negotiable strategy for marketers to justify spend and optimize for genuine business growth.