Incrementality is the most precise and accurate method to measure the real, incremental value generated by individual marketing activities.
As marketers, we’re always striving to optimize our campaigns to drive real growth and achieve tangible business results.
Most of us spend a considerable amount of time tracking attribution, a host of different metrics and KPIs, and continually adjusting and refining our campaigns to achieve the best results.
When it comes to marketing optimization, particularly in growth marketing, incrementality is the holy grail and North Star of marketing measurement!
It can show you exactly which marketing techniques and channels are generating real value, and which ones are just taking credit for organic results.
It can also provide data-driven insights into every aspect of your marketing strategy, from the best marketing mix for your business, down to the most effective (and cost-effective) marketing activities at the individual and ad set level.
So, what, exactly, is incrementality? Why is it so important? How do you test and measure incrementality? And, most importantly, how can you use it to optimize your marketing?
Today, we’re covering everything you need to know about incrementality – what it is, how it differs from attribution, why it is important, how to test and measure incrementality, and two incrementality case studies, to show you exactly how you can leverage the power of incrementality in your marketing!
Let’s dive right in!
In marketing, incrementality is a way to measure causation and identify events/outcomes, such as conversions, that would not have occurred in the absence of a specific interaction, such as an ad view.
Incrementality measures how effective a marketing activity is and demonstrates the real value it generated, which would not have occurred without it.
No marketing activity exists in isolation, which makes it difficult to measure the true impact of each component of your marketing mix.
The overlap between organic outcomes and outcomes generated by your marketing, for example, an ad run, unique campaign, or use of a particular channel, can be hard to separate and accurately measure. This often leads to imprecise decision-making and budgeting, which leads to wasted ad spend.
Incrementality measurement is the most accurate method available to identify which specific elements of your marketing are driving real value, and which are merely taking credit for organic outcomes.
It is important to note the distinction between incrementality and attribution, so let’s take a closer look at how incrementality and attribution differ:
Attribution refers to how marketers track and give credit to the various touchpoints that led to conversion along the customer journey. There are several models used to measure attribution, including single-touch and multi-touch attribution (MTA).
MTA is the most detailed and holistic attribution model and tracks interactions at various points along the customer journey, as opposed to just the first or last touchpoint. While this provides some important insights, they are quite limited when compared to the insights provided by incrementality.
Take a quick look at this video by Google Ads for a nice refresher or overview of attribution vs incrementality:
MTA is limited to data from actions taken in an online setting, such as clicks or web visits. It does not take into account offline data from, say, print or TV ads, or online data with no immediate action, such as ad or social media impressions.
This is especially limiting for channels where impressions are difficult to track and quantify, such as offline campaigns and ‘walled garden’ social media channels like Facebook or Pinterest.
Incrementality, on the other hand, is measured using a different approach (more on that below), and data from a variety of sources. It determines the desired outcomes that would not have occurred without that marketing activity.
It takes into account impressions, as well as clicks, for each platform or campaign being measured. Additionally, it provides insights into the impact of that activity, when compared to the impact of other campaigns/channels, and the organic outcomes that would have occurred anyway.
As such, incrementality provides your attribution data, and a whole lot more. With this nuanced and detailed picture of your marketing activities, you can make more precise and effective decisions to get the most value for your marketing spend, across your entire portfolio.
TLDR: Attribution vs Incrementality
Attribution gives credit to the touchpoints that lead to the desired outcome, such as a conversion, in the presence of a variety of marketing activities. Incrementality measures the desired outcomes achieved with a specific marketing activity, such as an ad run, and without it – giving you a true reflection of the value that specific activity generated, which would not have occurred in its absence.
Incrementality is frequently referred to as the ‘North Star’ of marketing. While this might seem like quite a grand claim to make, measuring incrementality will have a profound effect on the results you achieve and the direction of your overall marketing strategy.
Measuring incrementality allows you to identify and address the impact of variables that are otherwise difficult to isolate and assess. It allows you to answer questions like whether a specific ad, campaign, or channel actually caused the conversions attributed to it.
Measuring incrementality allows you to accurately identify the positive, negative or neutral impact of any marketing initiative.
Incrementality testing provides the data and actionable insights you need to make strategic data-driven decisions. It will inform and justify budget allocation, prevent wasted ad spend, and allow you to continually fine-tune your marketing to drive tangible results and business revenue.
The Bottom Line – Importance of Incrementality Measurement
Measuring incrementality determines the real value generated by a marketing activity. When you measure incrementality, you can see what happened with that activity, as well as what happened without it. Knowing the real value provided by individual activities, such as ads, campaigns, channels, and even individual variations of each, allows you to know precisely where to focus your efforts, and where you’re wasting resources on activities that do not generate measurable results.
As such, incrementality really is the guiding star of not just your marketing but your organization as a whole.
This mindset can be applied to all aspects of your business – fostering a framework where data-driven decision-making is the norm.
In a data-driven company culture, achieving measurable, incremental results, in the most efficient way, becomes part of the company’s DNA, leading to long-term success.
Measuring incrementality can be done in various ways, depending on the use case and the insights required. Both Marketing Mix Modelling (MMM) and Multi-touch Attribution (MTA) can be considered ways to measure incrementality, in the right scenario.
However, the most commonly used method to test and measure incrementality is the Design of Experiments (DoE) test. This is similar to A/B testing but leans closer to statistical analysis and uses a test group and control group instead of two test groups.
In its simplest form, incrementality testing involves randomly segmenting your target audience into two cohorts or groups, which then become your test and control groups.
The test group is then exposed to the marketing activity that you’re measuring (e.g., a video ad, or a specific campaign, on a specific channel or platform), and the control group is either exposed to no ads or random, unrelated ads/public service announcements, for a set period.
The difference between the two groups in terms of the number of desired outcomes (e.g., conversions, sign-ups, downloads, etc.) demonstrates the incremental lift contributed by that activity, which would not have occurred without it.
The control group results represent a baseline of what would naturally occur, in the absence of that particular marketing activity.
The formula to measure incremental lift is: (test group result − control group result) ÷ control group result = incremental lift.
For example, your test group performed 1000 desired outcomes (e.g., sign-ups or conversions), and your control group performed 800 desired outcomes.
The calculation would then be (1000 − 800) ÷ 800 = 0.25. To convert that to a percentage, multiply the final result by 100 (e.g., 0.25 × 100 = 25% incremental lift).
If you prefer, you can use the same formula using your results when converted to percentages.
For example, if the test group achieved a 2% conversion rate, and your control group achieved a 1.5% conversion rate, the calculation would be (2% − 1.5%) ÷ 1.5% = 0.33, or a 33% incremental lift.
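The arithmetic above can be expressed as a small helper function. This is a sketch in Python; the function name is our own, and both sets of numbers are the hypothetical examples from the text:

```python
def incremental_lift(test_result: float, control_result: float) -> float:
    """Incremental lift = (test result - control result) / control result."""
    if control_result == 0:
        raise ValueError("Control result must be non-zero")
    return (test_result - control_result) / control_result

# Absolute counts: 1000 test conversions vs 800 control conversions
lift = incremental_lift(1000, 800)
print(f"{lift:.2f} -> {lift * 100:.0f}% incremental lift")  # 0.25 -> 25%

# The same formula works on rates: a 2% vs 1.5% conversion rate
rate_lift = incremental_lift(0.02, 0.015)
print(f"{rate_lift:.2f} -> {rate_lift * 100:.0f}% incremental lift")  # 0.33 -> 33%
```

Note that the formula gives the same answer whether you feed it raw counts or rates, as long as both inputs use the same units.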
Incrementality testing can be as simple as the example above, measuring just one or two variables, or it can be much more complex, involving multiple variables and requiring specialized expertise to design, deploy and interpret.
There is no limit to how many data sources you can use for incrementality testing and as long as your experiments are well designed, controlled, and properly executed, you can measure the incremental contribution of any variable against any measurable outcome!
Measuring incrementality identifies causation between the specific activity being measured and the positive, negative, or neutral impact of that activity on the desired outcome when compared to results from the control group. This accurately shows you the incremental contribution of that activity towards your objectives and overall marketing goals, which shows you exactly how valuable that activity is (or isn’t).
Now that we have covered what incrementality is, how it differs from attribution, why it is important to measure incrementality and how to measure it, let’s take a look at how incrementality testing works and some tips to design an effective incrementality test.
So, how, exactly, does incrementality testing work? Let’s take a look at the process, step by step:
Incrementality is all about testing and experimenting, and then adjusting your approach based on the results you get.
Like all scientific experiments, you need to plan carefully and determine exactly what you’re testing, what outcomes you want to achieve, and how you will determine your results.
You also need to identify your parameters and variables and plan how you will control those variables to limit any interference from factors outside of your control.
Take a look at this video by AppsFlyer for an overview of incrementality testing and how it works in practice:
Broadly, there are five stages of a Design of Experiments incrementality test:
Let’s take a quick look at each stage individually:
Incrementality tests need to be carefully planned and thought through from end to end before you begin.
Initially, you need to define the question you’re asking, what you want to find out by asking it, and what KPIs you can use to determine your answer. To do this, you need to define:
Once you have defined the above, your question (or hypothesis, if you will) might look something like this:
“Does running x ad, on y channel, for z period, have any statistically significant impact on our conversion rate?”
Note that results should, ideally, be statistically significant, which is a measurement of the likelihood that the incremental lift you observe is not merely a coincidence. In the very simplest terms, a statistically significant result is one that is very unlikely to have occurred by chance or due to a sampling error.
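One common way to check statistical significance for conversion-rate experiments is a two-proportion z-test. The sketch below uses only the Python standard library, and all the sample numbers are hypothetical; for production analysis you would typically reach for a statistics library instead:

```python
import math

def two_proportion_z_test(conv_test, n_test, conv_ctrl, n_ctrl):
    """Two-sided z-test for the difference between two conversion rates.
    Returns (z statistic, p-value)."""
    p_test = conv_test / n_test
    p_ctrl = conv_ctrl / n_ctrl
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_test + conv_ctrl) / (n_test + n_ctrl)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_ctrl))
    z = (p_test - p_ctrl) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 1000 conversions from 50,000 exposed users
# vs 800 conversions from a 50,000-user holdout
z, p = two_proportion_z_test(1000, 50_000, 800, 50_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at the 0.05 level if p < 0.05
```

A p-value below your chosen threshold (0.05 is the conventional cutoff) suggests the observed lift is unlikely to be a coincidence or sampling error.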
Now that you have defined the parameters of your incrementality test, you need to select the target audience and segment it into a test group and a control group.
These groups should be randomly segregated, have similar characteristics, and, as far as possible, not overlap.
Using the test and control groups accounts for some of the factors you can’t control, such as the impact of organic behavior or exposure to your other marketing activities, because both groups are impacted by the same factors.
In terms of size, the larger your two groups are, the better. Larger sample sizes are less prone to sampling errors where coincidental factors skew the results of the test. The control group should be no less than 20% of the total audience.
Attribution platforms can be a useful place to start when defining and segmenting your audience.
However, for incrementality tests where the target audience does not already have any kind of unique identifiers, such as campaigns for user acquisition, you will need to use other factors to define and segment your audience.
These can be demographic, geographic, time frames, products, etc. The parameters you use will depend on the test you’re conducting and the nature of your business.
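As a minimal sketch of the segmentation step, here is one way to randomly split an audience while holding out at least 20% as the control, per the guideline above (the function name and user IDs are our own illustration):

```python
import random

def split_audience(audience, control_share=0.2, seed=42):
    """Randomly split an audience into (test_group, control_group).
    control_share: fraction held out as the control (>= 0.2 per the guideline)."""
    if not 0.2 <= control_share < 1:
        raise ValueError("Control group should be at least 20% of the audience")
    rng = random.Random(seed)  # seeded so the split is reproducible
    shuffled = list(audience)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * control_share)
    return shuffled[cut:], shuffled[:cut]

users = [f"user_{i}" for i in range(10_000)]
test_group, control_group = split_audience(users)
print(len(test_group), len(control_group))  # 8000 2000
```

Because the split is random, both groups should share similar characteristics, and because each user lands in exactly one group, there is no overlap.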
Deploy the marketing activity you’re testing to the test group of your target audience.
The duration of the test will depend on your parameters, requirements, group/sample sizes, the volume of data you’re working with, and the average business cycle for your type of business.
However, in general, the test should run for at least 7 days and be deployed at a time when you are not deploying any other new or unique marketing campaigns.
This will give you a more accurate result and reduce the potential for your other marketing activities to impact the results of your experiment.
When your incrementality test is complete, collect the data and measure your results to see the incremental lift in the KPIs you determined to gauge your results.
Looking at the relationship between the test and control group results will give you an idea of how the two groups behaved differently, and how and why you got the results you did.
Remember that your incremental results may be positive (incremental lift), negative (test group performed poorly compared to the control group), or neutral (no significant difference between the two groups).
If the gap between the two groups’ results is much wider than you expected, you may need to re-configure your test and run it again.
Incrementality testing can be a complex and challenging process and, depending on the type of test you need to run, you may need to get help from a professional in the field of attribution or incrementality testing. There are also platforms you can use that simplify the process and give a bird’s eye view of your data and what you can learn from your results.
When you have a clear picture of which marketing activities are driving real value for your business, and how much channels cost vs the profit they generate, you can identify where you need to focus your attention and budget, and where to cut back and reduce wasted marketing spend.
One of the most useful things about measuring incrementality is that it gives you a better indication of your ROAS vs organic revenue. In some cases, you will be able to determine that you can spend less on advertising altogether because your marketing activities have just been cannibalizing organic results.
To calculate incremental ROAS (iROAS), you can deduct the revenue generated by your control group from the revenue generated by your test group and divide the difference by your total marketing spend. That will remove organic conversions from the equation and show you the real, quantifiable impact of your marketing activity.
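That calculation can be sketched as a one-line helper. The figures below are hypothetical, and the sketch assumes the test and control groups are of comparable size (otherwise you would scale the control revenue up to the test group’s size first):

```python
def incremental_roas(test_revenue, control_revenue, marketing_spend):
    """iROAS = (test revenue - control revenue) / marketing spend.
    Subtracting the control (organic) revenue removes conversions
    that would have happened anyway."""
    if marketing_spend <= 0:
        raise ValueError("Marketing spend must be positive")
    return (test_revenue - control_revenue) / marketing_spend

# Hypothetical figures: $120k test revenue, $90k control revenue, $10k spend
print(incremental_roas(120_000, 90_000, 10_000))  # 3.0
```

In this example, every dollar of spend generated three dollars of truly incremental revenue, rather than the four dollars a naive revenue ÷ spend calculation would suggest.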
In marketing, the more you know about your results and how you achieved them, the more effective your optimization efforts will be. When you’re making data-driven decisions and continually testing and optimizing your campaigns you can truly achieve the most while spending the least.
Incrementality can be used to measure all manner of marketing-related activities and help you determine which ones are the most valuable, which ones are taking credit for organic results, and which ones are costing more than they’re worth in terms of real value generated.
To see how incrementality can be used in real-world applications, let’s take a look at some examples and incrementality case studies from the attribution and incrementality experts at Measured:
Measured was approached by Shinola, a luxury lifestyle brand based in Detroit, for help after the introduction of Apple’s App Tracking Transparency (ATT) came into effect. ATT and the associated data restrictions made it challenging for the brand to access accurate measurements and reporting on customer acquisition through Facebook campaigns.
Shinola observed that there was a considerable drop in the number of conversions that Facebook reported but they knew that their Facebook campaigns were still generating sales for their brand.
It was suspected that the platform was underreporting conversions, but the problem was that they had no way of accurately measuring the incremental contribution of those campaigns. Shinola then hired Measured to solve the problem, using incrementality testing to show the true value their Facebook campaigns were delivering.
To determine the number of incremental conversions generated by Shinola’s Awareness and DABA (Dynamic Ads for a Broad Audience) campaigns on Facebook, they ran a geo-matched market test, deployed at the zip-code level.
Geographic experimentation was chosen because it could be run without relying on the platform’s reporting and did not require any data restricted by Apple’s ATT. They chose the zip-code level for increased precision and to reduce risks associated with a media holdout (holdouts are frequently used in incrementality testing and involve ceasing all other marketing activity for the duration of the test).
When conducting the test, a random selection of zip codes was excluded (control group). By comparing the results from the test group (zip codes that were included and shown the campaigns), they were able to determine the number of conversions generated by the campaign, and the number that would have occurred anyway, without the campaign.
The incremental test results showed that for both the Awareness and DABA campaigns there was an incremental lift of 14.3%. This meant that Facebook was underreporting the overall performance of Shinola’s campaigns by 413%!
Knowing this allowed Shinola to identify that the channel was more valuable, in real terms than the platform’s reporting indicated. This meant that they could appropriately allocate funds to campaigns that were driving real results, instead of reallocating that budget to other channels, which might be less effective or, worse, where attribution was exaggerated instead of underreported.
Read the full case study here: Measured – Shinola Case Study
The Bottom Line
Incrementality testing allowed the client to 1) identify the true impact of their Facebook campaigns and verify that the reporting they were receiving from the platform was under-valuing the real impact of their campaigns; and 2) accurately identify where to allocate their marketing budget, focusing on channels that were driving real results.
Measured was approached by Faherty, a family-run clothing and lifestyle brand operating from brick-and-mortar stores and a rapidly expanding e-commerce store on Shopify.
Faherty’s growth was driven primarily by advertising on Facebook and Instagram. They needed to know if they had exhausted these channels and saturated their audience, or if there was still room to scale their Facebook and Instagram advertising.
They also wanted to know how they could continue to drive aggressive growth by diversifying their marketing channels. Faherty was operating on a lean marketing budget and needed to be sure that they were spending appropriately to drive true incremental value while wasting as little as possible.
They also wanted insight and guidance on how to apply the insights gained from incrementality testing to their existing business.
Measured implemented a four-week cross-channel incrementality measuring and reporting framework, to inform Faherty’s choice of media channels and decisions to scale different channels.
They looked at granular incrementality insights for:
For robust measurement and reporting, they implemented a marketing data warehouse (MDW), which acts as a unified data asset and consolidates online and offline transactions, media platform reporting, CRM, LTV, web traffic data, third-party ID mapping, and various incrementality measurement outputs.
Using a cross-channel incrementality measurement and data warehouse approach enabled Measured and Faherty to:
The above led to positive double-digit percentage improvements YoY for all core marketing and business KPIs, with a 53% increase in sales revenue.
Read the full case study here: Measured – Faherty Case Study
The Bottom Line
Incrementality measurement provided an accurate method to analyze a large volume of data, across multiple channels, and accurately measure which channels were the most efficient at providing real results. It also allowed them to assess how much they could scale each channel, without compromising profitability, and put in place a detailed and actionable reporting system.
Incrementality is known as the ‘North Star’ of marketing because it is the best method we have to accurately test and measure the real impact of any marketing activity.
It can provide nuanced and detailed insights into every aspect of your marketing strategy, for your whole marketing portfolio.
Using incrementality testing can help you refine and optimize your marketing to make the most of the opportunities and minimize the risks, through data-driven decision making.
Today, we have covered everything you need to know to understand how incrementality testing works, how to measure incrementality, and what kind of insights you can gain from measuring incrementality in your marketing!
Incrementality in marketing refers to the incremental benefit or incremental lift generated by each action taken as a result of a particular marketing activity. Incremental lift refers to the increased benefits or number of desired outcomes (such as conversions or web visits) experienced as a direct result of a marketing event or activity (such as an ad campaign or promotional offer). Read the full guide to learn more about incrementality in marketing.
Attribution refers to the process of matching two data points, such as clicks to conversions, to assign credit to the interaction or event that led to the conversion. Incrementality, on the other hand, refers to a method of measuring the true impact of any marketing activity. Read the full guide to learn more about incrementality, and how it differs from attribution in marketing.
An incrementality test is an experiment that is designed to measure the true effectiveness of a marketing activity. The target audience is segmented into two groups: a test group and a control group. The test group is then exposed to a marketing activity, while the control group is either not exposed or is exposed to a neutral or unrelated equivalent. The difference between the number of desired outcomes achieved by the test group and the control group demonstrates the true incremental lift/benefit that resulted from the marketing activity in question, and would NOT have occurred in its absence. Check out the full guide to learn more about incrementality testing.
In marketing, incrementality is measured in terms of the incremental lift or benefit generated by a marketing activity, which would not otherwise have occurred organically. Incrementality is measured by running an incrementality test, using a test group and a control group. The difference between the test and control group results demonstrates the true, incremental benefit generated by the marketing activity. The incrementality formula is: test group result, minus control group result, divided by the control group result, equals incremental lift. Incremental lift may be positive, negative, or neutral. Read the full guide to learn more about how to measure incrementality in marketing.
The formula to calculate incrementality is: test group result, minus control group result, divided by the control group result, equals incrementality. Multiply the final result by 100 to convert it into a percentage. For example, if the test group achieved 1000 conversions and the control group achieved 800 conversions, the calculation would be (1000 – 800) ÷ 800 = 0.25. You can then convert the final result to a percentage by multiplying it by 100 and conclude that the marketing activity you tested resulted in a positive incremental lift of 25%. That is the true incremental value of that activity, which would not have occurred without it/organically. Take a look at the full guide to learn more about how to calculate incrementality.
Measured: What’s the Difference between Attribution vs Incrementality?
Meta for Business: The Benefits of Using Incrementality Measurements for Your Business
Nielsen: The Importance of Incremental Lift
Skai: Difference Between Incrementality and Multi-Touch Attribution
Smart Insights: Incremental Measurement – Advantage and Opportunities
Social Media Today: How to Fix Ad Measurement with Incrementality Testing and Experiments
Think with Google: Your Measurement Resolution for 2021: Get a Grip on Incrementality
Towards Data Science: A/B Testing: A Complete Guide to Statistical Testing