The average online user is exposed to anywhere from 6,000 to 10,000 ads every day.
Sep 27 2021
Incrementality testing has become the best way to evaluate ad spend and eliminate cannibalization.
We know that neither the "first contact" nor the "last contact" model is the answer to attribution accuracy. We also know that not all campaigns or advertising channels are effective in driving positive ROAS. Fractional attribution weights interactions differently: for example, giving 70% of the credit to the first contact and 30% to the last. All of these questions can be summarized as: is my ad driving actual value, or just claiming credit for naturally occurring behavior?
How do we measure the value of individual touchpoints along the path to purchase? That's where incrementality comes in.
Click "Learn More" to drive your apps & games business with ASO World app promotion service now.
Incrementality testing is a mathematical approach to advertising that helps you measure incrementality lift and shows you the true impact of your campaigns.
Incrementality is essentially an A/B test. Standard A/B testing divides your product or campaign into two parts, A and B, and then divides your audience into Audience 1 and Audience 2. You then apply different versions of your product or campaign to different audiences to see which one provides better results.
In the online advertising world, A/B testing can be used to test creative treatments, email subject lines, call-to-action (CTA) phrases, or website pages. In contrast, incrementality testing focuses on lift in key purchase metrics, measured by conversion rate (CVR).
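The random split behind both approaches can be sketched as below. This is a minimal illustration, not the tool any particular platform uses; the function name and 50/50 split are assumptions for the example.

```python
import random

def split_audience(user_ids, test_fraction=0.5, seed=42):
    """Randomly split users into a test group (will see the ads)
    and a control group (will not). Seeded for reproducibility."""
    rng = random.Random(seed)
    shuffled = list(user_ids)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[:cut], shuffled[cut:]

test_group, control_group = split_audience(range(10000))
```

Random assignment matters: any non-random rule (say, splitting by signup date) bakes a systematic difference into the two groups before the test even starts.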
So what does running an incrementality test actually allow you to do?
Incrementality is not about assigning credit to conversions. It's about identifying the interactions that move users from passive to active. Any interaction that affects actual results is identified as an increment.
Incrementality testing provides advertisers with a new holistic view to assess the true value of their app campaigns and provides answers about which campaigns are most effective and which ones generate more sales.
It is a solution to the problem of cannibalization: in the case of retargeting, for example, it addresses marketers' doubts about whether their campaigns will cannibalize organic conversions.
Incrementality testing will be very useful when testing new media channels before deciding whether to invest more. You can also use incrementality testing on smaller media campaigns to see if there is a positive ROAS. If the answer is yes, then you can confidently scale up your marketing efforts for that channel.
When you need to create a re-engagement strategy, incrementality testing can come in handy. Incrementality testing helps highlight the best dates post-installation to re-engage users and ensure the greatest incrementality lift in marketing efforts.
But not only that: the insights gleaned from incrementality testing on target groups and campaigns represent valuable information that can be used as the basis for optimizing the entire paid advertising strategy.
Incrementality testing is the best way to measure the effectiveness of your app marketing efforts - as long as you do it right.
The basic idea is to divide users into two equal groups - a test group and a control group.
One group will see ads for your app, while the other will not. The conversion rates for each group will then be analyzed and the actual causes and effects of your marketing efforts will be understood, allowing you to make better marketing decisions.
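The core calculation is simple: compare the conversion rate of the exposed group with that of the holdout group. A minimal sketch, with made-up numbers purely for illustration:

```python
def conversion_rate(conversions, users):
    """Share of users in a group who converted."""
    return conversions / users

def incremental_lift(cvr_test, cvr_control):
    """Relative lift of the test group over the control group."""
    return (cvr_test - cvr_control) / cvr_control

cvr_test = conversion_rate(450, 10000)     # group that saw the ads
cvr_control = conversion_rate(300, 10000)  # holdout group
lift = incremental_lift(cvr_test, cvr_control)  # 0.5 → 50% relative lift
```

Here the control group's 3% CVR represents conversions that would have happened anyway; only the extra 1.5 points are attributable to the campaign.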
When starting an incrementality test, it's important to define your hypothesis and identify any important business KPIs you want to examine further. Think about what you want to prove with this scientific approach.
For example, do you want to examine the number of installs, ROI, return on ad spend, or different metrics at the same time?
When running incrementality tests on marketing campaigns, select the audience group you want to run this experiment with and make sure you segment a portion of this audience group correctly into a control group.
The control and test groups should have similar but non-overlapping characteristics. This can be tricky for UA (user acquisition) campaigns, because prospective users are anonymous: there is no unique identifier, such as a device ID or user code, to tell one audience member apart from another.
However, you can segment your audience on other dimensions instead, including parameters such as geography, time, product, or demographics.
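One way to split without per-user identifiers is to assign whole geo regions to groups deterministically, for example by hashing the region code. This is an illustrative sketch, not a method prescribed by any ad platform; the function name and salt are assumptions.

```python
import hashlib

def assign_group(region_code, test_fraction=0.5, salt="incr-test-1"):
    """Deterministically bucket a geo region into test or control by
    hashing its code, so no per-user identifier is needed. Changing
    the salt reshuffles the assignment for a new experiment."""
    digest = hashlib.sha256((salt + region_code).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "test" if bucket < test_fraction * 100 else "control"
```

Because the assignment is a pure function of the region code, every system involved in the campaign can recompute it consistently without sharing a user list.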
Next, determine the duration of the test and the test window, then run the test.
Best practices dictate that the duration of your experiment should last at least one week.
The test window, meaning the number of days of user activity measured around the test, depends on your application's business cycle and the amount of data you have to process.
Tests and test windows should be scheduled for periods when the calendar is clear of major events, so the results most accurately represent the effectiveness of your activities.
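Running for at least a week also helps you collect enough conversions to tell real lift from noise. A quick sanity check is a two-proportion z-test on the two groups' conversion counts; this is a standard statistical formula, shown here as a sketch with illustrative numbers.

```python
import math

def two_proportion_z(conv_test, n_test, conv_control, n_control):
    """z-statistic for the difference between two conversion rates.
    |z| > 1.96 roughly corresponds to significance at the 95% level."""
    p_test = conv_test / n_test
    p_control = conv_control / n_control
    p_pool = (conv_test + conv_control) / (n_test + n_control)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_control))
    return (p_test - p_control) / se

z = two_proportion_z(450, 10000, 300, 10000)  # ≈ 5.6, well above 1.96
```

If the statistic is close to zero after a week, the honest conclusion is "no detectable lift yet", and the test needs more time or more users, not a verdict.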
You've just run your first incrementality test - congratulations! But what do you do now?
First, take a close look at your marginal costs. Just because a specific ad generates a boost in installs doesn't mean you should scale it. You need to evaluate whether the lift is worth the cost to achieve it. If not, turn off the ads and try something new.
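Putting a number on "worth the cost" usually means dividing spend by the incremental conversions, not the total conversions the ad claims. A minimal sketch, with hypothetical figures:

```python
def cost_per_incremental_conversion(spend, conv_test, conv_control):
    """Spend divided by conversions the campaign actually caused
    (test-group conversions minus the control-group baseline)."""
    incremental = conv_test - conv_control
    if incremental <= 0:
        return float("inf")  # no real lift: do not scale this ad
    return spend / incremental

# $3,000 of spend, 450 installs in test vs. 300 in an equal-sized control:
cpi = cost_per_incremental_conversion(3000.0, 450, 300)  # $20 per incremental install
```

Note how different this is from naive cost per install: $3,000 / 450 looks like $6.67, but the campaign only caused 150 of those installs, so the true price is $20 each.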
Next, do what you can to combat cannibalization, which you can do in two steps:
First: Assign someone to track and analyze organic results. That way, when the data is needed, you have an advocate for organic performance to counterbalance paid UA.
Second: Authorize this person to stop ad spending when they see signs of cannibalization. This will help ensure that your paid UA strategy is in line with your overall growth goals. Don't blindly rely on paid traffic!
From there, continue your testing efforts to find a solid path forward for your company.
When creating control groups and test groups, it is important to remove any noise or external factors that may affect user behavior. You also need to try to clean up the data and make sure there are no overlapping audiences, as this can also skew the results.
Identifying and excluding outliers is another important step, as they can distort the data and lead to incorrect conclusions. The amount of data determines how much outliers sway the results, so sample size is also an important factor when setting experimental benchmarks.
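One simple way to exclude outliers is a z-score filter: drop any observation more than a few standard deviations from the mean. This is one common technique among several (IQR filtering is another), shown as an illustrative sketch rather than a prescribed method.

```python
import statistics

def drop_outliers(values, z_cut=3.0):
    """Remove observations more than z_cut standard deviations
    from the mean; with little data, review removals by hand."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return list(values)  # all values identical: nothing to drop
    return [v for v in values if abs(v - mu) / sd <= z_cut]

# Fifty typical daily-conversion counts plus one broken tracking spike:
cleaned = drop_outliers([10] * 50 + [10000])
```

As the surrounding text notes, the smaller the dataset, the more a single extreme point skews the mean itself, so thresholds like `z_cut` deserve a sanity check against the raw data.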
a) Seasonality - Response rates for control groups can vary greatly depending on the buying season. For example, if we test on Black Friday, the response rate for the control group may be higher than at other times of the year because users are more likely to buy during that time. This, in turn, can make the measured lift from your campaign look smaller than it really is.
b) Brand awareness - Some products are more familiar to consumers than others. In the world of marketing, familiarity pays off: more recognizable brands will naturally get more attention. In incrementality testing, well-known brands will see lower conversion lift than brands that have never advertised digitally before when launching high-impression campaigns for the first time. This does not mean that a 5% lift for an established brand is less meaningful than a 20% lift for a new one. Success is subjective, so marketers need to analyze their results with future potential in mind, taking into account the economies of scale of new customers' lifetime value, as well as the overall revenue each brings in.
c) External media - You may be running media outside of the test with other programmatic vendors, digital channel partners (social, search), or traditional offline partners (TV, billboards, radio, print). How do you ensure the control group is not being reached by your brand through these other marketing activities? The point of a control group is to be unbiased. If your control group is exposed to other media in an uneven or unknown way, you run the risk of data contamination. It is unrealistic to expect advertisers to shut down all other media for a test, but this outside exposure should be kept in mind when interpreting the results.
Incrementality testing is a great way to better understand your marketing performance and draw stronger conclusions about the value of your marketing investments. However, setting up tests, collecting data and interpreting results does require a significant amount of effort and investment. Take the time to set up a robust test and iterate each round to see if you can improve through experience. No one test or result will tell you everything you need to know, but each should drive learning and help you make impactful changes to your marketing plan.