For app promoters, earning a decent ranking in the app stores means optimizing the metadata displayed on your app's pages to improve visibility and install conversion rates. You might have created a great icon, but you can't trust your instincts alone here, and users' responses aren't always as positive as you expect. This is why an A/B test should be run before making any changes to the listing elements that affect your conversion rate.
Google was a pioneer of A/B testing on the Internet, and it has integrated an A/B testing tool directly into the Google Play Console. Here you can find more information on how to run store listing experiments on Google Play, as well as the best tips for A/B testing. Google Play recently announced monitoring and configuration changes to store listing experiments that give developers more control over A/B testing. ASO practitioners need to stay informed about these new capabilities and understand what they mean.
What changes have been made to Google Play experiments, and what is their impact on how you analyze the statistics? We provide an in-depth reference here.

Click "Learn More" to drive your apps & games business with ASO World app promotion service now.
What new features have been added to Google Play Experiments?

The picture above shows the new capabilities of Google Play experiments. It's important to note that the features described are not yet available to all developers. To optimize your store listing on Google Play, you will no longer need a third-party sample size and test duration calculator: the new parameter configuration produces more reliable test results based on confidence intervals and minimum detectable effects, and the ability to customize experiment settings helps calculate the sample size and completion time.
You can also visit our previous blog post to learn about the ASO A/B test strategy for your app's product page:
ASO A/B Test Strategy: How to Increase Your Conversion Through A/B Testing Your App.
The key metrics to know for an ASO A/B test are:
- Confidence level
- Efficacy
- Minimum Detectable Effect
- Variant number, sample size, and duration of test
What do we do for your app growth?

* Grow with our app growth solutions: choose the guaranteed app ranking service to acquire a TOP 5 app ranking and maximize your app traffic.
* Or click "Promote Now" above to increase app installs, or use the keyword installs and app reviews and ratings services for app visibility.
Confidence level
There is an important difference between the confidence level and the confidence interval. The confidence level indicates how confident you are that if you repeated the test, you would get the same results. Confidence level values are expressed as percentages (for example, a 90% confidence level). Alternatively, when performing a hypothesis test, the significance level is sometimes mentioned; it is equal to 1 minus the confidence level. For example, if the confidence level is 90%, the significance level is 10%.
The confidence interval is the range of results that we expect to contain the true value. For example, a 90% confidence interval is a range of values that you can be 90% certain contains the true mean.
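As a rough illustration, here is a minimal Python sketch (standard library only) that computes a normal-approximation confidence interval for a store listing conversion rate; the visitor and install counts are hypothetical, not taken from any real experiment.

```python
from statistics import NormalDist

def conversion_confidence_interval(installs, visitors, confidence=0.90):
    """Normal-approximation (Wald) confidence interval for a conversion rate."""
    p = installs / visitors                          # observed conversion rate
    z = NormalDist().inv_cdf(0.5 + confidence / 2)   # ~1.645 for a 90% two-sided CI
    margin = z * (p * (1 - p) / visitors) ** 0.5
    return p - margin, p + margin

# Hypothetical example: 2,000 installs from 10,000 store listing visitors
low, high = conversion_confidence_interval(2_000, 10_000, confidence=0.90)
print(f"90% CI for the conversion rate: {low:.3f} to {high:.3f}")
```

A wider interval simply means less certainty about where the true conversion rate lies.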
Efficacy
Efficacy (statistical power) is the probability of correctly rejecting the null hypothesis when it is indeed false. The higher the power, the more confidently you can conclude that the null hypothesis is false. On the other hand, when the power of the experiment is insufficient, what usually happens is that we fail to reject the null hypothesis even though it is actually false. How can we avoid this mistake and make sure we don't get a false negative result?
First, increase the sample size: 15,000 users are more reliable than 500 users, right? Second, reduce the number of variants; ideally, you will have one control variant plus one test variant. You can also consider A/B/B testing, which, in our experience, minimizes the likelihood of false positive results.
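To see why sample size matters so much, here is a minimal sketch (standard-library Python, with hypothetical numbers matching the 500 vs. 15,000 figure above) that approximates the power of a two-proportion z-test for detecting a lift from a 20% to a 22% conversion rate.

```python
from statistics import NormalDist

def approx_power(p_control, p_variant, n_per_variant, confidence=0.90):
    """Approximate power of a two-sided, two-proportion z-test."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(0.5 + confidence / 2)
    se = ((p_control * (1 - p_control) + p_variant * (1 - p_variant)) / n_per_variant) ** 0.5
    return nd.cdf(abs(p_variant - p_control) / se - z_alpha)

# Hypothetical lift from a 20% to a 22% conversion rate
for n in (500, 15_000):
    print(f"{n:>6} users per variant -> approximate power: {approx_power(0.20, 0.22, n):.2f}")
```

With only 500 users per variant the test would miss this lift most of the time, which is exactly the false-negative risk described above.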
Minimum Detectable Effect
In an experiment, the MDE is the smallest relative change in conversion rate that you are interested in detecting. In other words, it is the hypothesized conversion-rate lift between the control and the variant groups that the test is designed to detect.
For example, if the baseline conversion rate is 20% and the MDE is set to 10%, the test will detect any change that moves the conversion rate outside the absolute range of 18% to 22% (a relative change of 10% corresponds to a 2 percentage point absolute change in conversion rate). In addition, the smaller the MDE, the larger the sample size required to reach significance.
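Continuing that example, here is a minimal sketch (standard-library Python; the 90% confidence and 80% power settings are our assumptions, not Google's defaults) of how the required sample size per variant can be estimated from a baseline conversion rate and a relative MDE.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, confidence=0.90, power=0.80):
    """Approximate users needed per variant (two-sided two-proportion z-test)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(0.5 + confidence / 2)
    z_beta = nd.inv_cdf(power)
    target = baseline * (1 + relative_mde)        # e.g. 20% * 1.10 = 22%
    variance = baseline * (1 - baseline) + target * (1 - target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (target - baseline) ** 2)

# Baseline 20% conversion rate, 10% relative MDE (detecting a move to about 22%)
print(sample_size_per_variant(0.20, 0.10), "users per variant")
```

Halving the MDE roughly quadruples the required sample, which is why very small lifts are so expensive to detect.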
Variant number, sample size, and duration of test
Time is money in A/B testing, and we want to get test results as quickly as possible (preferably within 7-14 days). We also want our test to produce a clear winner. Therefore, we should focus on planning, setting goals, forming strong hypotheses, and evaluating the app's or game's metrics before we start experimenting. The factors that largely determine how the experiment runs are the number of daily visitors, the sample size, the baseline conversion rate, and the number of variants we want to test. All of these affect the duration of the test; the sketch below gives a rough sense of how.
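This is only a back-of-the-envelope estimate, assuming listing visitors are split evenly across the control and the variants; real calculators, including Google's, may weight traffic differently, and the figures used here are hypothetical.

```python
from math import ceil

def estimated_test_days(sample_per_variant, num_arms, daily_visitors):
    """Rough days needed to collect enough users, splitting traffic evenly across arms."""
    return ceil(sample_per_variant * num_arms / daily_visitors)

# Hypothetical figures: 5,000 users per arm, a control plus one variant, 1,000 visitors/day
print(estimated_test_days(5_000, 2, 1_000), "days")
```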
When testing a localized listing, if the market has a low conversion rate and not enough daily visits, the test will keep showing "need more data" for a long time, or indefinitely. ASOWorld can provide you with a keyword installs service and a reviews and ratings service, which will help your app earn more impressions in the app store and increase downloads. Then you can reach a large enough user base to run an A/B test.
Let's say we have a 19% conversion rate in a country and an average of about 600 visitors per day. According to the calculator, we need more than 9,000 users per variant, and it takes 22 days to complete the test. This test is going to be problematic, right? Not necessarily. While it's important to have a good conversion rate and a large number of visitors when testing, sometimes you may be surprised. What happened to us a few times was that even with a conversion rate of about 20% and 1,000 daily visitors, the experiment was able to give results in about 14 days.
It also helps to test large differences in graphic elements rather than minor changes to individual elements. If you run paid campaigns in the market where you want to A/B test, you might want to account for the paid-traffic lift and analyze the organic lift separately when measuring results. All in all, such experiments don't have to take 22 days to finish, and you can influence conversion rates by running the right tests. That doesn't mean you should test every country, but if your target market needs improvement, you definitely want to give it a try.
Google Play A/B Testing Best Practices
However, there are many cases where tests have been run improperly. We have summarized the best practices for running Google Play Store Listing Experiments correctly to increase conversions.
- Determine a clear objective
- Target the right people
- Test just 1 variant at a time
- Test 1 element at a time
- Prioritize visuals over words
- Don’t test worldwide
- The experiments should use the largest possible test audience
- Test for at least 7 days, even if you have a winning test after 24 Hours
- Run A/B/B tests to flag false positive results
- Don’t apply a Google Play winning test on iOS
- Monitor how your installs are affected after applying a winning test
A/B testing is an essential part of an App Store Optimization strategy. For Android developers, Google Play Store Listing Experiments is a prominent tool for store listing A/B testing, and it is free. It enables developers to run well-designed, well-planned A/B tests to find the most effective graphics and localized text, which can lead to a higher conversion rate and more downloads.
*You can also visit our post on A/B testing for the updated iOS 15 product page:
iOS15: A/B Testing Your Product Page.
The best combination of these metrics to get valid experiment results
In the new Google Play experiments, as part of the experiment setup wizard, you'll need to configure your tests by selecting a few parameters:
- Minimum Detectable Effect (MDE) = the minimum "lift" you are willing to accept as grounds for declaring a test winner (i.e., rejecting the null hypothesis that "there is no meaningful difference in performance between test variants"). The higher this value, the fewer samples the test requires, because the test is less sensitive and will only detect meaningful lifts, although it is also more likely to conclude that no lift was found. The sketch after this list illustrates the trade-off.
- Confidence level = probably the better-known factor; it is essentially the likelihood that the result is not an error. At a 90% confidence level, roughly 1 out of 10 experiments will report a false positive.
- Experiment objective = the metric used to measure the performance of each variant; you need to choose between first-time installs and installs retained for at least one day (D1).
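As a rough illustration of how these settings interact, the sketch below reuses the sample-size approximation from earlier (our own assumption, not Google's internal formula) to show that a higher MDE or a lower confidence level shrinks the required sample; the 20% baseline and 80% power are hypothetical.

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, confidence, power=0.80):
    """Approximate users needed per variant (two-sided two-proportion z-test)."""
    nd = NormalDist()
    z = nd.inv_cdf(0.5 + confidence / 2) + nd.inv_cdf(power)
    target = baseline * (1 + relative_mde)
    variance = baseline * (1 - baseline) + target * (1 - target)
    return ceil(z ** 2 * variance / (target - baseline) ** 2)

baseline = 0.20  # hypothetical baseline conversion rate
for confidence in (0.90, 0.95):
    for mde in (0.05, 0.10, 0.20):
        n = sample_size_per_variant(baseline, mde, confidence)
        print(f"confidence {confidence:.0%}, MDE {mde:.0%}: ~{n} users per variant")
```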
How could ASOWorld help you?
The new Google experiments features really do give you more control and more data for each of your Google experiments. If you invest the time to configure tests based on your app's or game's unique performance characteristics, you can run tests that should be more accurate than the previous version of experiments.
But not every marketing team has consistent access to good data scientists and statisticians to help with test configuration, and failing to configure test parameters correctly almost always leads to wrong results. If you need help, you can consult an ASOWorld expert for free.
Google experiments can still go wrong, so besides relying on the results of an experiment to make decisions, how do you verify that your metadata is right for your app? You can run A/B tests using the new experiments offered by Google Play, or you can try our ASO service, which optimizes your metadata (screenshots, icons, videos, descriptions, titles, and more) to provide you with a comprehensive ASO strategy.
This change may affect ASO on Google Play, which in turn will affect developers' release plans. Fortunately, experts from our ASOWorld team are already actively following this update, so if you need help, please consult our experts, and don't forget to follow our blog.