Useful Blogs for App Promotion
The average online user is exposed to anywhere from 6,000 to 10,000 ads every day.
Mar 29 2022
Since Apple introduced Product Page Optimization (PPO) in the App Store in 2021, iOS app promotion has gone through a period of experimentation, and marketers have run into many challenges along the way. This article discusses those challenges and possible solutions.
Unclear data, clunky icon tests, and long waits for conclusive results have all become obstacles for mobile marketers trying to drive overall marketing performance through product page optimization. "Barely usable" is the verdict many marketers share on this App Store feature.
The state of Product Page Optimization (PPO) A/B testing has become one of the most talked-about topics in ASO panels and forums.
After spending several months figuring out how to use PPO to reliably increase App Store conversions, it has become clear that app developers face some fundamental challenges.
We'll walk through the challenges we've identified, which we hope will clarify what's holding you back from consistently increasing App Store conversions through PPO. With ASO teams under growing pressure to improve conversion rates, and more tools becoming available, this should also help marketers find inspiration and effective solutions.
Unable to analyze results with GEO
Cannot run multiple tests at the same time
New version release ends running tests immediately
Clunky icon testing
Unclear test data
PPO requires a very large sample size
1) Unable to analyze the results on a GEO basis
While you can choose which countries and localizations your PPO A/B tests run in, you can't analyze the results at the GEO level. You only see aggregated results, no matter how much the tested localizations differ. When impressions, downloads, conversion rates, lifts, and confidence levels from very different audiences are pooled together, the test results can become almost meaningless.
Based on our testing, we clearly found that audiences in different countries have different preferences when deciding whether to download apps/games based on App Store creatives and messaging.
PPO testing in multiple countries does not reveal these different preferences, making it impossible to make informed decisions about which product page creatives should apply to which countries.
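To see why aggregation is a problem, here is a hypothetical Python illustration (all numbers invented) of how per-country results can even reverse when pooled, a classic Simpson's paradox: the variant wins in every country, yet loses in the aggregate view that PPO effectively shows you.

```python
# Hypothetical per-country A/B results: {country: {variant: (impressions, downloads)}}
results = {
    "US": {"control": (10_000, 500), "variant": (50_000, 3_000)},
    "JP": {"control": (50_000, 7_500), "variant": (10_000, 1_600)},
}

def cvr(impressions, downloads):
    """Conversion rate: downloads per impression."""
    return downloads / impressions

# Per-country view: the variant wins in BOTH markets.
for country, data in results.items():
    c, v = cvr(*data["control"]), cvr(*data["variant"])
    print(f"{country}: control {c:.1%} vs variant {v:.1%}")

# Aggregated view: pooling markets with different baseline CVRs and
# uneven traffic splits flips the conclusion.
agg = {}
for variant in ("control", "variant"):
    imp = sum(results[c][variant][0] for c in results)
    dl = sum(results[c][variant][1] for c in results)
    agg[variant] = dl / imp
print(f"aggregate: control {agg['control']:.1%} vs variant {agg['variant']:.1%}")
```

Here the variant beats the control in the US (6.0% vs 5.0%) and in Japan (16.0% vs 15.0%), but because most of its traffic came from the lower-converting US market, the aggregate shows it losing badly.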
So the solution is to run an a/b test for each country? Let's continue our topic and find the answer later...
2) Cannot run multiple tests at the same time
The current PPO implementation does not allow multiple tests to be run at the same time, so if you need to improve conversion rates in many different countries and markets on the App Store, you must do this sequentially.
Since each test needs time to produce results, running tests for multiple markets can take months, if not longer, before you can build on the initial findings in each country.
3) New version release ends running tests immediately
To make things even more complicated, every time your team submits a new version of your app to the App Store, whether it's a bug fix or a normal version update, your PPO testing ends immediately.
Many teams have two-week update schedules, and some even have 7-day cycles. It's genuinely hard to leave enough time to run tests once you factor in the bugfix releases that get submitted from time to time.
Product teams that need to push a new version to the App Store will almost always prioritize the release over a running PPO A/B test, given the impact on user experience; slow iteration cycles can even affect the app's ranking in the App Store.
4) Clunky icon testing
If you want to optimize your conversion rate through app icon A/B testing, you will need to include the icon to be tested in your app's binary in your next release submission. You may then be surprised when a variant other than the control performs better and you hit "Apply".
What actually happens is that all the other creatives of the winning variation are applied and become your default product page, except for the icon.
The icon shown remains whichever asset you designated as the "default icon" in your app version's binary, which is also the icon users see on their home screen after downloading the app. Clicking "Apply" doesn't change this, because the winning icon was never specified as the default in the current version's binary.
In order to actually implement the winning icon, you will need to submit another version of the application and specify the new icon as the default icon in the binary.
So, with two app release submissions bracketing a single PPO A/B test, properly testing one icon takes weeks, and that's only if you can actually plan enough testing time between releases.
5) Unclear test data
All results are based on a metric called "estimated CVR". There is no documentation of how it is calculated, and the only raw data available is a rounded impression total.
This means that you can't analyze the actual raw data to understand which variation is performing best, and you need to rely on unverifiable data.
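As a sketch of what you could do if Apple exposed real raw counts (all numbers here are hypothetical), a standard two-proportion z-test turns per-variant impressions and downloads into a significance figure you can verify yourself, using only the Python standard library:

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates,
    using the pooled standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def p_value_two_sided(z):
    """Two-sided p-value from the standard normal CDF (via erf)."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical raw counts: 3.0% vs 3.4% CVR on 30k impressions each.
z = two_proportion_z(conv_a=900, n_a=30_000, conv_b=1_020, n_b=30_000)
print(f"z = {z:.2f}, p = {p_value_two_sided(z):.3f}")
```

With raw counts like these, a 0.4-point CVR difference on 30,000 impressions per variant is already statistically significant; with only a rounded impression total and an opaque "estimated CVR", no such check is possible.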
6) PPO requires a very large sample size
Unless tests garner hundreds of thousands, or even millions, of impressions, the confidence levels remain too low to draw any conclusions from the test at all.
This is due to the statistical model chosen and its current implementation in PPO A/B testing, as evidenced by multiple reports in ASO discussion groups of extremely low confidence levels even after tests have run for several weeks.
This makes average test results insufficient for deciding which creatives will convert more of your desired audience. In some cases, tests reach only a 1%-5% confidence level, which means the "winning" variant may well perform differently than expected. If that difference is negative, your decision to apply it could have disastrous results, costing tens of thousands of potential downloads.
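To get a feel for the sample sizes involved, here is a back-of-the-envelope estimate using the standard normal-approximation formula for comparing two proportions; the baseline CVR, target lift, and confidence/power levels are assumptions for illustration, not PPO data.

```python
from math import ceil

def required_n(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate impressions per variant needed to detect a relative
    lift from p_base to p_base * (1 + lift), at ~95% confidence
    (z_alpha = 1.96, two-sided) with ~80% power (z_beta = 0.84)."""
    p_new = p_base * (1 + lift)
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p_new - p_base) ** 2)

# Detecting a 10% relative lift on an assumed 3% baseline CVR:
print(required_n(p_base=0.03, lift=0.10))
```

With these assumptions, each variant needs on the order of 50,000 impressions, so a control plus three treatments approaches a quarter of a million impressions, which is why smaller apps struggle to reach conclusive results before the next release kills the test. Smaller lifts inflate the requirement further.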
Make sure your assets are ready before release so you can start testing right away, and make sure you don't have another release planned for the next 2-3 weeks.
Development overhead and the inability to know which type of user is in the test make icon testing extremely challenging right now (e.g., browsing users who see your icon tend to convert at a low rate by nature; if one variant receives a higher share of browsing traffic, the test itself is biased).
Given that many tests fail to achieve high enough confidence levels, great care should be taken when deciding whether to apply the results.
To be more confident in your decision to apply a new App Store creative, and to make sure you don't damage your conversion rates, savvy marketers should weigh additional data points.
For experienced mobile marketing and ASO teams, PPO testing in its current implementation is far from a complete solution. To work around these problems, an intelligent tool with more complete data monitoring can help, and more effective optimization strategies and forward-looking workarounds matter just as much.
All content, layout, and frame code of the ASOWorld blog sections belong to the original content and technical team. Any reproduction or reference must clearly indicate the source and link; otherwise, legal responsibility will be pursued.