Useful Blogs for App Promotion
Sep 26 2021
Apple has announced a series of upcoming changes and improvements to the App Store that will help developers better target their apps to users, get their apps discovered by more people, and even highlight in-app events to attract new users and bring existing users back. The App Store will let you A/B test the visual elements of your app's product page to improve your app download conversion rate.
App Store A/B testing will help you improve your ASO strategy: you will be able to try different app icons, screenshots, and previews (videos) on your App Store product pages, then compare the performance of each version to see what converts best.
The App Store's new A/B testing feature will enable you to test up to three different treatments. You will see the results of key ASO KPIs, such as impressions and conversion rate, in the App Analytics section of App Store Connect, and you will be able to compare them to the performance of the original product page.
However, Custom Product Pages (CPPs) will not include native A/B tests; mobile marketing teams will have to run those tests manually. As a reminder, under the new setup, if you drive all paid UA traffic to one of the 35 available Custom Product Pages, the remaining organic traffic (browse and search) will land on your default product page.
To learn more about iOS 15 mobile growth, see our previous article "How iOS15 new features drive mobile growth?".
Developers will soon have a simpler way to test changes to their creative sets. Historically, iOS developers have had to fully deploy creative changes to the App Store and then compare traffic before and after deployment. This process does not always provide the best insight, as trends and traffic can change significantly between the pre- and post-deployment periods. Product Page Optimization will allow A/B tests to run in real time directly against the app's default product page.
Using Product Page Optimization, developers can test variations of application icons, screenshots, and preview videos. Apple will allow three different treatments to be applied at once, with a maximum duration of 90 days for each test.
While it may be tempting to test as many changes as possible, Apple recommends limiting the number of creative changes per test - a best practice for any iteration of testing. By limiting variables, it's easier to isolate which specific changes will lead to the best results. To see performance and compare it to a baseline, developers can access these tests in App Analytics.
An exciting new feature for developers in iOS 15 is Custom Product Pages. Custom product pages are a novel way to curate your app's listing so that it stays relevant to different demographics at the same time. This new tool will allow developers to show the most relevant positioning to different groups of users, all pointing to the same app.
Apple will allow up to 35 unique custom product pages, each providing insight into various features through different screenshots, app previews and promotional text. Developers can then drive traffic to the appropriate product page by using unique URLs that will be placed in relevant ads or third-party testimonials.
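Those unique URLs are built by appending a page identifier to the app's standard App Store link. A minimal sketch in Python, assuming Apple's documented scheme of a `ppid` query parameter carrying the page's UUID from App Store Connect; the app ID and UUID below are placeholders:

```python
from urllib.parse import urlencode

def cpp_url(app_id: str, ppid: str, base: str = "https://apps.apple.com/app") -> str:
    """Build a link to one specific custom product page.

    app_id: the numeric App Store ID (placeholder below).
    ppid:   the custom product page's UUID from App Store Connect.
    """
    return f"{base}/id{app_id}?{urlencode({'ppid': ppid})}"

# Placeholder IDs for illustration only
print(cpp_url("1234567890", "45812c9b-c296-43d3-a6a0-c5a02f74bf6e"))
```

Because each of the up to 35 pages gets its own `ppid`, an ad network placement or third-party testimonial link can point to exactly one variant of the listing.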
Performance data for custom product pages will also be available in App Analytics. The targeted page view will provide actionable data on impressions, app downloads, app install conversion rate and retention rates.
In addition to giving developers more tools to optimize their performance, the new metrics will let them understand performance in ways App Analytics could not support before. Unlike Google, which changed many of its reported metrics with the new Google Play Console in 2020, Apple will keep familiar metrics such as impressions and app units while adding new ones.
Developers will now have a dedicated pre-order dashboard separated by familiar metrics such as territory and device type - device type now includes macOS for compatible apps. On the metrics tab, pre-orders can be viewed on a line graph over time to see engagement at the start of the pre-order, up to the day before release.
Revenue is another new metric developers can view through App Analytics, alongside "Sales" and "In-App Purchases". Further insight into how an app earns can help developers better understand how to optimize for higher purchase rates.
Apple will also add "Updates" and "Redownloads" to App Analytics, both of which were previously unreported metrics. This will let developers view not only first downloads through "App Units" but also total downloads, and distinguish between new and returning users. Developers will ultimately have insight not only into new user engagement, but also into ongoing engagement from existing users who update or re-download the app.
To learn more about user engagement strategy, see our previous article "How Mobile User Engagement Strategy Drive Your App Business?".
Since May 2015, the Google Play Store has allowed developers to run A/B tests through Google Play Experiments, giving them a valuable tool for developing their app store optimization strategy. With A/B testing arriving on iOS 15, iOS developers will soon no longer need to deploy new versions and compare before-and-after metrics to measure how product page changes affect conversion rate.
However, as exciting as it sounds, Apple's A/B testing platform won't be identical to Google's, and the resulting uncertainties could easily lead to complications. It's important to address them early and plan accordingly. The ASOWorld tech team outlines seven fundamental differences below to help you prepare.
A/B testing is a quantitative research method, which means the statistics behind each test are its lifeblood. If you collect and analyze enough data, you will get reliable results that scientifically validate or reject your hypotheses. The ideas behind these hypotheses are ultimately what drive CRO. That's why your entire CRO approach will change if A/B testing on the App Store yields a different amount and form of data than the Play Store.
Here are the most important differences you should be aware of.
Google provides a 90% statistical significance level for test results, while it remains unclear whether Apple will offer anything comparable. In fact, it is unknown whether Apple will provide confidence intervals at all, as its announcement does not (yet) mention the topic.
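If Apple ends up reporting only raw impressions and conversions, you can compute significance yourself. A sketch of a two-proportion z-test using only the standard library; the `cvr_significance` helper is hypothetical, not something either console exposes, and a p-value below 0.10 corresponds to Google's 90% level:

```python
import math

def cvr_significance(conv_a: int, imp_a: int, conv_b: int, imp_b: int) -> float:
    """Two-sided p-value for the difference in conversion rate between
    two variants, from conversions and impressions of each.
    A result below 0.10 roughly matches Google's 90% significance level.
    """
    p_a, p_b = conv_a / imp_a, conv_b / imp_b
    # Pooled proportion under the null hypothesis of equal CVRs
    p_pool = (conv_a + conv_b) / (imp_a + imp_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imp_a + 1 / imp_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

For example, 100 conversions on 1,000 impressions versus 200 on 1,000 yields a p-value far below 0.10, so that difference would clear Google's bar.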
Google shows a chart that continuously maps each test variant's performance over time, and you can view past performance for more detailed insights if needed.
Apple, on the other hand, has given no indication that it offers a similar feature. It only mentions that you can "compare performance [of test variants] against your original product page throughout the duration of your test". Whether this comparison covers both current and past performance is yet to be determined.
Google only shows the differences in installs, scaled installs, and conversion rate (CVR) between test variants, so you can only estimate how well or poorly they compare to each other. Apple, by contrast, will display the traffic size (impressions) and the exact CVR for each variant on top of the CVR differences (improvements). This leaves it up to you to investigate individual variants or compare between them.
On Google Play, all localizable assets other than App Title/Name are eligible for A/B testing. More importantly, they include visual and textual elements. This provides enough capacity for an extremely versatile CRO strategy.
In contrast, only visual assets on the App Store have been identified as eligible for A/B testing. They are Icon, Screenshots, and Preview (Videos). No further insight was provided on text assets.
However, comparing the assets allowed in Product Page Optimization and Custom Product Pages, there may be no testable text assets at all: Apple mentions promotional text in connection with the latter, but not the former.
Of course, this has yet to be confirmed. However, when in doubt, it's safest to prepare for the worst-case scenario. If this proves to be true, you will have a much narrower selection of assets to choose from in A/B testing on the App Store compared to the Play Store.
This also relates to the relationship between custom app store presence and A/B testing: while you can certainly run experiments on custom Play Store listings, it remains unclear whether the same is true of Custom Product Pages. If it were, the possibilities for iOS CRO would expand greatly, for example through something like "custom product page optimization", but that's a big "if".
In fact, some evidence suggests this is unlikely. As mentioned earlier, promotional text demonstrates the disconnect between Custom Product Pages and Product Page Optimization: it can be customized but not A/B tested. Another sign is the app icon, which is testable but not customizable. If A/B tests could run on Custom Product Pages, the App Store would become intolerably inconsistent.
Here's the problem: it doesn't sound like Apple allows any crossover between the two features. So it's safe to say we shouldn't expect A/B testing on Custom Product Pages.
While promotional text and icons mark the differences between the two features, they do overlap on screenshots and previews. With any luck, Apple will let us test one or both of these assets on a Custom Product Page. As far-fetched as this sounds, it's still a logical possibility that shouldn't be dismissed prematurely.
An icon is an important visual asset of an application that expresses its sense of identity. If anything, it should be a consistent asset across app stores. Unfortunately, this is not the case.
First, you are free to upload new icons for testing on Google Play only, but not on the App Store. Apple will require that all variants of the Icon (on the device and in the store) be added to the app binary beforehand and reviewed with the new app version. Therefore, if they are rejected or delayed, A/B testing will be affected.
Second, again related to app binary requirements, while Google allows for independent changes between on-device and in-store assets, Apple will enforce some level of icon dependency between them. Specifically, if you apply an App Store icon variant through Product Page Optimization, the matching on-device icon variant will automatically replace the original icon.
Localization is an important part of ASO, and especially of CRO, which is why it's critical to be able to run A/B tests on localized product pages. Most importantly, the more localization tests you can run simultaneously, the more efficiently you can scale your overall CRO.
On Google Play, up to five such tests can run at once (not counting custom store listings). On the Apple App Store, the number is unknown. If it is the same, you can scale CRO the same way on both stores with little adjustment. If one store allows significantly more simultaneous tests than the other, you'll need to plan ahead and account for the difference to make informed decisions.
Why does this matter? Two factors.
The more you can scale CRO in an app store, the faster you learn what works and what doesn't, and the more informed your decisions become. That's why different scalability can lead to different levels of CRO performance. Ultimately, asset production, human resources, time, and effort will have different value on each store and should be managed accordingly.
Imagine that you have 3 months to test 10 localizations of all assets before launching a critical and relevant campaign. For the same idea or hypothesis, you would need two rounds of 5 localization experiments in the Play Store to cover all content. If each round takes three weeks to produce statistically significant results, then you would need 1.5 months to complete a full series.
That means a three-month timeframe would allow testing of two ideas before release. If Apple allows fewer than five tests to be run simultaneously under the same conditions, then 10 localizations for three months is too many. You either need to test fewer ideas, you need fewer localizations, or you need more time.
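This scheduling arithmetic generalizes to any concurrency limit. A small illustrative helper, with the worked example's numbers (10 localizations, a limit of 5 concurrent tests, 3-week rounds, a 12-week window) as a sanity check:

```python
import math

def ideas_testable(localizations: int, concurrent_limit: int,
                   weeks_per_round: int, total_weeks: int) -> int:
    """How many ideas can be fully tested before a deadline.

    Each idea needs ceil(localizations / concurrent_limit) rounds,
    and each round takes weeks_per_round to reach significance.
    """
    rounds_per_idea = math.ceil(localizations / concurrent_limit)
    weeks_per_idea = rounds_per_idea * weeks_per_round
    return total_weeks // weeks_per_idea

print(ideas_testable(10, 5, 3, 12))  # -> 2, matching the example above
```

If Apple's limit turns out to be 3 concurrent tests instead of 5, the same inputs drop to a single idea per quarter, which is exactly the planning gap the text warns about.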
The degree of flexibility you have in setting up A/B tests will also vary from one app store to another. In fact, the difference will be caused by two factors.
While Google allows A/B testing for visual and text assets, Apple will only let you test the former. This means that even if you can turn an idea, concept, message, or hypothesis into an appealing copy, you can't set it up as a test. That's only half as flexible as Google Play.
Test setups on Google Play can become more flexible when multiple assets are combined together. Specifically, you may have text-text, visual-visual, or visual-text combination experiments. On the App Store, it's also possible to combine multiple assets in a test (otherwise Apple wouldn't suggest we limit them). But the best thing you can do is to set up only visual-visual combo experiments.
So what do we do when we're not so flexible? Add depth. Not using text assets for testing means testing more with visual assets.
To learn more about iOS 15 Custom Product Pages, see our previous article "How to prepare your App Store custom product pages (CPPs) for iOS 15 ASO?".
A/B testing involves more than just the technical factors that affect CRO. Beyond a test's design, setup, and hypotheses, it must also cover creative ideas, concepts, and stories. These translate into tangible app store assets, which you must upload before any test can run. That's why it's important to have enough creative freedom for bold, innovative ideas to be tested and, in turn, lift CVR.
Of course, Apple and Google allow varying degrees of this creative freedom. In the Play Store, test assets never seem to be subject to Google's scrutiny. All policies and metadata guidelines apply to product details, not product detail experiments. So, it's safe to say that as long as you don't apply a "risky" variant, you can test almost anything with any qualified asset without restriction.
Separating the idea from its implementation is not possible on the App Store. According to Apple's announcement, each test variant will be reviewed independently. If you test a brave but risky idea and the variants are rejected, you'll never know whether the idea itself worked, only that the implementation did not. Plus, your testing schedule will be delayed. That's why you'll have less creative freedom in App Store A/B testing than in the Play Store.
Your CRO strategy should be more conservative in the App Store and more aggressive in the Play Store. This means:
Store listing experiments should be the testing ground for bold and risky ideas because they allow a higher degree of creative freedom. Over time, you can identify which ideas are good for CRO. Then you can test and learn which methods are available for presenting those ideas publicly, whichever methods Google allows you to use.
These learnings can then be consolidated with Product Page Optimization. If an asset iteration passes Google's review, then given the similar guidelines and policies, Apple will likely accept it as well. This is how you can test bold ideas on the App Store with minimal risk of rejection.
While waiting for Play Store A/B tests to "lead the way," App Store A/B tests can be run independently to "find the way." Start with safe ideas, then move on to riskier ones until you approach the "restricted area." CRO will take longer, but at least you won't get stuck waiting for Android test results or be misled by untested assumptions.
Audience allocation between the current version and variants. When deciding what portion of traffic to allocate to each variant, keep in mind your current conversion rate, the hypothesis being tested, the elements involved, and the statistical significance level (currently 90% in Google Play Experiments).
Test duration. Let the test run for at least 7 days to avoid seasonal spikes that could skew your results, and keep it running until a sizeable audience has entered every variant.
Minimum installs per variant. To get meaningful data from A/B test results, pay attention to the required number of installs, which is closely tied to the app's daily install rate.
Test localization. It is highly recommended to localize your split tests to a specific country. Worldwide/global experiments can be misleading, as creatives perform differently in each country.
Small, isolated changes may not produce much learning on their own; still, every element of your idea should eventually be tested so you can continually improve your store page.
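The minimum-sample guidance above can be roughed out with the standard two-proportion sample-size formula. A sketch assuming a 90% significance level (z = 1.645) and 80% power (z = 0.84); both helpers are illustrative, since neither store publishes its internal thresholds:

```python
import math

def min_visitors_per_variant(base_cvr: float, lift: float,
                             z_alpha: float = 1.645, z_beta: float = 0.84) -> int:
    """Approximate page visitors (impressions) needed per variant to
    detect an absolute CVR lift, e.g. base_cvr=0.03, lift=0.005
    for a 3% -> 3.5% improvement.
    z_alpha=1.645 ~ 90% two-sided significance; z_beta=0.84 ~ 80% power.
    """
    p1, p2 = base_cvr, base_cvr + lift
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / lift ** 2)

def days_needed(visitors_per_variant: int, variants: int,
                daily_visitors: int) -> int:
    """Rough test duration given the app's daily product-page traffic."""
    return math.ceil(visitors_per_variant * variants / daily_visitors)
```

Detecting a half-point lift on a 3% baseline takes roughly 15,000 visitors per variant, which is why the daily install (and traffic) rate mentioned above effectively sets your test calendar.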
These are just a few of the new features and capabilities announced at WWDC21. A/B testing has historically been a more complex task on the iOS App Store than Google Play's experimental capabilities - these new tools will be the improvements iOS developers have been waiting for.
Performance data accessible from App Analytics, from product page optimization and custom product pages, as well as user analytics and activity data, will provide developers with the insight they need to continue to grow.
We test to learn which ideas can improve CVR, then learn how to implement those ideas without violating policy. With the differences listed above in mind, you can plan ahead for the iOS 15 A/B testing model.
All content, layout, and frame code in ASOWorld blog sections belong to the original content and technical teams. Any reproduction or reference must prominently cite the source with a link; otherwise legal responsibility will be pursued.