What Are A/B Testing Statistics?
A/B testing statistics refers to the statistical model used to run an A/B test (a controlled experiment), which in the app stores means comparing the performance of two variations of an app store page. An A/B test is used to confirm or disprove a hypothesis by exposing only a sample of the entire population in the live store to each variation, then using the observations collected to predict, with a reasonable level of accuracy, how the entire population in the live app stores will behave.
Every statistical model has a number of prerequisites (test parameters) that must be met in order to conduct a reliable test that shows which variation performs better.
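One such test parameter is the sample size needed per variant before results can be trusted. The sketch below shows a common back-of-the-envelope calculation for a two-proportion test; the baseline install rate, target uplift, and z-values are illustrative assumptions, not values from any particular store test.

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect a change from
    conversion rate p1 to p2 (defaults: 95% confidence, 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil(((z_alpha + z_beta) ** 2) * variance / (p1 - p2) ** 2)

# e.g. detecting a lift from a 3% to a 4% install rate
# requires roughly 5,300 visitors per variant:
n = sample_size_per_variant(0.03, 0.04)
print(n)
```

Stopping a test before reaching a sample size like this is one of the most common ways an otherwise sound statistical model produces unreliable results.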
There are three A/B testing statistical methods, each used in different ways. The first is the ‘frequentist’ approach, which ignores any prior findings or knowledge from similar tests and uses only the data from the current experiment.
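A minimal sketch of the frequentist approach is a two-proportion z-test comparing the install rates of two page variants, using only the current experiment's data. The visitor and install counts below are made-up illustrative numbers.

```python
import math

def two_proportion_z_test(installs_a, visitors_a, installs_b, visitors_b):
    """Return (z statistic, two-sided p-value) for the difference in
    conversion rates between variants A and B."""
    p_a = installs_a / visitors_a
    p_b = installs_b / visitors_b
    # pooled rate under the null hypothesis that both variants convert equally
    p_pool = (installs_a + installs_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. variant B converts at 4.0% vs 3.0% for A over 5,000 visitors each:
z, p = two_proportion_z_test(150, 5000, 200, 5000)
print(round(z, 2), round(p, 4))
```

At the conventional 5% significance level, a p-value below 0.05 would lead a frequentist test to declare variant B the better-performing page.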
Why A/B Testing Statistics are Important
Statistics is vital to the process of planning, running and evaluating A/B tests.
Simply put: failing to use the right statistical model when A/B testing is a waste of time and money. Implemented effectively, A/B testing statistics should translate into an increase in installs once the better-performing page is applied to the entire population in the live app stores.
A/B Testing Statistics and ASO
ASO teams need to be aware of the statistical model being used in A/B testing to ensure it is one that can be trusted to serve the purpose of the test. It must produce results that can be implemented with confidence in the real App Store or Google Play Store and deliver tangible benefits. Choosing a statistical method that isn't suited to the metrics being measured can mean enduring the frustrating process of running tests and implementing their results without ever gaining the usable insights that were initially hoped for.