What is Statistical Significance?
When a test result is said to be statistically significant, it means the best-performing variation can be declared the winner and served to all users going forward. More precisely, it means that the probability that the winning variation outperformed the rest purely by chance is small in some well-defined sense (the exact interpretation depends on the statistical engine driving the A/B test).
How do you reach statistical significance?
In a classic testing scenario, a marketer chooses one variation as the control and splits traffic randomly between it and another variation. Upon reviewing the data, the A/B test stats engine declares whether the difference in performance, given the amount of data collected, is large enough to be statistically significant (usually, that there is less than a 5% chance the difference is due to pure chance). Only then is a winner declared and the result generalized to the entire population of visitors.
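As an illustration, the significance check described above can be sketched as a two-proportion z-test, a common frequentist approach; the actual stats engine behind a testing tool may use a different method, and the conversion counts below are purely hypothetical:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # pooled conversion rate under the null hypothesis of "no difference"
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal distribution
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical data: control converts 200/4000, variation 260/4000
z, p = two_proportion_z_test(200, 4000, 260, 4000)
print(f"z = {z:.2f}, p = {p:.4f}, significant at 5%: {p < 0.05}")
```

Here the p-value falls below the conventional 5% threshold, so the difference would be declared statistically significant and the result generalized to all visitors.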
How important is statistical significance when A/B testing?
Marketers must keep in mind that the moment you pick a variation, you are generalizing the metrics collected so far to the entire population of potential visitors. This is a significant leap of faith, and it has to be done in a statistically valid way; otherwise, you are bound to make bad decisions that will harm your web page in the long run.
Read Further: Why Reaching and Protecting Statistical Significance is So Important in A/B Tests
See It in Action: Use our free online Bayesian A/B test calculator to understand if your test is statistically significant enough to declare a winning variation.
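For intuition on the Bayesian side, the headline number such a calculator typically reports — the probability that one variation truly beats the other — can be approximated by sampling from Beta posteriors. This is a sketch under uniform Beta(1, 1) priors with hypothetical conversion counts; the calculator's exact model may differ:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta(1, 1) priors."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # draw plausible "true" conversion rates from each posterior
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

# Hypothetical data: control 200/4000, variation 260/4000
p_b_beats_a = prob_b_beats_a(200, 4000, 260, 4000)
print(f"P(B beats A) ≈ {p_b_beats_a:.3f}")
```

A probability very close to 1 (or to 0) suggests the data strongly favors one variation; values near 0.5 mean the test has not yet separated the variations.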