What is an A/A Test?
An A/A test uses split-testing processes to test two identical versions of a website, ad, or creative. Unlike an A/B test, which helps you understand which of two slightly different versions performs better, an A/A test helps you understand the reliability of your testing process and technology.
Put differently, A/B tests can be immensely valuable, but only if the data they return can be trusted. An A/A test helps you ensure that your testing tool is properly tuned before you begin to fine-tune your messaging, creative, or layout based on any A/B test.
When to Use A/A Tests
A few scenarios lend themselves especially well to A/A testing:
- Before running A/B tests for the first time. If you’re new to A/B testing, an A/A test is an ideal start to the process. Run it as many times as needed until your system is fine-tuned enough to test different versions of the same page.
- Early in the implementation of a new A/B testing tool. Even experienced A/B testers can benefit from A/A tests when they adjust or change the tool they use for split testing. Running one helps account for differences in audience segmentation, timing, and more.
- When determining test audience sizes. Unless your audience sample is large enough, an A/B test may not randomize visitors well enough to produce reliable results. A/A tests allow you to tweak your sample sizes until you can be confident the result will be representative.
- When trying to determine baseline or benchmark metrics. Because it tests identical versions of the same page, successful A/A testing can help you learn the baseline conversion rates and other core metrics you can expect before you test any variations.
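To put a rough number on the audience-size question above, the standard two-proportion sample-size approximation can estimate how many visitors each arm of a test needs. This is a generic statistical sketch, not a formula from any particular testing tool, and the baseline rate and detectable lift below are illustrative assumptions:

```python
import math

def sample_size_per_arm(p_baseline, min_detectable_lift,
                        z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per arm to detect an absolute lift
    with ~95% confidence (z_alpha) and ~80% power (z_beta), using the
    standard two-proportion normal approximation."""
    p_bar = p_baseline + min_detectable_lift / 2  # average of the two rates
    variance = p_bar * (1 - p_bar)
    n = 2 * (z_alpha + z_beta) ** 2 * variance / min_detectable_lift ** 2
    return math.ceil(n)

# Example: 5% baseline conversion rate, want to detect a lift to 6%
n = sample_size_per_arm(0.05, 0.01)
print(n)  # roughly 8,000+ visitors per arm
```

Smaller lifts or lower baseline rates push the required sample size up quickly, which is why low-traffic pages make for unreliable tests.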
What to Expect from A/A Test Results
An A/A test is successful if neither of the two sides of the test outperforms the other. For example, if you are testing for conversion rate, both sides of the A/A test generate approximately the same number of conversions. The same is true when testing for other metrics like bounce rate, time on page, etc.
At the same time, it’s important to keep the concept of statistical significance in mind when running A/A tests. Even a 90% confidence level means that there is a 1 in 10 chance that two identical pages with an accurately randomized audience will produce a winner. Running multiple A/A tests on the same pages with the same process can help minimize that potential issue.
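That 1-in-10 false-winner rate can be seen by simulating many A/A tests. The sketch below is a hypothetical simulation (the visitor counts and conversion rate are made-up values) that counts how often two identical variants produce a "significant" winner at 90% confidence:

```python
import math
import random

random.seed(42)

def z_score(conv_a, conv_b, n):
    """Two-proportion z-score with pooled variance."""
    p_pool = (conv_a + conv_b) / (2 * n)
    se = math.sqrt(p_pool * (1 - p_pool) * 2 / n)
    return abs(conv_a / n - conv_b / n) / se

N_TESTS, N_VISITORS, TRUE_RATE = 1000, 2000, 0.05
Z_CRITICAL = 1.645  # two-sided threshold for 90% confidence

false_winners = 0
for _ in range(N_TESTS):
    # Both arms are identical: same true conversion rate
    conv_a = sum(random.random() < TRUE_RATE for _ in range(N_VISITORS))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(N_VISITORS))
    if z_score(conv_a, conv_b, N_VISITORS) > Z_CRITICAL:
        false_winners += 1

false_positive_rate = false_winners / N_TESTS
print(false_positive_rate)  # close to 0.10, i.e. roughly 1 in 10
</imports>
</imports>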
How to Set Up an A/A Test
A few steps can help you set up a successful A/A test:
- Determine the KPI(s) on which you want to test. Most websites test for conversion rates, but other options include bounce rate, average session duration, scroll depth, and dwell time.
- Choose the page to set up your A/A test. The test will be most representative with a popular, high-traffic page, like your homepage, an eCommerce product page, SaaS pricing page, etc.
- Determine your goal audience sample size. Your testing tool may recommend an audience size, but keep in mind that you might have to raise it if initial A/A tests don’t return near-identical results.
- Finalize your test duration. Expect your test to run for at least 5-7 days so the results can normalize over time. For high-traffic websites, that duration may be shorter.
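Once the test concludes, a two-proportion z-test can tell you whether the gap between the two arms is small enough to attribute to chance. This is a generic statistical sketch rather than any vendor's built-in check, and the conversion counts below are made-up numbers:

```python
import math

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled normal approximation."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(conv_a / n_a - conv_b / n_b) / se
    # Phi(z) via the error function; p-value is the two-sided tail mass
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Hypothetical A/A result: 510 vs. 498 conversions out of 10,000 visitors each
p = two_proportion_p_value(510, 10_000, 498, 10_000)
print(round(p, 2))  # well above 0.10: no significant winner here
```

A large p-value means the difference between the arms is consistent with random noise, which is exactly what a successful A/A test should show.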
Finally, keep in mind that you might have to run the test multiple times depending on its confidence level. A failed initial A/A test is not immediately a cause to abandon the A/B tests that should follow, but it does mean your setup needs further validation before you can trust their results.
Building successful split tests can get complex. Fortunately, you don’t have to be on your own. Contact us for help in setting up tests that can lead to tangible insights, helping you improve your website and online presence over time.