
How to Get Started With A/B Testing

By Pierre DeBois
A/B testing is nothing new. So why do so many businesses still fail to do it?

Conducting online A/B and multivariate testing is nothing new. The practice has been around for decades now, with Google engineers reportedly conducting A/B tests back in 2000 to decide the right number of search results to display. 

So why do businesses still ignore this technique in 2021, arguing they lack the data or the time for a test? Improvements in testing technology, combined with the shift to online consumer behavior over the last 18 months, make A/B tests more essential than ever.

An Overview of Split Test Planning

An A/B test is a methodology to compare responses to a control element and a test element. The control element is how your existing media — be it webpage, app page or page element — currently displays. The test element is the proposed change to the existing media you are exploring, e.g. a different image, a different page text, or a combination of elements. 

Confusion sometimes arises around test results. An A/B test splits the displayed media between a given set of people: some see the control version while others see the test version. Because of that, marketers sometimes mistake A/B test results for an absolute choice of one element over another. But an A/B test does not simply declare a winner; it demonstrates whether the tested change represents a statistically significant difference. For example, given a sample of people presented with a control and a test layout, did one layout generate a better conversion rate than the other?

Answering this question is why you plan your test through the lens of a hypothesis. A test hypothesis states that, given a normal distribution of data, your alteration of a control element (the test version) will cause a significant change in customer behavior. The null hypothesis states that no significant difference exists between the control and test elements.
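To make this concrete, here is a minimal sketch in Python, one of the open-source options discussed later in this article, using a two-proportion z-test from the statsmodels library. All of the counts are hypothetical and stand in for your own control and test results.

```python
# A minimal sketch of evaluating an A/B test hypothesis in Python.
# The conversion counts below are hypothetical, for illustration only.
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions and visitors for control (A) and test (B)
conversions = [120, 152]   # successes in A and B
visitors = [2400, 2390]    # sample sizes for A and B

# Null hypothesis: no difference in conversion rate between A and B
z_stat, p_value = proportions_ztest(conversions, visitors)

print(f"z = {z_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis: the difference is significant.")
else:
    print("Fail to reject the null: no significant difference detected.")
```

A p-value below 0.05 is the conventional threshold for rejecting the null hypothesis, though your business context may warrant a stricter or looser standard.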

Creating a hypothesis helps you then view test results in terms of business objectives. The result is that marketers can form assumptions and make decisions with a clear view of the impact on customers. As mentioned in a previous post on analytics mistakes, managers can be tempted to compare too many user interface elements at once. Avoid this error: it costs time and money for little, if any, value.

Related Article: 10 Mistakes to Avoid When Rethinking Your Analytics Strategy

What Is a Good Element to A/B Test?

The elements most likely to influence conversion rate optimization are typically good choices for testing. A webpage can be tested for copy, or for a combination of a call-to-action button and copy (provided the control and test elements are clearly different).

Email campaigns are also well-suited to A/B and multivariate tests. Longer campaigns allow a test to accumulate enough data points to validate results. A customer segment opening emails over a long enough period can generate sufficient data to learn whether a change to a subject line or an element within the email delivers superior performance.

Testing a digital ad for a given audience is also a worthwhile testing scenario. Images or ad copy adjustments can be tested, as well as landing pages for any given ad. 


Related Article: How Google Optimize Testing Can Help Improve Customer Conversions

Factors Behind Good A/B Tests

Tests, be they A/B or multivariate, cannot address every conversion issue, so knowing the potential pitfalls in advance will help you decide what a test can and cannot answer.

One factor is the amount of data needed for test accuracy. It's possible to calculate the minimum amount of test data needed. One rule of thumb is for your test sample to equal 10% of your population size: when testing an email distribution of 7,500 people, 750 would be your test sample. Online calculators are available to help here.

All of these simple steps assume an even split between test and control samples and a normal distribution of data. Advanced formulas based on statistics such as standard deviation can produce a more precise estimate and account for other data distribution concerns.
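For illustration, here is a minimal sketch of the kind of power-based estimate those online calculators perform, using the statsmodels library in Python. The baseline conversion rate, the minimum lift worth detecting, the significance level and the power are all hypothetical inputs you would replace with your own.

```python
# A minimal sketch of a more precise sample-size estimate, assuming an even
# split and a two-proportion z-test. All input values are hypothetical.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.04       # current (control) conversion rate
expected_rate = 0.05       # smallest lift worth detecting

# Cohen's h effect size; abs() keeps it positive for the power solver
effect_size = abs(proportion_effectsize(baseline_rate, expected_rate))

n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,            # 5% false-positive risk
    power=0.8,             # 80% chance of detecting a real lift
)
print(f"Required sample per group: {int(round(n_per_group))}")
```

Note how the required sample size depends on the size of the lift you want to detect: smaller expected differences demand much larger samples than the 10% rule of thumb might suggest.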

After establishing the number for your test sample, you will have to weigh it against the practical considerations of gathering enough samples. How long does an ad campaign have to run to reach that number? Will enough people see the test email or web page?

The test audience must also be representative of your broader, intended audience. Analytics tools integrated into the test platform, such as the Google Optimize integration with Google Analytics, can help monitor test quality.
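One way to sanity-check representativeness yourself is to compare the segment mix of your test sample against the proportions of your broader audience. The sketch below does this with a chi-square goodness-of-fit test from scipy; the segment labels and counts are invented for the example.

```python
# A minimal sketch of checking whether a test audience resembles the broader
# audience. Segment labels and counts are hypothetical.
from scipy.stats import chisquare

# Segment mix of the full audience (proportions) and of the test sample (counts)
population_share = [0.50, 0.30, 0.20]   # e.g., new, returning, loyal visitors
test_counts = [410, 220, 120]           # observed segments in the test sample

total = sum(test_counts)
expected = [share * total for share in population_share]

stat, p_value = chisquare(f_obs=test_counts, f_exp=expected)
# A small p-value suggests the test audience drifts from the population mix
print(f"chi-square = {stat:.2f}, p = {p_value:.4f}")
```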

Related Article: 7 Factors That Determine Email Deliverability

Available Testing Platforms

A number of test platforms are available. I covered Google Optimize in an earlier post. Another commonly used platform is Adobe Target, a direct enterprise-level competitor to Google Optimize 360. Like Google, Adobe initially offered Adobe Target within an analytics suite, but later offered it as a standalone platform. HubSpot also offers an A/B test platform through a free software application called the A/B Testing Kit; the downloadable kit includes a statistical significance calculator for assessing sample size. Crazy Egg is a popular webpage A/B test tool that includes a heat map to display results.

Beyond test platforms, a split test analysis can be conducted in an open-source language such as R or Python. This approach has the advantage in cases where the sample is unevenly split or the data is not normally distributed, and both languages are supported by a large range of advanced statistical libraries. The downside is that it requires some planning with a developer to set up, as opposed to the self-service nature of platforms like Google Optimize and Adobe Target.
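As a minimal sketch of that advantage, the Python example below handles a hypothetical uneven split with a chi-square test on the conversion table, then compares a skewed, non-normal metric (simulated revenue per visitor) with the Mann-Whitney U test from scipy. None of the numbers come from a real campaign.

```python
# A minimal sketch of analyzing an unevenly split test in Python. A chi-square
# test on the contingency table does not require equal group sizes, and the
# Mann-Whitney U test handles non-normal metrics such as revenue per visitor.
# All numbers below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Conversions vs. non-conversions for an uneven control/test split
table = np.array([
    [180, 5820],   # control: converted, did not convert (n=6000)
    [75,  1925],   # test:    converted, did not convert (n=2000)
])
chi2, p_conv, dof, _ = chi2_contingency(table)
print(f"conversion difference: chi2 = {chi2:.2f}, p = {p_conv:.4f}")

# Skewed per-visitor revenue compared without assuming normality
rng = np.random.default_rng(42)
control_revenue = rng.exponential(scale=20.0, size=6000)
test_revenue = rng.exponential(scale=22.0, size=2000)
u_stat, p_rev = mannwhitneyu(control_revenue, test_revenue)
print(f"revenue difference: U = {u_stat:.0f}, p = {p_rev:.4f}")
```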

No matter what approach you choose, you should develop a split test procedure that frames the elements of your website or app against your business needs. Once you establish a regular routine of testing, you'll see how your media improvements can result in stronger customer engagement.  


About the Author

Pierre DeBois

Pierre DeBois is the founder and CEO of Zimana, an analytics services firm that helps organizations achieve improvements in marketing, website development, and business operations. Zimana has provided analysis services using Google Analytics, R programming, Python, JavaScript and other technologies where data and metrics abide.
