
Mastering A/B Tests: The Ultimate Guide to A/B Testing

Jul 26, 2018 5:00:19 PM  |  BY: Robbie Hammett


By now, most marketers know they need to integrate A/B testing into their process for developing new promotions and advertising.

However, knowing you should use A/B tests and knowing how to run A/B tests are two different ball games. 

The world of A/B testing can be confusing, with many people using terms like ‘A/B testing’, ‘A/B/n testing’, or ‘Multivariate testing’ interchangeably.

However, these are all different types of tests. Here is a quick run down:

  • ‘A/B’ and ‘A/B/n’ testing are very similar. They are both testing one specific variable change to see the impact of that change. An A/B Test means that you are running two versions of the promo - usually a control and a second version with the change you want to test. An A/B/n test means that you are running multiple versions of the promo - still testing the one specific variable, but running multiple variations of that variable change. For example...
    • A/B Test: You are testing the text on your CTA button. One version says 'Sign Up!' the second version says 'Join Now!'
    • A/B/n Test: You are testing the color of your CTA button. One version is red, the second version is blue, the third version is green, and so on.
  • A Multivariate test is any test that measures the impact of multiple variable changes at once. The hint is in the name itself: multi (many) variate (variables). For example...
    • You are testing variations of the CTA button color AND text at the same time.
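
To make the distinction concrete, here is a minimal sketch of how a testing tool might assign visitors to variants. The function name and visitor IDs are hypothetical; the key idea is that hashing the visitor ID keeps the assignment stable, so a returning visitor always sees the same version, whether you run two variants (A/B) or several (A/B/n).

```python
import hashlib

def assign_variant(visitor_id: str, variants: list[str]) -> str:
    """Deterministically assign a visitor to one of n variants.

    Hashing the visitor ID (instead of picking randomly each time)
    keeps the assignment stable across page views.
    """
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    index = int(digest, 16) % len(variants)
    return variants[index]

# A/B test: one variable, two versions of the CTA text
print(assign_variant("visitor-42", ["Sign Up!", "Join Now!"]))

# A/B/n test: still one variable (button color), several versions
print(assign_variant("visitor-42", ["red", "blue", "green"]))
```

A multivariate test would instead assign each visitor a combination of variables (e.g. one of the colors and one of the texts), which multiplies the number of variations and the traffic required.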

In this article, we'll cover why you should be running A/B tests, what types of tests you should run, and when to run your A/B tests. Let's jump in!

Why Should I Run an A/B Test?

In order to run a successful A/B test, you must have the proper reasoning and intent behind the test.

A broad reason for running a test is to optimize for conversions, but you need deeper, more detailed reasoning if you wish to be successful.

Let's dive in with the most common reasons for running an A/B test. 

1. My promotion is underperforming

This is the top reason why most marketers start the process toward testing.

An underperforming promotion is a real concern to your revenue and is something you can measure.

By running A/B tests, you can tweak the details of your promotion in order to incrementally improve its effectiveness. This allows you to provide a better experience for your customers, while also improving your bottom line. 

 

2. My brand guidelines changed.

This happens more often than you might think. Any time a company goes through a brand refresh, its promotions will typically require completely new designs and content.

While A/B testing is a great way to implement these changes quickly, it can also help to guide the brand refresh. 

By testing different colors, verbiage, and design styles with A/B tests, you can help to inform the overall brand direction by presenting real user data of what your customers respond to best.

In this case, you would not test the new designs against the originals.

With a brand refresh, you typically have a general guide for where the brand designs and voice will head, but you still need to figure out the details for things like fonts, color palette, etc. You're essentially starting from square one, and your tests should reflect the learnings from your previous promotions' successes.

Before You Begin...

Before you begin to set up your A/B tests, there are a few things to keep in mind. Primarily, a few pitfalls to avoid...

Pitfall #1: Not checking promotion performance against industry benchmarks.

Of course you want your promotions to outperform ‘average,’ but if your promotion is already outperforming industry averages by 20%+, it may be best to test something else in order to reach your goals.

The good news is that your promotion is far ahead of your competition. The bad news is that you may do more harm than good by running A/B tests on it. If the promotion already matches your customers' ideal, introducing variations may cause performance to drop.

Pitfall #2: Failing to identify a measurable KPI.

If you ‘just want this promotion to do better’ then the chances are you haven’t tied that promotion to a KPI that will impact a bottom line business goal.

To improve the performance of a promotion, you need to keep an eye on what KPIs will be affected so you can accurately assess a change.

If the attached KPI is to increase the average order value and you run a test and see increased engagements but a decreased average order value, then the test should not be considered a success.

The most common KPIs attached to promotions are:

  1. Conversion rate
  2. Cart Abandonment Rate
  3. Email capture rate
  4. Average order value

You should determine which KPI you are targeting before launching any tests to ensure that you can properly track changes.

What Should I A/B Test? 

Now that you have a better understanding of the basics, we can dig into the most important and most commonly misunderstood aspect of testing - how to structure a reliable A/B test.

Elements of a Promotion

  1. CTA - The Call to Action
  2. Tag Line - Copy to grab visitor attention
  3. Offer Copy - The copy that describes your offering
  4. Offer Amount - The tangible value of the offer
  5. Image asset - Image used to represent offer, brand, etc. 
  6. Number of steps - How many steps it takes a visitor to complete the engagement

Above are the primary elements of a promotion and therefore your main testing variables.

Offer Copy and Offer Amount are separated because you can change the specific offer amount without actually changing the surrounding copy.

Here's an example:

  • ‘Start today to get 10% off your subscription!’

And

  • ‘Start today to get 15% off your subscription!’

 

In the above the example, the ‘offer copy’ is the text that your ‘offer amount’ is wrapped in. This distinction is important to keep in mind as you move forward into creating actual tests.

Offer Amount is almost always going to have the most significant weight. If you think about this in practical terms you can see why.

For example, would you rather have $5 off a premium car wash or a FREE premium car wash?

All other things being equal, more people are going to opt for the free car wash than for only $5 off.

How to Set Up an A/B Test

Step 1: Identify KPI

The most important first step in setting up any test to optimize for conversions is to identify the KPI you want to affect.

This could be reducing cart abandonment, increasing email sign-ups, increasing engagements, or any site-specific KPIs that you and your team have identified.

Once you identify your KPI, take the component list from above and start formulating hypotheses about which of these elements will best help you improve the KPI you are targeting. 

Let's sketch this out for you with an example so you can see how this is done:

[Image: example pop-up promotion offering 20% off plus free shipping]

In this example there is a lot going on and a lot to unpack, so let’s start by breaking it down using the list from above.

  1. Offer Amount - In this particular promotion there are actually two different offers being pushed — 20% off and free shipping.
  2. Number of steps - 1
  3. CTA - Use code.
  4. Image asset - None
  5. Tag line - Instant savings
  6. Offer copy - Take (offer amount 1) off the items in your cart now + (offer amount 2)

Visually breaking down each part of a promotion will give you a clear understanding of all the different moving parts and allow you to prioritize your tests.

Sticking with this example, let's use average order value as our main KPI. This is a clear, measurable KPI that will certainly impact the company's bottom line.

 

Step 2: Identify and Hypothesize

Next, you need to identify the elements that you believe will lead to having the most impact on your identified KPI. For this example, we'll use Offer Copy and Offer Amount. 

For each promotion, it's important to write down your hypothesis so that you can track your line of thinking.

Here is an example hypothesis:

  • I believe that changing the Offer Copy to include a minimum order value and increasing the visibility of the Offer Amounts will increase Average Order Value.

As you can see, a typical hypothesis is not overly complex, but is very specific about actions and outcomes.

 

Step 3: Create a variation

Now that you have formulated a hypothesis and identified the elements to test, you can begin creating variations based on those elements.

Luckily, with Justuno, this part is usually very simple.

The variation:

[Image: the promotion variation with updated offer copy]

When setting up your A/B tests, it's best to test variations on only one element at a time. In the above example, you'll notice that only the Offer Copy has changed, while the Offer Amount remains the same.

In theory, any user glancing at this pop-up should immediately take in the basics of the promotion: 20% off, free shipping, I have to buy $100 worth of stuff.

In this example, we also visually de-prioritized ‘instant savings’ because that text is largely irrelevant to the goal. 

Step 4: Test and Iterate

Now that you have your variation, you can set up your test.

The length of time you should run a test depends entirely on how long it will take to reach statistical significance - which is a fancy way of saying, ‘the data you’re getting is 90% reliable’.

In most cases, you will need at least 1,200 impressions in order to have reasonable confidence in your results.
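
As a rough illustration of what "statistical significance" means in practice, here is a standard two-proportion z-test comparing a variant's conversion rate against the control's. The function name and the conversion numbers are hypothetical; a |z| value of at least 1.645 corresponds to roughly 90% confidence (two-sided) that the difference is real rather than noise.

```python
from math import sqrt

def conversion_z_score(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test comparing variant B against control A.

    conv_* = number of conversions, n_* = number of impressions.
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 600 impressions per variant (1,200 total): control converts at 5%,
# the variant at 8%
z = conversion_z_score(30, 600, 48, 600)
print(f"z = {z:.2f}")  # |z| above 1.645 -> significant at ~90% confidence
```

Note how the sample size sits inside the standard-error term: with fewer impressions, the same 3-point lift would produce a smaller z and might not clear the threshold, which is why a minimum impression count matters.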

 

When to Run an A/B Test

One of the most important parts to any test is to figure out when it is appropriate to test.

It’s easy to say, ‘Always'. You should always be testing.

I sometimes still fall into this mindset myself, but before each test it’s important to take a step back and ask yourself a few questions:

How many visitors are going to see this promotion over the next week? Two weeks? Month? 

The answer to this question is incredibly important. In an ideal world, you will need a few thousand points of data (i.e. visitors actually running through your test).

As stated above, at a minimum, your test should run until it has 1,200 impressions to be reasonably confident in the results.

With this in mind, your first step is to figure out how many visitors will predictably funnel through your test per day.

This number should be easy to get from your Google Analytics or Justuno dashboard if you have your Analytics account linked.

[Image: Google Analytics audience traffic report]

When looking at your traffic numbers, it's important to keep in mind the type of promotion you are running and the targeting rules for the promotion.

For example, if you're running an Exit Intent promotion, then it's fair to assume that every visitor to your site will see this. If you're running a banner promotion that only displays on the homepage, then you should be looking at the visitor data for your homepage ONLY and not for your entire site. 

With those general rules in hand, go into Google Analytics and look at the traffic for the page your promotion will live on. Go to the Audience tab, set the date range to the last 3-6 full months, and sort by week. Each data point on the chart will then be a 7-day window into your traffic.

Enter each of the data points into a spreadsheet (Google Sheets or Excel will work equally well) then average those numbers. You now have a reasonably accurate average weekly traffic flow for your site. 

Once you have your average visitor count per day, you can calculate the minimum time you’ll need to run your test in order to reach 1,200 impressions.
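
The arithmetic above can be sketched in a few lines. The weekly visitor totals here are hypothetical stand-ins for the numbers you would pull from Google Analytics, and the rounding up to full weeks reflects the start-and-stop-on-the-same-day rule discussed later; the chart below is deliberately more conservative for low-traffic sites.

```python
from math import ceil

MIN_IMPRESSIONS = 1200  # minimum impressions suggested above

def min_test_days(avg_daily_visitors: float) -> int:
    """Days needed to reach MIN_IMPRESSIONS, rounded up to full
    weeks so the test starts and stops on the same day of the week."""
    days = ceil(MIN_IMPRESSIONS / avg_daily_visitors)
    return max(1, ceil(days / 7)) * 7

# Hypothetical weekly visitor totals from a Google Analytics export
weekly_visitors = [2100, 1850, 2300, 1975]
avg_daily = sum(weekly_visitors) / len(weekly_visitors) / 7

print(f"~{avg_daily:.0f} visitors/day -> run for {min_test_days(avg_daily)} days")
```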

See our handy chart below for guidelines to get started:

 

# of Visitors per day | Minimum length of test
----------------------|-----------------------
0-100                 | 14 days (2 weeks)
101-250               | 14 days (2 weeks)
251-500               | 7 days (1 week)
501-1000+             | 7 days (1 week)

 

The minimum length of time you want to run a test is one full week. Your test needs to start and stop on the same day of the week and, ideally, at the same time.

This is because even if you have 10,000 visitors per day your site will experience different user behavior on different days of the week. By starting and ending your test on the same day of the week you can help smooth unusual user behaviors and patterns.

This is critically important, as it allows you to control for day-to-day fluctuations in traffic and user behavior.

A few words of warning:

  • Avoid starting a test on a Friday. Should anything go wrong, you may not notice until Monday.
  • Start your tests in the morning. For the same reason above, try not to start your tests later in the day.
  • Don't run tests through a holiday, as the fluctuation in traffic and buying patterns will throw off your data and invalidate the test.

Generally speaking, the maximum time you want to run an A/B test is one month.

The reasons for this cap are based around resource allocation.

If it takes longer than a month to reach the minimum of roughly 1,200 impressions, then you need to think about running a different type of test.

Summary of ‘When’ Dos and Don'ts:

Do: 

  1. Find your per day traffic
  2. Use traffic numbers to determine length of test
  3. Start and stop your test on the same day of the week

Don’t:

  1. Start an A/B test on a weekend
  2. Run your A/B test for longer than a month
  3. Run your A/B test during a holiday
  4. Start and stop your tests on different days, at different times

What A/B Test Success Looks Like

If you've made it through this entire article, congratulations! You just received a crash course in A/B Testing.

Before you take off and set up your first A/B test in your Justuno account, take a look at a recent case study performed by Shopify Plus jewelry retailer The GLD Shop.

In this A/B Test, The GLD Shop was able to increase conversions by 400%!

 


Topics: A/B Testing, Personalization, Conversion Optimization, Promotion Strategy