
How A/B Testing Works

AdStage Team · 3 minute read

Have you ever wondered what would happen if you changed part of your ad? Would changing the headline yield better results? Maybe including a price directly in the copy would result in a higher conversion rate.

You could just make the change and see if things improve going forward, but other variables like seasonality can affect your results and skew your data. To avoid this, marketers conduct A/B tests. In this post, I’ll show you how A/B testing works so you can run tests on your own and discover improvements.

A/B Testing Overview

In online advertising, A/B testing (also called split testing) is a way to test whether changes you make yield better results. You create a variation of the element you're testing and run it alongside the original to see which gets a better response from your audience. You can A/B test many aspects of your ad campaigns: ad copy, targeting settings (especially for Facebook ads), landing pages and more.

The Process

[Image: Variation A, the current ad]

To illustrate how it works, let’s take a look at how you would A/B test an ad in a Google search campaign. Let’s say you’re currently running an ad that has received the following results in the past 30 days:

  • Clicks: 16
  • Impressions: 1,000
  • CTR: 1.59%

The Hypothesis

You hypothesize that including the price in the ad copy will result in more clicks. So you create another variation of the ad and end up with two ads: a control and an experiment.

[Image: Variation A, the control ad]

[Image: Variation B, the experiment ad]

Next, you’d make both ads active and let them run in order to collect enough performance data to accept or reject your hypothesis and discover the winner.

Checking the Results

When testing ads, you’ll want to check your results once you have enough data to make an informed decision (probably 1,000 impressions per ad at a minimum, and the more data the better).

Suppose we earned the following results over the past 30 days:

Variation A (control):

  • Clicks: 70
  • Impressions: 4,400
  • CTR: 1.59%

Variation B (experiment):

  • Clicks: 28
  • Impressions: 1,208
  • CTR: 2.32%

At first, it looks like the control is doing better, because it has more clicks. However, the control ad also had a disproportionate number of impressions, so it’s not necessarily better. When comparing the CTRs, the experiment looks like it’s doing better. But perhaps its sample size is too small, and that is artificially inflating the CTR.
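
To make the comparison concrete, CTR is just clicks divided by impressions. Here's a minimal Python sketch using the numbers above:

```python
# CTR = clicks / impressions, computed for each variation from the results above.
results = {
    "Variation A (control)":    {"clicks": 70, "impressions": 4400},
    "Variation B (experiment)": {"clicks": 28, "impressions": 1208},
}

for name, ad in results.items():
    ctr = ad["clicks"] / ad["impressions"]
    print(f"{name}: CTR = {ctr:.2%}")
# Variation A (control): CTR = 1.59%
# Variation B (experiment): CTR = 2.32%
```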

At this point, you may be wishing that each ad had the same number of impressions to make comparing them easier, but there’s another way.

Checking the Statistical Significance

You can use a statistical significance calculator to find the winner. I’m using one provided by MixPanel here. Since the response you're optimizing for is clicks, plug your clicks into the goals field and your impressions into the visitors field.

[Image: Entering clicks and impressions into the significance calculator]

The calculator compares the response rates of the two ads and tells you whether you can be confident that one will continue to outperform the other.

[Image: The calculator's result showing the winner]

In this example, it looks like we can be 95% confident that the experiment ad copy will outperform the control! It seems that including the price in this ad leads to a higher CTR. Now future tests can build on this insight.
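
If you'd rather run the check yourself instead of using a web calculator, a standard two-proportion z-test gives the same kind of answer (this is one common way such calculators work; MixPanel's exact method may differ). Here's a minimal sketch using the example numbers, with impressions standing in for "visitors" and clicks for "goals":

```python
from math import sqrt
from statistics import NormalDist

# Two-proportion z-test on CTR, using the example numbers above.
clicks_a, imps_a = 70, 4400    # Variation A (control)
clicks_b, imps_b = 28, 1208    # Variation B (experiment)

ctr_a = clicks_a / imps_a
ctr_b = clicks_b / imps_b

# Pooled click rate under the null hypothesis that both ads perform the same.
pooled = (clicks_a + clicks_b) / (imps_a + imps_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / imps_a + 1 / imps_b))

z = (ctr_b - ctr_a) / std_err
p_value = 1 - NormalDist().cdf(z)  # one-sided: is the experiment better?

print(f"Control CTR: {ctr_a:.2%}, Experiment CTR: {ctr_b:.2%}")
print(f"z = {z:.2f}, one-sided p-value = {p_value:.3f}")
print("95% confident the experiment wins" if p_value < 0.05 else "Not significant yet")
```

With these numbers the one-sided p-value lands just under 0.05, which lines up with the calculator's 95% confidence verdict.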

[Image: The winning ad variation]

Quick Tips for A/B Testing Ads by Network

Knowing how to test for statistical significance is extremely valuable, but you can also rely on the networks to find the best-performing ad variation. Here's how:

Google AdWords

  1. Create multiple ads in each ad group.
  2. Set your ad delivery to "Optimize for clicks (or conversions)."
  3. Google will automatically split test your ads and begin favoring the winner much more quickly than if you were measuring your test results manually.

Bing Ads

  1. Create only two or three ads in each ad group.
  2. Set your ad group's ad rotation to "Optimize for clicks."
  3. Bing will automatically test your ads and begin favoring the winner, but their system doesn't handle large numbers of ad variations very well, so stick to three ads per ad group at most.

Facebook Ads

  1. Create multiple ads in each ad set (and try the AdStage Ad Scrambler to input multiple images, headlines and descriptions to build dozens of ad variations quickly).
  2. Make sure your test ads are created with the same ad type, targeting and bids.
  3. Facebook will automatically test your ads and begin favoring the winner.

LinkedIn Ads

  1. Create multiple ads within the campaign.
  2. Change your ad variation rotation setting to "Optimize click-through-rate."
  3. LinkedIn will automatically test your ads and begin favoring the winner.
