Digital Marketing

Quick Start Guide to Ad Variation Testing

AdStage Team

If you're running multiple ad campaigns simultaneously, as we do here at AdStage, then you're most likely concerned with identifying which creative delivers the best results. In this post, you'll learn, at a high level, how to track your ad testing performance and make sure you're optimizing your campaigns for success.

Identifying the Right Success Metric

Depending on your campaign objective, success will be measured by different metrics. The most common is click-through rate (CTR). While CTR is a great indicator of how relevant an ad is to your audience, cost per conversion reveals the true business impact. A conversion can be a completed form, an app install, a newsletter sign-up, or a white paper download.
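To make the difference between the two metrics concrete, here's a quick sketch in Python. The campaign numbers are made up purely for illustration:

```python
# Hypothetical campaign numbers, purely for illustration
impressions = 12_000
clicks = 250
conversions = 20
spend = 480.00  # total ad spend in dollars

ctr = clicks / impressions                 # how relevant the ad is to the audience
cost_per_conversion = spend / conversions  # the true business impact

print(f"CTR: {ctr:.2%}")                                   # CTR: 2.08%
print(f"Cost per conversion: ${cost_per_conversion:.2f}")  # Cost per conversion: $24.00
```

An ad can look great on CTR and still lose on cost per conversion, which is why you pick the success metric first.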

It's important to settle on your success metric before you begin testing. That way you can formulate a clear hypothesis for how your ad changes will affect your core KPI. This will keep you honest about the experiment. No cheating.

Creating Ad Variations

This is where your creativity shines! You'll want to create at least one variation, and more likely several. Each variation should iterate on your original ad creative, changing one element at a time so you can attribute any performance difference to that change.

Depending on which network you are advertising on, you can test:

  • Headline Text
  • Description Text
  • Link Display Text
  • Sitelinks
  • Structured Snippets
  • Image Creative
  • Call-to-Action Text

In addition to testing ad copy, you can experiment with your targeting criteria using ad groups. We have a couple of great write-ups on using segmentation for your campaigns here:

Upgrade Your Social Retargeting with Web Audience Segments

Using Ad Groups for Segmentation on Twitter Ads

Documenting the Process

No matter which variable you're testing, you'll want to document your variations so you don't duplicate your efforts. Remember to write down a hypothesis for how each variation will impact your core KPI. For example, "By changing the ad headline to X, metric Y will increase by 10%." With a clear hypothesis statement, you'll be able to determine whether that variation was a success or a failure.

Documentation is especially important when multiple people on your team are launching ad campaigns simultaneously. After a couple of months of testing, you'll have a nice knowledge base to refer to for future tests.

Here is a basic Google Sheet template (link) that you can copy to get started with tracking your variations. Make sure to also track impressions and your success metric results.

Ad Test Tracking Template
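If you'd rather keep the log in code than in a spreadsheet, here's a minimal sketch of the same idea. The column names and numbers are hypothetical, not taken from the template:

```python
import csv

# Hypothetical columns mirroring a simple ad-test log
FIELDS = ["variation", "element_changed", "hypothesis",
          "impressions", "clicks", "conversions"]
rows = [
    ["Example.001", "(control)", "n/a", 5000, 80, 8],
    ["Example.002", "headline", "Headline X lifts CTR by 10%", 4800, 90, 7],
]

# Write the log so anyone on the team can check past experiments
with open("ad_test_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(FIELDS)
    writer.writerows(rows)
```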

Determining the Results of Your Test

Let's start with the question, "What sample size is required for a valid test?" The subject is too vast to cover fully in this post, but Evan Miller's Sample Size Calculator (link) is an excellent tool for determining whether your experiment is statistically sound; in other words, whether it has accumulated enough data to declare a winner.

For example, if you currently have a baseline CTR of 5% and want to measure a minimum detectable effect of 3 percentage points (that is, a true CTR below 2% or above 8%), each variation needs to be seen by 711 people.

Source: http://www.evanmiller.org/ab-testing/sample-size.html#!5;80;10;3;0
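If you want to sanity-check the calculator yourself, here's a sketch of the standard normal-approximation formula for comparing two proportions. Note that different tools make slightly different statistical assumptions, so with its defaults this formula lands near 1,059 rather than the calculator's 711; pick one method and stick with it.

```python
import math
from scipy.stats import norm

def sample_size_per_variation(baseline, mde, alpha=0.05, power=0.80):
    """Classic two-proportion sample-size formula (normal approximation).

    baseline: expected rate of the control ad, e.g. 0.05 for a 5% CTR
    mde: minimum detectable effect in absolute terms, e.g. 0.03
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_beta = norm.ppf(power)           # desired statistical power
    pooled = (p1 + p2) / 2
    n = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
         + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / mde ** 2
    return math.ceil(n)

print(sample_size_per_variation(0.05, 0.03))  # ~1059 with these defaults
```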

Using the example data in the provided Google Sheet, we see that Example.001 has a CTR of 1.6% while Example.002 has a CTR of 1.875%. At first glance, you may want to declare version 2 the winner. However, let's double-check that assumption.

Luckily, we don't have to break out the calculator and a statistics textbook. Instead, just use this A/B test calculator by Kissmetrics (link).

A/B Test Result Calculator

As the example reveals, the first ad variation was declared the winner, even though it had a slightly lower CTR (the calculator rounds its figures, so you'll want to do your own calculation to verify).
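If you'd rather verify in code than in a web calculator, a two-proportion z-test does the same job. The impression counts below are hypothetical, chosen only to be consistent with the example CTRs of 1.6% and 1.875%:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts consistent with the example CTRs:
# 80 / 5000 = 1.6% and 90 / 4800 = 1.875%
clicks = [80, 90]
impressions = [5000, 4800]

# Two-sided test of whether the two CTRs differ
z_stat, p_value = proportions_ztest(count=clicks, nobs=impressions)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")  # with these numbers, p is around 0.30

if p_value < 0.05:
    print("The difference in CTR is statistically significant.")
else:
    print("Not significant yet -- keep the test running.")
```

With these made-up counts the difference isn't significant, which is exactly why a raw CTR comparison can mislead you.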

Wrap Up

Developing a process for your testing practices is important. Always document your ad variations and test results, building a knowledge base of what does and doesn't work with your target audience. Don't reinvent the wheel; use available statistical testing tools to verify your tests. Lastly, share your winners and losers with your team so everyone can apply the learnings. Have any tips to share? Let us know in the comments.

