Defining a Split Test
A Split Test means that Facebook splits your target audience into randomized, non-overlapping segments, so that each user only sees ads from one test cell. This lets you compare campaigns' performance against each other. Afterwards, you simply measure the results of each test cell using the standard Facebook conversion attribution model to see which cell drove the cheapest conversions, the best CTR, or the highest ROAS.
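Conceptually, a randomized, non-overlapping split works like deterministically hashing each user into exactly one cell. The sketch below is a simplified illustration of that idea, not Facebook's actual implementation:

```python
import hashlib

def assign_cell(user_id: str, study_id: str, num_cells: int = 2) -> int:
    """Deterministically assign a user to one test cell.

    Hashing (study_id, user_id) together means every user gets exactly
    one cell for a given study, so cells never overlap, while different
    studies produce independent splits of the same audience.
    """
    digest = hashlib.sha256(f"{study_id}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % num_cells

# Assignment is stable: the same user always lands in the same cell,
# so they only ever see ads from that one cell.
assert assign_cell("user-42", "study-A") == assign_cell("user-42", "study-A")
```

Because the assignment is a pure function of the user and study IDs, no state needs to be stored to keep the cells disjoint for the duration of the test.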
Make sure you have a clear goal for your split test. The easiest way is to formulate a clear question that you want the test to answer. For example, the following questions lead to two different tests:
- Should I run a mix of creative types: link ads, videos, and collection ads?
- Which creative type should I invest more resources in when developing new creative concepts: link ads, videos, or collection ads?
When creating a test, you can see the following view by clicking advanced settings. The tool calculates how many conversions you need, and how large a budget that requires, based on your CPA. Note that reliably detecting very small differences requires a large number of conversions, and can thus be costly compared to the potential improvement in performance.
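The relationship between effect size and required volume can be sketched with a standard two-proportion sample-size formula. This is a generic statistical approximation, not the exact calculation the tool performs:

```python
from math import ceil, sqrt
from statistics import NormalDist

def users_needed_per_cell(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per cell to detect a shift in
    conversion rate from p1 to p2 (two-sided two-proportion z-test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p1 - p2) ** 2
    return ceil(n)

# Detecting a 10% relative lift (2.0% -> 2.2%) takes over 20x more
# users per cell than detecting a 50% lift (2.0% -> 3.0%):
small_lift = users_needed_per_cell(0.02, 0.022)
large_lift = users_needed_per_cell(0.02, 0.03)

# Rough budget per cell: expected conversions times your CPA.
cpa = 25.0  # hypothetical example value
budget_small_lift = small_lift * 0.02 * cpa
```

This is why the advanced settings view warns about small differences: halving the lift you want to detect roughly quadruples the conversions, and thus the budget, the test needs.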
Know these before running a Split Test
Start by reading general Best practices for Ad Studies, including:
- Make sure that your conversion events are working properly
- Use newly created or cloned campaigns
- Make sure you do not have other campaigns overlapping with the test campaigns
- Don't use the same posts
- Set yourself a reminder before the test ends
- AA/BB testing: Don't do it!
Expect a hit in performance. Whenever you run a split test, the audience is split, which will likely cause a dip in performance KPIs. Therefore, compare the study cells against each other, not against e.g. historical performance.
Ideally, there should be only one difference between study cells. This is the only way to interpret the results reliably. When the study cells differ in multiple ways (e.g. both creative types and bidding), you cannot know for sure how each of these differences affects the results.
For example, if you want to test whether manual bidding is better than automatic bidding, create two campaigns that are identical except that one uses manual bidding and the other uses automatic bidding. When you observe a difference, you will know exactly what caused it.
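The one-difference rule can even be checked mechanically: represent each cell's campaign settings as a mapping and verify the cells differ in exactly one field. The field names below are hypothetical illustrations, not Smartly.io's or Facebook's actual campaign schema:

```python
# Control campaign settings (hypothetical field names).
control = {
    "audience": "US 25-54",
    "creative": "video-v1",
    "budget": 5000,
    "bidding": "automatic",
}

# The test cell is a copy of the control with one field changed,
# mirroring the clone-then-edit workflow.
test = {**control, "bidding": "manual"}

# List every field where the two cells disagree.
diffs = [key for key in control if control[key] != test[key]]
assert diffs == ["bidding"], f"cells differ in more than one way: {diffs}"
```

If `diffs` ever contains more than one field, any performance gap between the cells can no longer be attributed to a single cause.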
It is easy to set up campaigns like this in Smartly.io: first create one campaign, then clone it and change just the one thing you want to test in the cloned campaign.
Understand what kind of changes you can make during the test. The key is to treat all the study cells equally and not make changes that can skew the interpretation of results. For example, if you are running a test comparing two different target audiences, both the ad set targeting the control audience and the ad set targeting the test audience should contain exactly the same creatives. If you add creatives during the test, you should add the same creatives to both ad sets.