A/B Testing: What is it and how can you use it to gain insight into your business?
Everyone remembers the classic school experiment where two seeds are planted separately in different conditions – one kept in a dark cupboard and the other in the light. Everything is kept equal except one variable – this is a very simple case of the A/B Testing method that this article will explain in greater detail.
Making greater use of data and analytics than the simple (yet effective) example above, A/B Testing can be used to measure the impact of a range of different tests: email marketing, PPC campaigns, website content and more.
For most of this article we're going to discuss a very simple scenario: say we want to run a regional marketing test, and we wish to measure its impact to decide whether it's worth running again in the future. But what exactly is A/B Testing and how can it help? How should we go about running the test itself? And how do we actually measure its impact?
'A/B Testing', sometimes known as 'Split Testing' or 'Test and Control', is, very simply, a way to compare two versions of something and measure the impact of the one element that is changed. In our regional marketing example, we are looking to measure the uplift (if any) in the regions where the marketing ran compared with the regions where it didn't.
Start with the hypothesis to be tested: for example, you believe a certain piece of marketing will outperform the norm. This could be prompted by a question such as 'why isn't a particular channel working?' or 'we'd like to explore this strategy in the future – how can we test whether it works?' – regional marketing, in our example.
From here, make sure you have the data required to measure the test, and set the test up correctly. For example, if running a regional test, ensure you are able to collect regional / store-level data. Also discuss the test with the relevant parties to ensure the spend, flighting and regions chosen are sufficient.
At any one time, a whole range of factors will influence sales. However, by keeping everything the same except for the marketing being tested, we can confidently attribute any uplift to the test itself.
First, we collect all the required data and, in our regional example, split the sales data into two distinct sets: a test set and a control set. Collecting store-level sales data informs this split in our case.
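As a minimal sketch of this split, assuming store-level sales sit in a pandas DataFrame with a `region` column (the column names, regions and figures below are hypothetical):

```python
import pandas as pd

# Hypothetical store-level sales data: one row per store.
sales = pd.DataFrame({
    "store": ["A1", "A2", "B1", "B2"],
    "region": ["North", "North", "South", "South"],
    "weekly_sales": [1200, 950, 1100, 1020],
})

# Regions where the marketing ran (an assumption for illustration).
test_regions = ["North"]

# Split into a test set (marketing ran) and a control set (it didn't).
mask = sales["region"].isin(test_regions)
test_set = sales[mask]
control_set = sales[~mask]
```

In practice the test and control regions would be agreed up front with the relevant parties, as described above, rather than chosen after the fact.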
Next, we index the test and control sets to the same baseline so that a fair comparison can be made. Because the two sets may differ in absolute value, this is the best way of comparing the change in sales between them.
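One simple way to do this indexing, as a sketch: divide each period's sales by the average of a pre-test baseline window, so both series are rebased to 100 regardless of their absolute size (all figures below are illustrative):

```python
def index_to_baseline(series, baseline_periods):
    """Rebase a sales series so the pre-test baseline window averages 100."""
    baseline = sum(series[:baseline_periods]) / baseline_periods
    return [round(100 * s / baseline, 1) for s in series]

# Illustrative weekly sales; marketing runs from week 4 in the test regions.
test_sales = [500, 520, 480, 600, 650]
control_sales = [900, 940, 860, 910, 905]

# Both series now start from a common baseline of 100.
test_index = index_to_baseline(test_sales, baseline_periods=3)
control_index = index_to_baseline(control_sales, baseline_periods=3)
```

With both series on the same scale, the post-baseline gap between them can be read directly in index points.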
As mentioned previously, by keeping all other factors the same (e.g. product availability, no differences in offering between stores), any difference seen in the test set can be attributed to the marketing activity we have run. See the chart below for a visual example of this.
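The attribution itself is then simple arithmetic, sketched below on indexed series (the figures are illustrative and both series are assumed to share a baseline of 100):

```python
# Indexed weekly sales during the test window (illustrative figures).
test_index = [120.0, 130.0]
control_index = [101.1, 100.6]

# With other factors held equal, the gap between the indexed series is
# attributed to the marketing: uplift in index points, then averaged.
uplift_points = [t - c for t, c in zip(test_index, control_index)]
avg_uplift = sum(uplift_points) / len(uplift_points)
```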
A/B Testing allows us to calculate an ROI more accurately than we otherwise could by modelling total national sales with traditional MMM, where the impact may be hard to isolate because the impacted store sales can make up only a small proportion of total sales. Store-level modelling is another solution, but for smaller ad-hoc tests it is often not cost-efficient given the resource required relative to the spend behind the test.
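As a sketch of how the measured uplift feeds an ROI figure: convert the uplift in index points back into sales units, apply a profit margin, and divide by the media spend. Every number and the margin assumption below is hypothetical:

```python
# Illustrative inputs (all assumptions, not real client figures).
baseline_weekly_sales = 500.0    # pre-test weekly average in test regions
avg_uplift_index_points = 24.15  # uplift measured against control
test_weeks = 2                   # length of the test window
profit_margin = 0.30             # assumed margin on incremental sales
media_spend = 120.0              # cost of the regional test

# Index points back to sales units, then to incremental profit and ROI.
incremental_sales = baseline_weekly_sales * (avg_uplift_index_points / 100) * test_weeks
incremental_profit = incremental_sales * profit_margin
roi = incremental_profit / media_spend
```

An ROI above 1 would suggest the test paid for itself in margin terms; whether to re-run it also depends on factors the model doesn't capture, such as longer-term brand effects.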
A/B Testing doesn't just provide data-driven proof that a test has or hasn't worked; it also provides the opportunity to apply insights about consumer behaviour across other areas of your marketing. If a regional test worked when marketing a particular product, it may well work again through a different channel.
We have found that regular testing works best, as it helps to reach the optimal marketing spend and laydown more quickly within each channel, making for more fruitful marketing. And of course, much like our favourite seed experiment, shedding more light on marketing will only help its impact grow!
Brightblue Consulting is a London-based consultancy that helps businesses drive incremental profit from their data. We provide predictive analytics that enable clients to make informed decisions based on data and industry knowledge. Through Market Mix Modelling, a strand of econometrics, Brightblue has a proven track record of delivering a 30% improvement in marketing Return on Investment on clients' spend. If you are interested in finding out more, please contact us by email and one of our consultants will get back to you shortly.