Why aren’t my tests working?

Tom Adams, Head of Conversion Optimisation at Code, discusses the importance of user research when it comes to devising a testing programme, and shares some tips on ensuring your tests are successful (and ultimately profitable).

Gathering real evidence

Effective conversion rate optimisation is about maximising return on investment, and user research is a highly valuable investment. It’s critical to really understand what you are seeing in test results and why.

The most common reason tests fail is that the ideas behind them aren’t rigorous enough – perhaps they’re based on hunches rather than real evidence, or they’re a solution to the wrong problem.

There’s a general rule of thumb that a third of tests show a positive result, a third are negative, and the final third are inconclusive. However, the objective of a good testing programme is to beat these odds – and you do that by being prepared and creating A/B tests based on real data and insight, rather than guesswork.

There are a number of techniques you can use to gather the evidence you need, such as usability testing, eye tracking, surveys, interviews, and analysis of your web analytics and other business data. Remember: it’s always best to have a mix of behavioural (what people are doing) and attitudinal (what they are thinking or saying) insights if possible.

Choosing the right thing, the right people, and the right time

You also need to make sure you are testing the right things with the right people. This means, firstly, really analysing current website performance to find the areas of your site with the biggest commercial opportunities for improvement, i.e. which areas aren’t performing to your targets. It also means breaking your audience down into segments, e.g. returning visitors, or one set of your PPC traffic.
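As a rough illustration of that prioritisation step, the Python sketch below ranks some hypothetical site areas by the extra monthly revenue you might unlock if each hit its conversion target; the area names, traffic figures and average order value are all invented for the example.

    # Rank site areas by commercial opportunity: traffic multiplied by how far
    # each area's conversion rate falls short of its target, valued at an
    # assumed average order value. All figures below are hypothetical.
    areas = [
        # (area, monthly visits, conversion rate, target rate, avg order value)
        ("homepage",        500_000, 0.020, 0.025, 45.0),
        ("product listing", 300_000, 0.031, 0.030, 45.0),
        ("checkout",        120_000, 0.550, 0.650, 45.0),
    ]

    def monthly_opportunity(visits, rate, target, aov):
        """Estimated extra monthly revenue if the area hit its target rate."""
        return max(target - rate, 0) * visits * aov

    for name, visits, rate, target, aov in sorted(
            areas, key=lambda a: monthly_opportunity(*a[1:]), reverse=True):
        print(f"{name}: {monthly_opportunity(visits, rate, target, aov):,.0f} potential per month")

Ranking areas like this keeps the testing backlog focused on where a win is actually worth the most, rather than on whichever page someone has an opinion about.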

When you break traffic down like this, make sure you are testing on large enough segments of visitors – tests can be inconclusive simply because not enough traffic goes through them to produce a statistically valid result.
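To get a feel for what “large enough” means, here is a minimal sketch (Python, standard library only) of the usual sample-size calculation for a two-sided test on conversion rates. The 3% baseline rate and 10% relative uplift are hypothetical example figures, not numbers from any real test.

    # Visitors needed per variant to detect a given relative uplift over a
    # baseline conversion rate, using a two-sided z-test approximation.
    from statistics import NormalDist

    def sample_size_per_variant(baseline_rate, minimum_uplift,
                                alpha=0.05, power=0.80):
        p1 = baseline_rate
        p2 = baseline_rate * (1 + minimum_uplift)
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for 95% confidence
        z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return int((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2) + 1

    # A 3% baseline rate and a 10% relative uplift needs roughly 53,000
    # visitors in each variant before the result can be trusted.
    print(sample_size_per_variant(0.03, 0.10))

The smaller the uplift you want to detect, the more traffic you need – which is why testing on a thin slice of visitors so often ends up inconclusive.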

Finally, ensure you’re running your test for the right amount of time, and be aware of external factors. Your users might show different behaviour on weekends compared to weekdays, or you may have a particularly seasonal product; your competitors might be running an aggressive promotion during your test period, or performance might even be affected by the weather or something in the news.

It’s often worth re-running a test at a different point in the year too; we ran an A/B test for Asda during the back-to-school period in August, and when we re-ran exactly the same test a few weeks later, we saw slightly different results.

What to do when tests fail

It’s important to remember that sometimes tests don’t win simply because they don’t win. A hypothesis that you are completely confident in might bomb, and sometimes one that you’re taking a punt on will show a huge win – you can never be sure, and that’s why A/B testing is so valuable!

It’s OK when tests don’t work as long as you understand why they didn’t win. This means analysing all tests – losers and winners – to understand why you saw what you saw. Segment test results by different kinds of visitor, consider revalidating a winning test by re-running the test a few weeks later, and take both winning and losing tests back into the lab for user testing.
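As an illustration of that segment-level analysis, this sketch (again Python, with invented figures) runs a two-proportion z-test on each visitor segment separately, so a flat overall result doesn’t hide a segment that responded strongly in either direction.

    # Per-segment check of an A/B test using a two-proportion z-test.
    # segment -> (control visitors, control conversions,
    #             variant visitors, variant conversions) -- hypothetical data.
    from math import sqrt
    from statistics import NormalDist

    results = {
        "new visitors":       (12000, 360, 12100, 410),
        "returning visitors": ( 8000, 400,  7900, 395),
    }

    for segment, (n_a, c_a, n_b, c_b) in results.items():
        p_a, p_b = c_a / n_a, c_b / n_b
        pooled = (c_a + c_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        print(f"{segment}: uplift {(p_b - p_a) / p_a:+.1%}, p-value {p_value:.3f}")

A breakdown like this is often where the real learning sits: an overall “loser” can still reveal a segment worth designing for.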

Five testing tips

  • Invest in user research regularly. This doesn’t have to be expensive – even just a day or so of data insight and research work a month can make a huge difference.
  • Use web analytics to identify the problems on the website that present the biggest opportunities for return on investment.
  • Understand which users will visit those areas of the website.
  • Carry out research to understand those users’ behaviours and motivations, and create test hypotheses based on these insights.
  • Analyse test results in depth – whether winners or losers – so you can really understand why you saw what you saw.

Putting testing into practice for Asda.com

The programme we ran when relaunching the Asda.com homepage was an intensive set of tests, all backed up by evidence from data and user research.

By observing the way that users were interacting with the current website, we came up with the following hypothesis: "Because we saw customers not navigating to key shops (George Clothing, Money) from ASDA.com, we believe an updated site map will significantly increase click-through rate."

We then devised a number of tests to validate this hypothesis. After several rounds of tree testing, we created a sitemap that we then used to create new prototypes to test directly in front of customers. The initial results were positive, which demonstrated to the client that reorganising the way content was labelled and structured would have a massive impact.

We’re still regularly testing and iterating on the new site, refining the navigation to increase click-through rates, and the results have been great: we’ve seen a 24% increase in onward traffic, a 21% increase in revenue per month, and a 37% reduction in bounce rate.


Why the web metrics you’re measuring are wrong