
Getting Started with Conversion Rate Optimisation

Conversion Rate Optimisation (CRO) is a process of optimising aspects of your website’s design, content or overall user experience to convert the greatest number of visitors into completing an action that you have determined is important to your business.


Designing with data

CRO fits perfectly with an iterative, data-driven approach to website development. By taking small steps forward and carefully measuring the impact on your key performance indicators (KPIs) as you do, you can be constantly optimising for an ever-improving conversion rate. The changes you make don’t have to be small, but an evolutionary rather than revolutionary approach is often favoured because it reduces risk, delivers improvements much more quickly and is easier to sustain.

One of the most common tools in the conversion rate optimiser’s utility belt is split testing. First decide what design change you are going to test – it could be something as small as the label on a button or as significant as an entirely different layout and content for a page. Then decide what you will measure to determine the success of the change, and how it can be measured accurately. Ideally it will be a conversion action that the user performs directly on the test page itself, as this helps limit the number of other influencing factors.

Arriving at results that are statistically significant is particularly important if we are going to put our design decisions in the hands of data.

The numbers count

Because there are often so many potential factors at play, it is perfectly possible to test two identical designs against each other and see very different conversion rates over the short term. So while it can be tempting to rush to a conclusion as soon as the numbers start pointing one way, to reach statistically significant findings we need to ensure that our sample is large enough and adequately reflects any variables that might influence the test, such as time of day or day of week. The more variations you test, the longer you will need to run your experiment to get statistically meaningful results, which is why a straight A/B test of old vs. new often works best.
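If you are curious what that significance check looks like in practice, here is a minimal sketch in Python. It assumes the statsmodels library and uses made-up visitor and conversion counts; the testing tools discussed below do this arithmetic for you.

    # A minimal sketch of a significance check for an A/B test,
    # assuming statsmodels and made-up example numbers.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [150, 180]  # conversions for variations A and B (hypothetical)
    visitors = [3000, 3000]   # visitors served each variation (hypothetical)

    stat, p_value = proportions_ztest(conversions, visitors)

    # At a 95% significance threshold, only act on the result when p < 0.05.
    if p_value < 0.05:
        print(f"Significant difference (p = {p_value:.3f})")
    else:
        print(f"Keep the test running (p = {p_value:.3f})")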


There are two main variables when it comes to determining our sample size:

  • Our accepted level of statistical significance – i.e. how confident we want to be that the measured difference is real and not just down to chance – typically 95% is used as the threshold.
  • The size of the change we want to be able to measure – the smaller the impact we want to detect with confidence, the bigger the sample size will need to be.

To take a practical example, if we have a web page that currently delivers a 5% conversion rate and we want to be able to measure a 20% improvement with 95% confidence, we would need to test each design variation with 6,900 visitors.

Depending on how much traffic the particular page you are testing receives, you could be running that split test for a very long time. So it pays to be selective about what you test, aiming for aspects with a high baseline conversion rate or where you are confident the impact of the change will be significant. Returning to our example, if the baseline conversion rate were 50% then we would only need to test each variation with 190 visitors to detect the same 20% lift in conversions.
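If you want to reproduce this sort of calculation yourself, here is a rough sketch using Python’s statsmodels library. Be aware that the exact figures depend on assumptions such as the statistical power (80% is assumed here) and whether the test is one- or two-sided, so different calculators – and the worked examples above – won’t always agree precisely.

    # A rough sample-size sketch, assuming statsmodels and a conventional
    # 80% statistical power (an assumption; other tools may use different
    # settings and so arrive at somewhat different figures).
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    def visitors_per_variation(baseline, relative_lift, alpha=0.05, power=0.8):
        """Visitors needed per variation to detect a relative lift in
        conversion rate at the given significance level and power."""
        target = baseline * (1 + relative_lift)
        effect = proportion_effectsize(target, baseline)  # Cohen's h
        return NormalIndPower().solve_power(
            effect_size=effect, alpha=alpha, power=power,
            alternative="two-sided")

    print(round(visitors_per_variation(0.05, 0.20)))  # 5% baseline, 20% lift
    print(round(visitors_per_variation(0.50, 0.20)))  # 50% baseline, 20% lift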

If that all sounds like a bit too much maths, there are luckily plenty of tools out there specifically geared around facilitating this type of testing, including free options such as Google Experiments, which is part of Google Analytics. These tools usually manage the mechanics of the test – randomly serving different versions of the design to different users – and will crunch the numbers as the experiment runs, declaring a winning variation if one emerges.
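To illustrate the serving side, one common approach is to bucket visitors deterministically, so that the same person always sees the same variation across visits. The sketch below is purely illustrative rather than how any particular tool implements it:

    # An illustrative sketch of deterministic bucketing: hash a stable
    # visitor id so each visitor is consistently shown the same version.
    # (Not the implementation of any particular testing tool.)
    import hashlib

    def assign_variation(visitor_id, variations=("control", "variant")):
        digest = hashlib.sha256(visitor_id.encode()).hexdigest()
        return variations[int(digest, 16) % len(variations)]

    print(assign_variation("visitor-42"))  # the same id always gets the same version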

Running a test continuously over the course of a week or two is usually long enough to factor out variables such as time of day and day of week, but it is worth thinking about whether longer-term variables might also impact the results. For example, a hero image that tests well in summer might not be the best option in winter. If in doubt, there is no harm in testing again in the future to re-validate your findings.

Knowing what to optimise

Of course the hard part is knowing what to test. This is where experience can really help, but you can also look to your existing analytics data for clues. Something like Google’s In-Page Analytics or a heat mapping service like CrazyEgg can help identify parts of a page that aren’t working – for example a key call-to-action that isn’t getting the click-throughs you would hope for.

Knowing what to test

Having identified a potential issue, it can be a good idea to dig into the why a little before jumping to a solution, of which there might be many. For example, a quick bit of usability testing could provide the answer. This more qualitative data – “I just didn’t see it” or “I thought that did something else” – might tell you whether changing the colour, moving or simply re-labelling your call-to-action is likely to be the most fruitful option to try.

Sometimes your optimisations might be more speculative: if we reduce the number of form fields, what impact will that have on conversions? If we add a reading time to posts, will people spend longer on the page? You might also look to the design patterns used by other sites for ideas, particularly those in your industry, and take inspiration from how they have approached similar design challenges.

Make CRO part of your process

Conversion Rate Optimisation doesn’t need to be something distinct from your existing development process. In fact it is best if you make it part of your normal workflow. Email marketing is a good illustration of how you can do this in practice. Every time you send out a campaign, try to test something different: time of day, day of week, subject line, the labelling of a button. Email marketing providers like MailChimp make setting up this kind of split test incredibly simple, and can even automatically identify a winning variation part way through sending and deliver that optimum version of the email to any remaining recipients.

Identify one or two potential conversion optimisations to test each month, prioritising those you can test quickly and accurately and that could deliver the greatest returns.

Offline conversion rate optimisation can be more challenging because the results aren’t as easy to measure accurately, but it is possible. Unique phone numbers, URLs or offer codes are the most common ways of introducing some basic tracking, and these can be twinned with more complex methods such as response curves to help attribute conversions back to a particular source.
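As a simple illustration of the offer code approach, the sketch below tallies conversions by the offline source each unique code was assigned to. The codes and sources are entirely hypothetical:

    # A hypothetical sketch of offer-code attribution: each unique code
    # maps to the offline source it appeared in (all names made up).
    from collections import Counter

    CODE_SOURCES = {
        "SPRING10": "print ad",
        "RADIO10": "radio spot",
        "FLYER10": "flyer drop",
    }

    def attribute(redeemed_codes):
        """Count conversions per offline source from redeemed offer codes."""
        return Counter(CODE_SOURCES[c] for c in redeemed_codes if c in CODE_SOURCES)

    print(attribute(["SPRING10", "SPRING10", "RADIO10"]))
    # Counter({'print ad': 2, 'radio spot': 1})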

We have written before that designing with data doesn’t remove the need for experience and a good design sensibility to provide direction, and it is fair to say that you can’t test your way from a poor design to a great one. But through this kind of continuous measuring and testing you can identify specific optimisations and, over time, build up a more general picture of your audience and what does and doesn’t work for them. This accumulated knowledge can help you make more informed design decisions in the future, and the process of testing regularly attunes you to thinking about your users, complementing any other forms of user research you might already be carrying out.


Written by

Nick Barron

In his role as UX Director Nick ensures that everything we do reflects a clear understanding of our clients’ aims as well as the expectations of their audiences.