A major retailer recently removed an A/B testing service from its website. Average load time improved by nine per cent. Conversions rose by ten per cent. Bounce rate also improved by four per cent, and engagement (measured by pages per session) increased by two per cent.
The graph below, from our Real User Monitoring service, illustrates the improvement in load times. The black line shows average load time, and the large orange area highlights where the testing service was removed.
Performance Trends in Real User Monitoring: the black line shows average page load time. The blue/grey area around it represents the expected range of load times, based on past data. The orange highlighting is for anomalies – load times falling outside the expected range.
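To make the "expected range" idea concrete, here is a minimal sketch of one common way such a band can be derived from past load times: a mean plus or minus a few standard deviations. This is illustrative only, not the Real User Monitoring service's actual anomaly algorithm, and the numbers are invented.

```python
# Minimal sketch of an "expected range" band derived from past load
# times (seconds). Illustrative only; not the RUM service's actual
# anomaly-detection algorithm.
from statistics import mean, stdev

def expected_range(history, k=2.0):
    """Return (low, high) bounds: mean +/- k standard deviations."""
    m, s = mean(history), stdev(history)
    return (m - k * s, m + k * s)

def is_anomaly(value, history, k=2.0):
    """A load time is anomalous if it falls outside the band."""
    low, high = expected_range(history, k)
    return value < low or value > high

past = [3.1, 2.9, 3.3, 3.0, 3.2, 2.8, 3.1]
print(is_anomaly(3.0, past))  # within the band -> False
print(is_anomaly(1.9, past))  # well below it   -> True
```

A sudden, sustained drop in load times, like the one after the testing service was removed, falls below the band built from the slower historical data, which is why it shows up as an anomaly.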
Why testing can have a negative impact on marketing KPIs
This has two implications for marketers.
1) Testing can damage sales
A process designed to make a website more effective could actually be having the opposite effect. Slowing a website down has been shown to damage conversions, increase bounce rates and reduce sales.
This means that if the testing process is continuous, you might never see any benefit. Positive effects from picking the right variations could easily be outweighed by the negative impact of a slow site.
2) You can’t be sure of the results
Imagine you want to find out which version of a page is more effective: A or B. The test slows both versions down by ten per cent on average. Because of this, a lot of visitors give up before seeing either version. Others are frustrated and behave differently once the page has finished loading.
Version A wins. So you halt the test, roll out version A, and the site speeds up again.

However, the testing conditions were different from the rollout conditions.
It could be that version B would have won if the site had been as fast during the test as it was afterwards.
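The flip described above is easy to reproduce in a toy model. In the sketch below, conversion falls off exponentially with load time (an assumption, not measured behaviour), version B is more persuasive but slightly heavier than version A, and all numbers are invented for illustration. A proportional slowdown hurts the heavier page more, so the test picks a different winner than real-world conditions would.

```python
# Toy model of how a test-induced slowdown can flip the winner.
# The decay model and all numbers are invented for illustration.
import math

DECAY = 0.5  # assumed sensitivity of conversion to load time

def conversion(base_rate, load_seconds):
    """Conversion rate falls off exponentially with load time."""
    return base_rate * math.exp(-DECAY * load_seconds)

# Version B is more persuasive (higher base rate) but heavier (slower).
variants = {"A": (0.08, 2.0), "B": (0.11, 2.6)}

def winner(slowdown=1.0):
    """Name of the variant with the higher conversion rate."""
    rates = {name: conversion(base, load * slowdown)
             for name, (base, load) in variants.items()}
    return max(rates, key=rates.get)

print(winner(slowdown=1.1))  # during the test (10% slower) -> A
print(winner(slowdown=1.0))  # at full speed after rollout  -> B
```

With these particular numbers, version A wins while the testing overhead is in place, but version B would have won at full speed, which is exactly the risk the test conditions hide.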
Testing doesn’t always take performance into account
A related point is that A/B testing often works by hiding both versions of a page before selectively revealing one or the other. This can mean that it fails to account for the effect of slow-loading elements.
For example, say page A contains a large, slow-loading image, while page B contains a smaller, faster-loading image. Under normal conditions, page A would be slower to finish displaying, delivering a poorer experience than page B. However, your testing service may hide all content until its script has chosen a variant, revealing whichever version the visitor gets at the same point in time, so the test never captures page A's slower display.
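The masking effect can be sketched in a few lines. Here the timings and the fixed reveal point are hypothetical; the point is only that a "hide everything, then reveal" approach makes two pages with different load times appear at the same moment.

```python
# Sketch of why an anti-flicker style reveal can mask load-time
# differences between variants. All timings (seconds) are hypothetical.

SCRIPT_REVEAL = 2.6  # the testing script reveals content at this point

def perceived_display(page_load, hidden_until_reveal):
    """Time at which the visitor first sees the finished page."""
    if hidden_until_reveal:
        # Content stays hidden until the script's reveal, so a variant's
        # own load time only matters if it is slower than the reveal.
        return max(page_load, SCRIPT_REVEAL)
    return page_load

load_a = 2.5  # page A: large, slow-loading image
load_b = 1.2  # page B: smaller, faster-loading image

# Without hiding, B clearly displays sooner than A.
print(perceived_display(load_a, False), perceived_display(load_b, False))
# With hiding, both appear at the same moment: the difference vanishes.
print(perceived_display(load_a, True), perceived_display(load_b, True))
```

In other words, the very mechanism that prevents visitors from seeing the "wrong" variant flash up can also erase the performance difference the test should be measuring.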
Testing is important in marketing. Offers, hero images, calls to action – marketers are involved in a constant cycle of testing, refining and retesting. Everything is geared towards ratcheting up conversion, engagement and revenue. For this reason, A/B and multivariate testing are common on ecommerce websites.
It’s just important to remember that there’s a cost too.
So before doing this kind of testing, it’s a good idea to consider the following:
- Understand the impact: find out how much testing will slow your website down – and how that slowdown would affect your sales. Real User Monitoring can help you predict this. Other solutions, such as Performance Analyser, can help you understand the impact of a testing service on the user experience (as can our web performance experts!).
- Pick a testing solution that has a minimal impact on site speed.
- Only test if you expect the value of the test to outweigh the cost.
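The last point can be turned into a rough back-of-envelope check. All of the figures below are hypothetical, and a real comparison would also account for statistical uncertainty, but the shape of the calculation is the same: weigh the value you expect the winning variation to generate against the conversions you expect to lose to the slower site while the test runs.

```python
# Back-of-envelope check: does the expected value of a test outweigh
# its cost? All figures are hypothetical.

visitors_per_month  = 100_000
revenue_per_order   = 50.0
test_months         = 2

expected_uplift     = 0.002  # hoped-for gain in conversion rate
slowdown_penalty    = 0.001  # conversion rate lost to the slower
                             # site while the test is running

# Value: the uplift, earned each month after rollout (say, 12 months).
value = visitors_per_month * expected_uplift * revenue_per_order * 12

# Cost: revenue lost to the slowdown during the test itself.
cost = visitors_per_month * slowdown_penalty * revenue_per_order * test_months

print(value, cost, value > cost)  # 120000.0 10000.0 True
```

With these invented numbers the test is still worth running; with a smaller expected uplift or a heavier testing script, the answer can easily go the other way.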
Also in this series:
Ten things every marketer should know about web performance…