The conversion funnel is one of the most vital components of a successful eCommerce strategy. A single step can mean the difference between a customer converting and a customer abandoning the cart. Our eCommerce optimization specialist, Matt Beischel, took a few minutes to discuss some common mistakes retailers make when optimizing their conversion funnels.
1. Not Looking At the Impact of Each Step On the Full Conversion Funnel
A common mistake during the testing process is that testers become hyper-focused on a specific KPI and fail to notice other metrics affected by the components being tested. If revenue is the only metric you focus on, you may miss that you are losing orders, which can hurt your success in the long term. It is imperative that you track as many relevant metrics as possible, both broad and specific.
Beischel notes, “You have to evaluate holistically, but test in segments. You only test a single location at a time, but you have to form an understanding of how what you’re testing may have an effect further down the line in the funnel. You watch that by tracking all the metrics beyond the point that you’re testing.”
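Beischel's advice to "test in segments" but "evaluate holistically" can be sketched in code. The example below is a hypothetical illustration, not his actual tooling: the session records, step names, and revenue figures are all invented. The idea is that a report for a test on one step still tallies every downstream funnel step, plus revenue and average order value, per variant.

```python
# Hypothetical sketch: when testing one step, report metrics for every
# downstream step too, not just the primary KPI. Data and names are invented.
from collections import defaultdict

# Each record: (variant, furthest_step_reached, revenue_for_session).
sessions = [
    ("control", "purchase", 50.0),
    ("control", "checkout", 0.0),
    ("variant", "purchase", 80.0),
    ("variant", "cart", 0.0),
    ("variant", "purchase", 75.0),
]

FUNNEL = ["cart", "checkout", "purchase"]

def funnel_report(records):
    """Per-variant counts for every funnel step, plus revenue totals."""
    report = defaultdict(lambda: {"sessions": 0, "revenue": 0.0,
                                  **{step: 0 for step in FUNNEL}})
    for variant, step_reached, revenue in records:
        row = report[variant]
        row["sessions"] += 1
        row["revenue"] += revenue
        # Reaching a later step implies passing every earlier one.
        for step in FUNNEL[: FUNNEL.index(step_reached) + 1]:
            row[step] += 1
    return dict(report)

for variant, row in funnel_report(sessions).items():
    orders = row["purchase"]
    aov = row["revenue"] / orders if orders else 0.0
    print(variant, row, f"AOV={aov:.2f}")
```

Tracking the full row per variant makes the trade-off Beischel describes visible: a variant can win on revenue while quietly losing order volume.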
2. Having a Busy Conversion Funnel
“There’s never a single solitary conversion path unless you have a very, very simple site. With more modern, complex eCommerce sites there are a lot of different entry points. There are variable paths.”
Modern eCommerce sites have numerous entry points, so conversion funnel paths have become more complex and harder to maintain. A busy conversion funnel can cost you conversions. Are there too many steps? Too many forms? How many pop-ups do you have? One of the hardest parts of resolving a busy conversion funnel is determining where to start.
Beischel suggests that tackling this problem from the bottom-up may be the best course of action. He states, “What we like to do is work from the bottom of the funnel upward because the bottom of the funnel is the point where they’re most likely to convert. You can make easier assumptions about generating changes there.”
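One way to picture the bottom-up approach is to compute step-to-step pass-through rates starting from the end of the funnel, so the leakiest step closest to the conversion gets attention first. This is a hypothetical sketch with invented step names and traffic counts, not data from the interview.

```python
# Hypothetical sketch of working "bottom-up": report pass-through rates
# starting at the step nearest the conversion. Numbers are invented.
FUNNEL_COUNTS = [            # visitors reaching each step
    ("product page", 10000),
    ("cart", 2400),
    ("checkout", 1200),
    ("purchase", 600),
]

def bottom_up_dropoff(counts):
    """Yield (from_step, to_step, pass_rate), deepest funnel pair first."""
    pairs = list(zip(counts, counts[1:]))
    for (name_a, n_a), (name_b, n_b) in reversed(pairs):
        yield name_a, name_b, n_b / n_a

for frm, to, rate in bottom_up_dropoff(FUNNEL_COUNTS):
    print(f"{frm} -> {to}: {rate:.0%} pass-through")
```

Reviewing the pairs in this order matches Beischel's reasoning: visitors at the bottom are the most likely to convert, so assumptions about changes there are easier to make and verify.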
3. Focusing Too Much on Conversions and Having Blind Spots
As we mentioned before, it is important to be mindful of all of your metrics, not just conversions. Testers need to eliminate blind spots in order to understand the full impact of their changes.
Beischel says, “You can’t put all your eggs in one basket, especially when you’re doing data analysis. You have several numbers and metrics you have to look at. You have numbers from split-testing, numbers you acquire through your regular site analysis platforms like Google Analytics, and hopefully whatever content management system you use for your sites allows you to acquire data by running reports and such (and if it doesn’t you might want to look into a better platform that does). All those different numbers are important.”
4. Not Testing Before Implementation
Not testing new changes before implementing them (especially large site-wide changes) can result in unforeseen consequences. It is important that retailers test big changes to identify potential negative impacts prior to implementation.
One client implemented a third-party feature that dynamically updates product images, so customers can see their customization options reflected as they make changes. At face value this sounds like a fantastic idea. However, “We tested the implementation of the feature and found that it was suppressing conversions rather significantly on that page. We’re back-testing that now to make sure that it’s a successful implementation so that they’re making money off of it, rather than wasting money.”
Beischel cautions that it is important to “be mindful of the technology landscape, and being able to test it really helps.”
5. Not Cutting Down on Organizational Bias
Engaging in 50/50 split-testing helps mitigate organizational bias. The great thing about split-testing is that users are not aware a test is being performed, so their reactions to a change are genuine. Tests acquire data on aggregate user behavior over time, and the resulting data drives decisions, not opinion or intuition.
Beischel adds, “In that sense, it is anonymous and the user doesn’t have any kind of bias because they’re not aware of the fact that they’re being tested on.”
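The unbiased 50/50 assignment a split test depends on is often implemented by hashing a user identifier together with an experiment name. This is a minimal, hypothetical sketch (the function and experiment names are invented, not a specific platform's API): each visitor lands in a stable, effectively random bucket without ever being told they are in a test.

```python
# Hypothetical sketch of deterministic 50/50 split-test bucketing.
import hashlib

def assign_bucket(user_id: str, experiment: str) -> str:
    """Same user + experiment always yields the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "control" if int(digest, 16) % 2 == 0 else "variant"

print(assign_bucket("user-1042", "checkout-button-test"))
```

Seeding the hash with the experiment name means the same visitor can fall into different buckets across different tests, which keeps one experiment's assignment from biasing another's.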