Ideas for new site features can come from many places. Perhaps you saw something on a competitor’s site, or read a case study about how new feature XYZ boosted conversion by 50%. Your immediate thought was, “Wow, that’s a great idea. We should do it too!” and you implemented the feature without testing it. It worked for them; it should work for you too, right? Well, maybe not. Every site is unique, with its own design, interface, demographic, and so on. What’s good for the goose is not necessarily good for the gander. What follows is a cautionary tale about the dangers of reactive implementation without testing.
The average mid-market eCommerce site has an order conversion rate of about 2%. Increasing that rate is almost always an ongoing goal, and customer retention is one way to accomplish it. Historically, we’ve seen about an 18% conversion rate for returning shoppers who already have an item in their cart, roughly 16 percentage points higher than the site as a whole. Increasing your add-to-cart rate and then enticing customers who abandon to return can have a significant effect on overall site conversion rate.
Abandoned cart email programs have proven to be a strong source of retention. The shopper has already shown interest in your product, so they’re much more likely to convert if you can get them to return. However, once they’ve returned, what then? How do you get them into the checkout funnel? What kind of on-page feature can encourage a returning shopper to complete their order?
The fewer pages and interface elements a user has to wade through, the more likely they are to convert. Building on this concept (as a direct corollary to an abandoned cart email program), we developed an “abandoned cart dialog”. The dialog acts as a reminder by displaying the shopper’s cart items, and it drives conversion with strong call-to-action buttons linking to the cart and checkout pages. It is shown to any shopper who returns to the site with items already in their cart. To avoid “popup annoyance”, the dialog has several limiters in place:
- Shown only on the entry page and not subsequent pages
- Shown only on the home, category, or product pages
- Shown only once every 30 days, reset if customer completes the order
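The limiters above amount to a simple gating check. Here is a minimal sketch in TypeScript; the function name, state shape, and page types are hypothetical illustrations, not the production implementation:

```typescript
// Hypothetical gating logic for the abandoned cart dialog.
// All names here are illustrative, not production code.
type PageType = "home" | "category" | "product" | "cart" | "checkout" | "other";

interface VisitorState {
  isEntryPage: boolean;            // true only on the first page of the session
  pageType: PageType;
  cartItemCount: number;
  lastShownDaysAgo: number | null; // null = never shown, or reset after a completed order
}

const ELIGIBLE_PAGES: PageType[] = ["home", "category", "product"];
const COOLDOWN_DAYS = 30;

function shouldShowDialog(s: VisitorState): boolean {
  if (s.cartItemCount === 0) return false;               // returning shoppers with a cart only
  if (!s.isEntryPage) return false;                      // entry page only, not subsequent pages
  if (!ELIGIBLE_PAGES.includes(s.pageType)) return false; // home, category, or product pages only
  if (s.lastShownDaysAgo !== null && s.lastShownDaysAgo < COOLDOWN_DAYS) return false; // 30-day cap
  return true;
}
```

In practice the `lastShownDaysAgo` state would live in a cookie or local storage, cleared when the shopper completes an order so the 30-day cap resets.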
Screenshot of an abandoned cart dialog:
The abandoned cart dialog will drive returning customers into the checkout funnel, thus increasing order conversions.
The experiment ran as a 50/50 split for 40 days. The audience was limited to returning visitors who already had an item in their cart.
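A 50/50 split like this is typically implemented by deterministically bucketing each visitor, so the same person always sees the same variation. A minimal sketch, assuming a stable visitor ID from a first-party cookie (this is illustrative and is not how Optimizely actually assigns variants):

```typescript
// Illustrative deterministic 50/50 bucketing, keyed on a stable visitor ID.
// A real platform would use a stronger hash, but the idea is the same.
function hashString(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) >>> 0; // unsigned 32-bit rolling hash
  }
  return h;
}

function assignVariant(visitorId: string): "control" | "dialog" {
  return hashString(visitorId) % 2 === 0 ? "control" : "dialog";
}
```

Because assignment is a pure function of the ID, a returning visitor stays in the same bucket for the whole 40-day run.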
Each key goal showed consistent improvement over the duration of the experiment:
- Cart Pageviews: 2% conversion rate (+7.1%)
- Checkout Pageviews: 43% conversion rate (+6.3%)
- Order Conversion: 24.83% conversion rate (+4.6%)
- Revenue: $99.66 revenue per visitor (+3.5%)
The order conversion rate of 24.83% is almost 7 percentage points higher than the expected 18% average. That’s a significant win over previous performance expectations! These results were so good that we decided to run this experiment on other sites. Same experiment, same configuration, same run duration:
- Cart Pageviews: 22% conversion rate (+20.5%)
- Checkout Pageviews: 94% conversion rate (+36.8%)
- Order Conversion: 10% conversion rate (+12%)
- Revenue: $86.10 revenue per visitor (+7.7%)
All the metrics increased significantly. Awesome, two for two! We’re really onto something here. Now, here’s where things get interesting. Running the experiment on a third site, we saw:
- Cart Pageviews: 16% conversion rate (-6.9%)
- Checkout Pageviews: 46% conversion rate (-4.0%)
- Order Conversion: 62% conversion rate (-0.0%)
- Revenue: $72.44 revenue per visitor (+10.3%)
All conversion metrics were noticeably down except for revenue. Well, that’s still good, right? Digging deeper, we found two atypically large outlier orders, both attributed to the dialog variation. We identified them by looking at the revenue-over-time graph in Optimizely and spotting two large spikes. Excluding those orders, revenue was actually a wash at $65.64 per visitor, a 0.1% loss.
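This kind of outlier adjustment is easy to reproduce: recompute revenue per visitor after excluding orders above a chosen threshold. The helper and sample numbers below are illustrative only, not the study’s actual data:

```typescript
// Illustrative sketch: revenue per visitor, with optional exclusion of
// outlier orders above a threshold. Sample values are made up.
function revenuePerVisitor(
  orders: number[],          // individual order values in dollars
  visitors: number,          // total visitors in the variation
  maxOrderValue = Infinity   // orders above this are treated as outliers
): number {
  const kept = orders.filter((v) => v <= maxOrderValue);
  const total = kept.reduce((sum, v) => sum + v, 0);
  return total / visitors;
}
```

Comparing the result with and without the threshold shows how heavily a couple of huge orders can skew a per-visitor revenue metric.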
The same experiment ran on three different sites, but only two saw a positive effect. These results really hammer home the importance of testing site changes. If something works for 9 out of 10 sites, well… your site might just be that 1 where it doesn’t work. Every site is unique; there is no single “best practice” that can guarantee success. Always test your way through changes.