Tuesday, March 20, 2007

Proving the Value of Site Optimization

Eric’s comment on yesterday’s post, to the effect that “There shouldn’t be much debate here. Both full and fractional designs have their place in the testing cycle,” is a useful reminder that it’s easy to get distracted by technical details and miss the larger perspective of the value provided by testing systems. This in turn raises the question, posed implicitly by Friday’s post and Demi’s comment, of why so few companies have actually adopted these systems despite the proven benefits.

My personal theory is that it has less to do with a reluctance to be measured than with a lack of time and skills to conduct the testing itself. You can outsource the skills part: most if not all of the site testing vendors have staff to do this for you. But time is harder to come by. I suspect that most Web teams are struggling to keep up with demands for operational changes, such as accommodating new features, products and promotions. Optimization simply takes a lower priority.

(I’m tempted to add that optimization implies a relatively stable platform, whereas things are constantly changing on most sites. But plenty of areas, such as landing pages and checkout processes, are usually stable enough that optimization is possible.)

Time can be expanded by adding more staff, either in-house or outsourced. This comes down to a question of money. Measuring the financial value of optimization comes back to last Wednesday's post on the credibility of marketing metrics.

Most optimization tests seem to focus on simple goals such as conversion rates, which have the advantage of being easy to measure but don’t capture the full value of an improvement. As I’ve argued many times in this blog, that value is properly defined as change in lifetime value. Calculating this is difficult and convincing others to accept the result is harder still. Marketing analysts therefore shy away from the problem unless pushed to engage it by senior management. The senior managers themselves will not be willing to invest the necessary resources unless they believe there is some benefit.
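To make the contrast concrete, here is a minimal sketch of the idea. Every number is invented for illustration, and the constant-retention LTV model is a deliberate simplification, not a recommended methodology:

```python
# Hypothetical illustration: valuing a conversion-rate lift by
# immediate first-order margin vs. by customer lifetime value (LTV).
# All figures are made up for the sake of the example.

def simple_ltv(avg_order_value, margin, orders_per_year,
               retention_rate, discount_rate, years=10):
    """Discounted LTV of one new customer under a constant-retention
    model: sum of annual profit * retention^t / (1 + d)^t."""
    annual_profit = avg_order_value * margin * orders_per_year
    ltv, survival = 0.0, 1.0
    for year in range(years):
        ltv += annual_profit * survival / (1 + discount_rate) ** year
        survival *= retention_rate
    return ltv

visitors = 100_000
baseline_cr, improved_cr = 0.020, 0.024   # a 20% relative lift
extra_customers = visitors * (improved_cr - baseline_cr)

# Value of the lift counting only the first order's margin...
immediate_value = extra_customers * 80 * 0.25
# ...versus counting each new customer's discounted lifetime value.
ltv_value = extra_customers * simple_ltv(80, 0.25, 2.0, 0.6, 0.10)

print(f"Extra customers won:          {extra_customers:.0f}")
print(f"Value by first-order margin:  ${immediate_value:,.0f}")
print(f"Value by lifetime value:      ${ltv_value:,.0f}")
```

Under these assumed inputs the LTV-based value comes out several times larger than the first-order value, which is precisely why conversion-rate measures understate the payoff.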

This is a chicken-and-egg problem, since the benefit from lifetime value analysis comes from shifting resources into more productive investments, but the only way to demonstrate this is possible is to do the lifetime value calculations in the first place. The obstacle is not insurmountable, however. One-off projects can illustrate the scope of the opportunity without investing in a permanent, all-encompassing LTV system. The series of “One Big Button” posts culminating last Monday described some approaches to this sort of analysis.

Which brings us back to Web site testing. Short-term value measures will at best understate the benefits of an optimization project, and at worst lead to changes that destroy rather than increase long-term value. So it makes considerable sense for a site testing trial project to include a pilot LTV estimate. It’s almost certain that the estimated value of the test benefit will be higher when based on LTV than when based on immediate results alone. This higher value can then justify expanded resources for both site testing and LTV.
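The worst case mentioned above, where a short-term winner destroys long-term value, can also be sketched with hypothetical numbers. Suppose a test variant lifts conversion by 10% but attracts less loyal customers (all inputs below are assumptions for illustration):

```python
# Hypothetical illustration: a variant that wins a conversion-rate
# test can still lose on lifetime value if the customers it attracts
# have lower retention. All numbers are invented for the example.

def simple_ltv(annual_profit, retention_rate, discount_rate=0.10, years=10):
    """Discounted LTV per customer under a constant-retention model."""
    return sum(annual_profit * retention_rate**t / (1 + discount_rate)**t
               for t in range(years))

annual_profit = 40.0  # assumed margin per customer per year

# Baseline: 2.0% conversion, 60% annual retention.
# Variant:  2.2% conversion (a 10% lift), but only 40% retention.
baseline_value = 0.020 * simple_ltv(annual_profit, 0.6)
variant_value = 0.022 * simple_ltv(annual_profit, 0.4)

print(f"Value per visitor, baseline: ${baseline_value:.2f}")
print(f"Value per visitor, variant:  ${variant_value:.2f}")
# The variant "wins" the conversion test yet is worth less per visitor.
```

On these assumptions the variant beats the baseline on conversion but is worth roughly a fifth less per visitor, so a conversion-only scorecard would pick the losing design.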

And you thought last week’s posts were disconnected.
