Wednesday, February 07, 2007

Uses of Lifetime Value - Part 2: Component Analysis

Yesterday I began discussing the uses of Lifetime Value models. The first set of applications use the model outputs themselves—the actual estimates of Lifetime Value for individual customers or customer groups. But in many ways, the components that go into those estimates are more useful than the final values. Today we’ll look at them.

All lifetime value calculations ultimately boil down to the same formula: lifetime revenue minus lifetime costs. These in turn are always built from the same customer interactions—promotions, purchases, support requests, and so on. If you only want to look at the final LTV number, it doesn’t matter how these elements are used to get it. But if you want to understand what went into the number, the model must be built with components that make sense in your particular business.

For example, magazine publishers think primarily in terms of copies sold. The revenue portion of a publisher’s lifetime value model will therefore be: copies sold x revenue per copy. Product, service and most other costs will also be stated in terms of cost per copy. The primary exception is promotion costs, which are typically listed separately for initial acquisition. Renewal promotions can also be listed separately, although they are sometimes so negligible that they are simply lumped into the per copy cost with the rest of customer service. In sum, then, a publisher’s lifetime value model might look like:

LTV = (number of initial copies x (initial price per copy – initial cost per copy))
+ (number of renewal copies x (renewal price per copy – renewal cost per copy))
– (acquisition cost)
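
To make the structure concrete, here is a quick sketch of that calculation in Python. The function signature and the sample figures are purely illustrative, not drawn from any actual publisher.

```python
# Minimal sketch of the publisher LTV formula above.
# All parameter names and sample figures are illustrative assumptions.

def publisher_ltv(initial_copies, initial_price, initial_cost,
                  renewal_copies, renewal_price, renewal_cost,
                  acquisition_cost):
    """LTV = initial contribution + renewal contribution - acquisition cost."""
    initial_contribution = initial_copies * (initial_price - initial_cost)
    renewal_contribution = renewal_copies * (renewal_price - renewal_cost)
    return initial_contribution + renewal_contribution - acquisition_cost

# Example: a subscriber who takes 12 initial copies and 24 renewal copies.
print(publisher_ltv(initial_copies=12, initial_price=3.00, initial_cost=1.75,
                    renewal_copies=24, renewal_price=3.50, renewal_cost=1.75,
                    acquisition_cost=20.00))   # 37.0
```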

Note that number of orders, value per order, and average years per customer—all seemingly natural components for a lifetime value model—do not even appear.

In practice, some of those details may well be used in the model. For example, the number of orders is needed to calculate order processing costs accurately. Similarly, the timing of the events is needed to do discounted cash flow analysis. But those details are not necessarily useful to managers trying to understand the general state of their business. This means they can be hidden within the lifetime value calculation and not displayed in most reports.
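
As one sketch of how that hiding might work, the renewal component below keeps the per-year timing internally for discounting but reports only a single rolled-up figure. The 10% discount rate and the cash-flow layout are assumptions for illustration.

```python
# Sketch: event-level detail (timing of cash flows) rolled up into one
# reported component. Discount rate and cash-flow layout are assumptions.

def discounted_renewal_contribution(yearly_renewal_profit, discount_rate=0.10):
    """Collapse per-year renewal profits into a single present value.

    yearly_renewal_profit: profit in year 1, year 2, ... (year 1 is discounted once).
    """
    return sum(profit / (1 + discount_rate) ** year
               for year, profit in enumerate(yearly_renewal_profit, start=1))

# Reports would show only the rolled-up number, not the yearly detail.
print(discounted_renewal_contribution([42.00, 38.00, 30.00]))   # about 92.1
```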

Some traditional industry metrics may not fit naturally into the lifetime value calculation. Sticking with magazines, publishers traditionally look at renewal rates as a key indicator of business health. You could restructure the previous model to include renewal rates, but it’s not clear this gives more insight than average renewal copies per customer. In fact, there’s a good argument that renewal rates are actually a less useful measure because they are impacted by extraneous factors such as changes in the renewal offers.

The point here is simply that the components of the lifetime value model are intelligible only if they match the industry at hand. More specifically, they must show the key performance factors that determine the health of the business.

The special advantage of using model components as key performance indicators is that you can show the impact of any change in terms of actual business value.

This is a key point. It’s easy to come up with a list of important business metrics. But it’s not necessarily clear what a change in, say, a customer satisfaction rating actually means in terms of profit. At best there may be some rules of thumb based on historical correlations. This may carry some weight with decision-makers, but it is nowhere near as compelling as a statement that lifetime revenue per customer has dropped 2%, which translates into $145 million in future value. Even though the number is known to be an estimate, it has a specific value that immediately indicates its approximate importance and therefore how urgently managers should react to it.
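
The arithmetic behind that kind of statement is simple. Here is a back-of-the-envelope version; the customer count and revenue figure are invented so the numbers line up with the example above.

```python
# Sketch: translating a component change into a dollar figure.
# Customer count and revenue per customer are illustrative assumptions.

customers = 5_000_000
lifetime_revenue_per_customer = 1_450.00   # prior estimate
decline = 0.02                             # the 2% drop

dollar_impact = customers * lifetime_revenue_per_customer * decline
print(f"Estimated future value at risk: ${dollar_impact:,.0f}")
# Estimated future value at risk: $145,000,000
```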

The other advantage of dealing with model components is that the connections among them are clear. If the 2% revenue decline is accompanied by a 5% decrease in acquisition costs, or perhaps a 10% increase in number of new customers, managers can see immediately whether there is really a problem, or in fact something has gone very right. Although using the model for what-if modeling is a topic for another day, simply laying out the relationships among the key performance indicators improves the ability of everyone in the company to understand how the business works.

Of course, interpreting the values of LTV components is difficult in isolation. Is an average life of 2.5 years good or bad? Experienced managers will have some sense of reasonable values based on their own backgrounds. But even they need to look at the numbers in comparison with something.

The two major bases for comparison are time periods and customer segments. Trends in measures over time are easy to understand—either they’re up or down, and depending on whether they are revenues or costs that’s either a good or a bad thing. Again, one virtue of model-based components is you can see the changes in context: if revenue went down but cost went down more, maybe things are really okay.

The interval at which you measure trends will depend on the business—it could be yesterday vs. today or it could be this year vs. last year. But since lifetime value is a long-term measure, you have to be careful not to react to random swings over short time periods. The amount of time you need to wait to detect statistically significant differences will depend mostly on the volume of data available. You also need to be sensitive to external influences such as seasonality.

Customer segments are more complicated than time periods simply because there are so many more possible definitions. The segments could be based on customer demographics, purchase behavior, start date, acquisition source, initial offer, initial product, or just about anything else. There’s no need to pick just one set: different segmentations will matter for different purposes. Whatever definitions you use, you’ll compare different segments to each other, and to themselves over time.

In fact, the first explanation to consider for many changes between time periods is that the mix of customer segments has changed. This will change aggregate lifetime value for the business even if behavior within each segment is the same. This in itself is a useful finding, of course, since it immediately points the rest of the analysis towards understanding how and why the customer mix changed.
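
A simple way to see a mix effect is to recompute the aggregate figure from per-segment values: if each segment's LTV is unchanged but the blend shifts toward lower-value segments, the overall number still falls. The segment names and figures below are hypothetical.

```python
# Sketch: aggregate LTV can fall purely because the segment mix shifted.
# Segment names, customer counts, and LTV figures are hypothetical.

prior   = {"direct mail": (100_000, 80.0), "web": (50_000, 40.0)}  # (customers, LTV)
current = {"direct mail": (60_000, 80.0),  "web": (90_000, 40.0)}  # same per-segment LTV

def aggregate_ltv(segments):
    total_customers = sum(n for n, _ in segments.values())
    total_value = sum(n * ltv for n, ltv in segments.values())
    return total_value / total_customers

print(round(aggregate_ltv(prior), 2))    # 66.67
print(round(aggregate_ltv(current), 2))  # 56.0  (lower, though no segment changed)
```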

And that is exactly the point: we look at the value of LTV components not because they're fascinating in themselves or because we want to know whether to draw a happy face or sad face on the cover of the report (anyone who does that should be fired immediately anyway, so it should never be an issue). We look at them because they indicate at a high level what's happening in the business, giving us hints of what needs to be examined more closely.

A good LTV system enables this closer examination as well. Drill-downs should permit us both to examine the basic model components for different time periods and customer segments, and to explore the details within the components themselves. At some point you will reach the finest level of detail captured in the LTV system and have to look elsewhere for additional explanations—it doesn’t make sense for the LTV system to incorporate every bit of information in the company. But making large amounts of detail accessible without leaving the system is definitely a Good Thing.

Important as drill-downs are, they rely on a manager or analyst to do the drilling. A really good LTV system does some of that drilling automatically, identifying trends or variances that warrant further exploration. Now we’re entering the realms of automated data mining, but this doesn’t have to be particularly esoteric. Since the LTV model captures the relationships among components within the LTV calculation, an LTV system can easily calculate the impact of a change in any one component on the final value itself. Multiplied by the number of customers, this gives a dollar amount that can be used to rank all observed changes by importance. Where the number of customers itself changes between periods, the system can further divide the variance into rate, volume and joint variances—a classic analysis that is easy to do and understand.
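
The rate/volume/joint split is easy to implement once the component values and customer counts are available for both periods. A minimal sketch, with generic names:

```python
# Sketch of the classic rate / volume / joint variance split for one component.
# Inputs are customer counts and value per customer for two periods.

def variance_split(prior_customers, prior_value, current_customers, current_value):
    """Decompose the change in total value (customers x value per customer)."""
    rate   = prior_customers * (current_value - prior_value)
    volume = prior_value * (current_customers - prior_customers)
    joint  = (current_customers - prior_customers) * (current_value - prior_value)
    total  = current_customers * current_value - prior_customers * prior_value
    assert abs(rate + volume + joint - total) < 1e-6   # the pieces sum to the total
    return {"rate": rate, "volume": volume, "joint": joint, "total": total}

print(variance_split(prior_customers=100_000, prior_value=80.0,
                     current_customers=110_000, current_value=78.0))
# {'rate': -200000.0, 'volume': 800000.0, 'joint': -20000.0, 'total': 580000.0}
```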

Doing this sort of automated analysis on figures for the company as a whole is probably overkill. After all, the top-line LTV model will probably have just a dozen or so components. Managers can eyeball changes in those pretty easily. More important, stable figures for the company as a whole can easily mask significant changes in the behavior of particular segments. It's therefore more important for the LTV system to automatically examine changes in component values for a standard set of customer segments and to identify any significant variances at that level. The ranking mechanism—number of customers x change in value per customer—is exactly the one already described. A really advanced system would even find patterns among the variances, such as a weakness in retention rates across multiple segments. That one might require some serious artificial intelligence.
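
A sketch of that per-segment scan follows. It assumes component values have already been tabulated by segment for both periods, and it ranks changes using the dollar mechanism described above (which period's customer count to use as the multiplier is a judgment call).

```python
# Sketch: scan a standard set of segments and rank changes by dollar impact.
# The data layout (segment -> (customers, value per customer)) is an assumption.

def rank_segment_variances(prior, current):
    """Order segments by |customers x change in value per customer|."""
    impacts = []
    for segment, (n1, v1) in current.items():
        _, v0 = prior.get(segment, (0, 0.0))
        impacts.append((segment, n1 * (v1 - v0)))   # current count x value change
    return sorted(impacts, key=lambda item: abs(item[1]), reverse=True)

prior   = {"new-to-file": (40_000, 55.0), "multi-year": (120_000, 95.0)}
current = {"new-to-file": (48_000, 50.0), "multi-year": (118_000, 94.0)}

for segment, impact in rank_segment_variances(prior, current):
    print(f"{segment}: {impact:+,.0f}")
# new-to-file: -240,000
# multi-year: -118,000
```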

One problem with any automated detection system is false alarms. If the company is purposely managing a change in its business—say, by increasing prices—a system that simply compared past vs. current periods might find significant variances that are totally expected. Although these can easily be ignored, the real problem is that comparisons against the past won't tell whether the observed changes are actually in line with the changes anticipated in the business plan. This means that comparisons by time period and customer segment must be joined by a third dimension: comparisons against forecasted values. I'll talk about forecasts tomorrow.
