Yesterday’s post discussed how values for LTV components can be compared across time and customer segments to generate insights into business performance. But even though such comparisons may uncover trends worth exploring, they do not tell managers what they really need to know: is the business running as planned? To answer that question, actual LTV figures must be compared with a forecast.
The mechanics of this comparison are easy enough and pretty much identical to comparisons against time or customer segments. The real question is where the forecast values will come from.
You’re expecting me to say they’ll be generated by the lifetime value model itself, aren’t you? Well, maybe. The problem is that business plans aren’t built around LTV models. They’re built around projects: marketing campaigns, sales programs, product introductions, plant openings, system deployments, and the rest. (Of course, some companies just plan by projecting from last year’s figures. It’s easy to calculate the expected LTV changes implicit in such a plan, since there is no program detail to worry about.)
The trick, then, is to convert project plans into LTV forecasts. In a sense, this is easy: all you have to do is estimate the change in LTV components that will result from each project. But building such estimates is hard.
It’s hard for two reasons. First, most business projects are not conceived in LTV terms. They are based on adding new customers or increasing retention or cutting costs or whatever. To build them into an LTV forecast, these objectives must be restated as changes in LTV components.
Much of the information needed to define the component changes will have already been assembled during the original project analysis. With this as a base, creating the component forecast is more a matter of reconfiguring existing information than developing anything new. One exception is the difference in time frame: many project plans are aimed at short-term results, while LTV by definition includes behavior over a long horizon. This is actually a benefit of doing the LTV forecast, since it forces managers to consider the long-term effects of their actions. But it also requires more work as part of the planning process. Companies will need to develop a reasonable approach to this issue and then train managers to apply it consistently.
The work is somewhat reduced by the fact that most projects are really focused on just a few LTV components. For example, a retention project is mostly about increasing the length of the customer’s lifetime. This means the LTV impact can be defined as changes in only the affected components, without considering the others. Even though this is oversimplifying a bit, it’s a reasonable shortcut to take for practical purposes. (On the other hand, one of the benefits of using LTV as a company-wide management metric is that it encourages everyone to consider the impact that the efforts of their group have on other departments and the customer experience. So you do want managers to at least consider the effects of their projects across all LTV components.)
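To make that concrete, here is a minimal sketch in Python of what restating a project as component changes might look like. The component names, the segment label, and the 0.3-year tenure figure are all hypothetical, not taken from any real plan.

    # Minimal sketch: restating a project plan as expected changes to LTV
    # components, leaving untouched components at zero. Component names and
    # the 0.3-year figure are purely illustrative.
    from dataclasses import dataclass, field

    LTV_COMPONENTS = ["acquisition_cost", "revenue_per_year",
                      "cost_per_year", "expected_tenure_years"]

    @dataclass
    class ProjectPlan:
        name: str
        affected_segments: list                 # which customers the project touches
        component_deltas: dict = field(default_factory=dict)  # component -> expected change

        def full_delta(self):
            """Expand to every component, defaulting untouched ones to zero."""
            return {c: self.component_deltas.get(c, 0.0) for c in LTV_COMPONENTS}

    # A retention project mostly shows up as a change in expected tenure.
    retention_project = ProjectPlan(
        name="loyalty mailing",
        affected_segments=["tenure 0-12 months"],
        component_deltas={"expected_tenure_years": 0.3},
    )
    print(retention_project.full_delta())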
The second and even more challenging problem with defining the LTV impact of individual projects is that nearly all projects affect only a subset of the entire customer base. An acquisition program in marketing only affects the new customers it attracts; a change in customer service only affects people who call in with problems; an improvement to a product only affects people who buy it.
Counting the number of customers affected by a program isn’t that difficult. That number will always be part of the project plan to begin with. But the LTV analysis needs to know who these people are so it can determine their baseline LTV component values. Many project plans do not go into this level of detail.
Some attributes of the affected customers will be obvious. They are customers from a particular source or users of a particular product or customers in a particular channel. But it’s also important to remember that those affected will be at different stages in their life cycle: that is, some will be newer than others. (New customer acquisition programs are the obvious exception.)
Since future LTV usually changes as customers stay around longer (generally increasing, sometimes decreasing), it would be a big mistake to use the new customer LTV as a baseline. Instead, you have to identify the future LTV for each set of customers affected by the program, segmenting them on tenure in addition to whatever other attributes you’ve identified. As discussed yesterday, a good LTV system should provide these segmentation capabilities.
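As a rough illustration of that segmentation step, here is a sketch that assumes the LTV system can export customer-level component estimates along with the relevant attributes; all of the column names and figures are hypothetical.

    # Sketch: baseline LTV components by segment, split on tenure as well as
    # the attributes that define who the program touches. Column names are
    # assumptions about what the LTV system exports.
    import pandas as pd

    customers = pd.DataFrame({
        "customer_id":           [1, 2, 3, 4],
        "acquisition_source":    ["web", "web", "retail", "retail"],
        "tenure_months":         [3, 30, 5, 48],
        "future_ltv":            [120.0, 310.0, 95.0, 400.0],
        "expected_tenure_years": [1.2, 3.5, 1.0, 4.2],
    })

    # Tenure bands matter because future LTV changes as customers age.
    customers["tenure_band"] = pd.cut(
        customers["tenure_months"], bins=[0, 12, 36, 999],
        labels=["0-12m", "13-36m", "37m+"])

    baseline = (customers
                .groupby(["acquisition_source", "tenure_band"], observed=True)
                .agg(customer_count=("customer_id", "count"),
                     avg_future_ltv=("future_ltv", "mean"),
                     avg_expected_tenure=("expected_tenure_years", "mean")))
    print(baseline)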
Once you’ve calculated the baseline LTV components for the major segments, it’s tempting to aggregate them into a single figure before proceeding. But this is an area where averages can be misleading. Assume you’re planning a program that will yield a 5% increase in retention. This will probably be most effective among newer customers. But those customers probably have lower-than-average future LTVs (precisely because they are more likely to leave). This means the actual value gained from the program will be less than if its impact were spread evenly across the entire universe. In terms of LTV components, the expected future tenure of the newer customers is smaller than average, so the anticipated change in tenure (in absolute terms such as years per customer) would be smaller as well. (Of course, the actual impact is an empirical question—perhaps the retained customers will turn into fanatic loyalists who stay forever. Though I doubt it.)
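Here is the same argument with made-up numbers, just to show the arithmetic: the extra retained customers are worth far less when they come from the newer, lower-LTV group than when the lift is assumed to fall evenly across the base.

    # Illustrative only: invented numbers showing why segment-level baselines
    # matter before applying a program's expected retention lift.
    newer_ltv, mature_ltv = 100.0, 400.0        # future LTV per customer
    newer_n, mature_n = 10_000, 10_000
    extra_retained = 0.05 * (newer_n + mature_n)  # 5% lift = 1,000 extra customers kept

    # If the lift were spread evenly, each extra customer is worth the average LTV.
    avg_ltv = (newer_n * newer_ltv + mature_n * mature_ltv) / (newer_n + mature_n)
    even_spread_value = extra_retained * avg_ltv        # 1,000 * 250 = 250,000

    # If, as is likely, the lift lands mostly on newer customers, each extra
    # customer is worth much less.
    concentrated_value = extra_retained * newer_ltv     # 1,000 * 100 = 100,000

    print(even_spread_value, concentrated_value)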
The point here is that once you’ve identified the customer segments affected by a program, you need to calculate their baseline components and expected changes separately for each segment. Only then can you aggregate them for reporting purposes. And of course you’ll want to retain the segment detail to compare against the actuals once these start coming in.
You’ll also want the segment detail to help in the final stage of consolidation, when expected changes from each program are combined into an overall LTV forecast. The issue here is that many programs will impact overlapping sets of customers, and it would be unrealistic to expect their incremental effects to be purely additive. So managers need a way to calculate the consolidated impact of all expected changes and to reduce those that seem excessive. Doing this at a segment level is essential, and it requires that segment definitions be standardized across plans, since you otherwise won’t be able to consolidate the results cleanly.
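One simplistic way to sketch that consolidation, assuming standardized segment names and an agreed cap per component (the segments, components, and cap values below are all assumptions, not recommendations):

    # Sketch of one possible consolidation rule: add up each program's expected
    # change per standardized segment and component, then cap combined changes
    # that look excessive. Segments, components, and caps are all invented.
    from collections import defaultdict

    # One dict per program: (segment, component) -> expected change
    program_deltas = [
        {("0-12m web", "expected_tenure_years"): 0.30},   # loyalty mailing
        {("0-12m web", "expected_tenure_years"): 0.25},   # onboarding call program
        {("0-12m web", "revenue_per_year"):      15.0},   # cross-sell campaign
    ]
    caps = {"expected_tenure_years": 0.40, "revenue_per_year": 50.0}

    consolidated = defaultdict(float)
    for deltas in program_deltas:
        for key, change in deltas.items():
            consolidated[key] += change

    # Reduce combined changes that exceed the agreed maximum for a component.
    for (segment, component), change in list(consolidated.items()):
        consolidated[(segment, component)] = min(change, caps.get(component, float("inf")))

    print(dict(consolidated))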
An alternative consolidation method would work at the level of individual customers. Under this approach, the company would associate plans with individuals and then calculate aggregate results for each person. This is an intriguing possibility but probably beyond the capabilities of most firms.
Daunting as the consolidation process may seem, it’s worth recognizing that even conventional, project-based planning systems should do something similar. Which is not, of course, to say that they actually do.
It should be clear by now that a bottom-up approach to creating LTV forecasts is a substantial project. A much simpler approach would be to first create the traditional consolidated business forecast, and then derive the LTV components from that. The component forecasts could be created down to the same level of detail as the business forecasts: by division, product line, country, or whatever. This approach wouldn’t provide LTV impact forecasts for individual programs or customer segments. Nor would it force managers to view their programs in LTV terms while building those forecasts. But it’s an easier place to start.
However the forecasts are created, you need them to judge whether actual results are consistent with expectations. Again, this is no different from any other business management system: comparisons against history are interesting, but what really counts is performance against plan.
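For completeness, a tiny sketch of that plan-versus-actual comparison at the segment level, again with invented names and numbers:

    # Sketch: comparing actual LTV component values against plan, segment by
    # segment. Names and figures are invented for illustration.
    forecast = {("0-12m web", "expected_tenure_years"): 1.5,
                ("0-12m web", "revenue_per_year"):     260.0}
    actual   = {("0-12m web", "expected_tenure_years"): 1.4,
                ("0-12m web", "revenue_per_year"):     275.0}

    for key, planned in forecast.items():
        observed = actual.get(key)
        if observed is None:
            continue                      # no actuals reported yet for this cell
        segment, component = key
        print(f"{segment:12s} {component:22s} plan={planned:7.1f} "
              f"actual={observed:7.1f} variance={observed - planned:+7.1f}")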
Thursday, February 08, 2007