Wednesday, January 17, 2007

Still More on Lifetime Value Models

A comment on yesterday’s post on lifetime value models makes the perfectly reasonable suggestion that people “develop a quick and dirty model, see what it can (and can’t) do for you… and iterate to the next level.” I made the foolish mistake of replying before my morning cup of coffee (not Starbucks, of course. But, come to think of it, signs all over my gym this week proclaim that they now “proudly brew” Starbucks coffee. I took this as aimed at me personally. The only thing missing is creepy posters with eyes that follow you everywhere. I digress.) In a caffeine-deprived state of orneriness, I replied that different types of lifetime value models use different technical approaches, so learning to build simple models may not teach very much about building complicated ones.

Whether or not this is correct, it does seem to contradict my comment in the original post that “there is really a continuum, so one type of model can evolve into another.” So I suppose a closer look at the topic is in order.

To focus the discussion a bit, note that I’m mostly concerned here with model complexity (the computational methods used to calculate lifetime value) rather than model scope (the range of data sources). Clearly data sources can be added incrementally, so continuity of scope is simply not in question. But can model complexity also grow incrementally, or do simpler techniques—say, what you can do in a spreadsheet—eventually run out of steam, so that you must switch to significantly different methods requiring different tools and user skills?

I still think the simpler techniques do eventually run out of steam, and I say this based on considerable experience. I’ve built many simple lifetime value models over the years, actually dating back to the days before spreadsheet software, when we used real paper spreadsheets—huge, green analysis pads with tiny grids that spread across an entire desk. Ah, those were the days, and we didn’t have any of your fancy Starbucks coffee then either. Again I digress.
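The kind of simple model a spreadsheet (paper or electronic) handles well boils down to the classic retention arithmetic: a constant retention rate, a constant margin per period, and a discount rate. Here is a minimal sketch of that calculation; the function name and all the input figures are hypothetical, chosen only to illustrate the mechanics:

```python
def simple_ltv(margin, retention, discount, periods):
    """Discounted margin from one customer over a fixed horizon.

    Assumes a constant per-period margin, a constant retention rate,
    and a constant discount rate -- the spreadsheet-style model.
    """
    value = 0.0
    survival = 1.0  # probability the customer is still active this period
    for t in range(periods):
        value += survival * margin / (1 + discount) ** t
        survival *= retention  # fewer customers survive each period
    return value

# Hypothetical inputs: $100 margin/period, 80% retention, 10% discount rate.
print(round(simple_ltv(margin=100, retention=0.8, discount=0.1, periods=10), 2))
```

Each row of such a model is just one multiplication and one division, which is exactly why it fits a spreadsheet so comfortably.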

The point is, a model using a spreadsheet, whether paper or electronic, can only get so complex before it becomes unwieldy. In particular, it gets difficult to model contingent behaviors: customers who act differently depending on their past experiences. The best you can do on a spreadsheet is divide the original group of customers into increasing numbers of segments, each with a different experience history. But the number grows exponentially: if you had just seven customer experiences, each with three possible outcomes (good, bad, indifferent), that would yield 3⁷ = 2,187 segments. And it’s really worse than that, because people can have the same experience more than once and you need to model multiple periods. Trust me, you don’t want to go there—I’ve tried.
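The arithmetic behind that explosion is easy to verify. A toy calculation of the seven-experience, three-outcome setup above (the outcome labels are just the ones from the text):

```python
from itertools import product

# One spreadsheet segment per distinct experience history.
outcomes = ["good", "bad", "indifferent"]
experiences = 7

segments = len(outcomes) ** experiences        # 3 ** 7
histories = list(product(outcomes, repeat=experiences))

print(segments)         # number of segments needed
print(len(histories))   # same count, enumerated explicitly
```

And that is for a single pass through seven experiences; allow repeats across multiple periods and the exponent keeps climbing.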

The next step is to use some sort of database and programming language. You can get pretty far with this sort of thing—I’ve done some of these as well—but it takes a whole different level of skill than using a spreadsheet. Most business analysts don’t have the training, time or inclination to do this sort of development. Even if you have one who does, it’s not good business practice to rely on their undocumented, not-necessarily-well-tested efforts. So at this point you’re looking at a real development project, whether for an IT department, an advanced analytics group (i.e., statisticians), or business intelligence staff. Certainly if you’re going to use the model in an enterprise reporting system, such as measuring the results of customer experience management, you wouldn’t want anything less.

But, as I hope I’ve convinced you in the past few days, a model accurate enough to guide customer experience management has to incorporate contingencies and other subtle relationships that capture the impact of one experience on future behaviors. It would be very tough for an in-house group to build such a model from scratch. More likely, they’d end up using external software designed to handle such things. Acquiring the software and building the models would indeed take many months. It would probably result in scrapping any internally-built predecessor systems, although the data gathering processes built for those systems could likely be reused.
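One way specialized software captures such contingencies is state-based simulation: each customer carries state from period to period, and the next period’s behavior depends on what has already happened. Here is a minimal illustrative sketch of the idea; the function, the transition rules, and every probability in it are invented for illustration, not taken from any real product:

```python
import random

def simulate_customer(periods, margin=100.0, seed=None):
    """Toy contingent-behavior model: churn risk depends on past experiences.

    All probabilities are hypothetical. The point is that churn_prob is
    *state* that evolves with each experience -- something a flat segment
    structure on a spreadsheet cannot easily carry from period to period.
    """
    rng = random.Random(seed)
    churn_prob = 0.10  # baseline churn risk per period
    total = 0.0
    for _ in range(periods):
        if rng.random() < churn_prob:
            break                      # customer leaves; no further value
        total += margin
        if rng.random() < 0.3:         # a bad experience this period...
            churn_prob = min(0.9, churn_prob + 0.15)   # ...raises future risk
        else:                          # a good or indifferent experience
            churn_prob = max(0.02, churn_prob - 0.02)  # ...lowers it slightly
    return total

# Estimate lifetime value by averaging many simulated customers.
values = [simulate_customer(periods=10, seed=i) for i in range(10_000)]
print(sum(values) / len(values))
```

Note that no segment table ever has to be enumerated: the contingency lives in the simulation state, which is why this approach scales where the spreadsheet approach explodes.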

In the sense of pure technology, therefore, I do see three pretty much discontinuous levels to lifetime value modeling. (I can actually think of other approaches, but they have limited applications.) The simpler techniques have their uses, but can’t support metrics for enterprise-wide customer experience management. That you can’t “start simple” to support an application isn’t unusual: think of customized book recommendations, which are only possible if there’s a sophisticated technology to back them up. Or consider just-in-time manufacturing, which requires a sophisticated enterprise resource planning system. Even Web search engines are useless if they don’t meet a minimum level of performance. Or…you get my point.

But customer experience metrics are just one use for lifetime value models. Plenty of other applications can be supported with simpler models. That’s what I wrote about yesterday. A logical corporate evolution would be to start with a simple value model and add complexity over time. Eventually the model becomes so cumbersome that it must be replaced with the next type of system. I suppose this scenario resembles what biologists call “punctuated equilibrium:” things grow slowly and incrementally for long periods, and then there is a sudden burst of major change. It may also relate to similar concepts from chaos theory. Makes a good title for a conference presentation, at any rate.

So I guess I was right both yesterday and today (somehow you knew I’d conclude that, didn’t you?). Companies can indeed evolve from one lifetime value model to another, even though the modeling techniques themselves are discontinuous.

This has some interesting management implications: you need to watch for evidence that you are approaching a conversion point, and recognize that you may need to shift control of the modeling process from one department to another when a conversion happens. You may even decide to let several different levels of modeling technology coexist in the organization, if they are suitable for different uses. Of course, this opens up the possibility of inconsistencies—“different versions of the truth”—familiar from other business intelligence areas. But trying to capture every aspect of an organization in one “great model in the sky” has drawbacks of its own. There are no simple solutions—but at least understanding the options can help you manage them better.
