As readers of this blog are aware, I’ve reluctantly backed away from arguing that lifetime value should be the central metric for business management. I still think it should be, but I haven’t found managers ready to agree.
But even if LTV isn’t the primary metric, it can still provide a powerful analytical tool. Consider, for example, data quality. One of the challenges facing a data quality initiative is how to justify the expense. Lifetime value provides a framework for doing just that.
The method is pretty straightforward: break lifetime value into its components and quantify the impact of a proposed change on whichever components will be affected. Roll this up to business value, and there you have it.
Specifically, such a breakdown would look like this:
Business value = sum of future cash flows = number of customers x lifetime value per customer
Number of customers would be further broken down into segments, with the number of customers in each segment. Many companies have a standard segmentation scheme that would apply to all analyses of this sort. Others would create custom segmentations depending on the nature of the project. Where a specific initiative such as data quality is concerned, it would make sense to isolate the customer segments affected by the initiative and just focus on them. (This may seem self-evident, but it’s easy for people to ignore the fact that only some customers will be affected, and apply estimated benefits to everybody. This gives nice big numbers but is often quite unrealistic.)
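To make the arithmetic concrete, here is a minimal sketch (in Python) of that segment-level roll-up. The segment names and figures are invented purely for illustration; the point is simply that business value is the number of affected customers in each segment times the lifetime value per customer, summed across the affected segments only.

```python
# Minimal sketch of the segment-level roll-up; all figures are hypothetical.
segments = {
    # segment name: (number of customers affected, lifetime value per customer)
    "active_buyers": (40_000, 180.0),
    "lapsed_buyers": (15_000, 60.0),
}

# Business value = sum over affected segments of customers x LTV per customer
business_value = sum(count * ltv for count, ltv in segments.values())
print(f"Business value of affected segments: ${business_value:,.0f}")
```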
Lifetime value per customer can be calculated many ways, but a pretty common approach is to break it into two major components:
- acquisition value, further divided into the marketing cost of acquiring a new customer, the revenue from that initial purchase, and the fulfillment costs (product, service, etc.) related to that purchase. All these values are calculated separately for each customer segment.
- future value, which is the number of active years per customer times the value per year. Years per customer can be derived from a retention rate or a more advanced approach such as a survivor curve (showing the number of customers remaining at the end of each year). Value per year can be broken into the number of orders per year times the value per order, or the average mix of products times the value per product. Value per order or product can itself be broken into revenue, marketing cost and fulfillment cost.
Laid out more formally, this comes to nine key factors (a calculation sketch follows the list):
- number of customers
- acquisition marketing cost per customer
- acquisition revenue per customer
- acquisition fulfillment cost per customer
- number of years per customer
- orders per year
- revenue per order
- marketing cost per order
- fulfillment cost per order
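Here, as a rough sketch, is how those nine factors roll up into the formulas above. It is deliberately simplified: acquisition value is initial revenue minus acquisition marketing and fulfillment costs, future value is years times orders times net value per order, and discounting of future cash flows is ignored.

```python
def lifetime_value_per_customer(
    acquisition_marketing_cost: float,
    acquisition_revenue: float,
    acquisition_fulfillment_cost: float,
    years_per_customer: float,
    orders_per_year: float,
    revenue_per_order: float,
    marketing_cost_per_order: float,
    fulfillment_cost_per_order: float,
) -> float:
    """Acquisition value plus future value, per the breakdown above
    (no discounting, for simplicity)."""
    acquisition_value = (
        acquisition_revenue
        - acquisition_marketing_cost
        - acquisition_fulfillment_cost
    )
    net_value_per_order = (
        revenue_per_order - marketing_cost_per_order - fulfillment_cost_per_order
    )
    future_value = years_per_customer * orders_per_year * net_value_per_order
    return acquisition_value + future_value


def business_value(number_of_customers: int, ltv_per_customer: float) -> float:
    """The first factor (number of customers) times the per-customer value."""
    return number_of_customers * ltv_per_customer
```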
This approach may seem a little too customer-centric: after all, many data quality initiatives relate to things like manufacturing and internal business processes (e.g., payroll processing). Well, as my grandmother would have said, feh! (Rhymes with ‘heh’, in case you’re wondering, and signifies disdain.) First of all, you can never be too customer-centric, and shame on you for even thinking otherwise. Second of all, if you need it: every business process ultimately affects a customer, even if all it does is impact overhead costs (which affect prices and profit margins). Such items are embedded in the revenue and fulfillment cost figures above.
I could easily list examples of data quality changes that would affect each of the nine factors, but, like the margin of Fermat’s book, this blog post is too small to contain them. What I will say is that many benefits come from being able to do more precise segmentation, which will impact revenue, marketing costs, and numbers of customers, years, and orders per customer. Other benefits, impacting primarily fulfillment costs (using my broad definition), will involve more efficient back-office processes such as manufacturing, service and administration.
One additional point worth noting is that many of the benefits will be discontinuous. That is, data that's currently useless because of poor quality or total absence does not become slightly useful because it becomes slightly better or partially available. A major change like targeted offers based on demographics can only be justified if accurate demographic data is available for a large portion of the customer base. The value of the data therefore remains at zero until a sufficient volume is obtained: then it suddenly jumps to something significant. Of course, there are other cases, such as avoidance of rework or duplicate mailings, where each incremental improvement in quality does bring a small but immediate reduction in cost.
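The distinction matters when you model the benefit. Here is a sketch of the two shapes, with an entirely hypothetical 70 percent coverage threshold for the demographic example:

```python
def threshold_benefit(coverage: float, full_value: float,
                      threshold: float = 0.7) -> float:
    """Discontinuous case: the data is worth nothing until coverage of the
    customer base reaches a threshold, then the full benefit kicks in."""
    return full_value if coverage >= threshold else 0.0


def incremental_benefit(duplicates_removed: int, cost_per_duplicate: float) -> float:
    """Incremental case: every duplicate mailing avoided saves a small,
    immediate amount."""
    return duplicates_removed * cost_per_duplicate
```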
Once the business value of a particular data quality effort has been calculated, it’s easy to prepare a traditional return on investment calculation. All you need to add is the cost of the improvement itself.
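Reusing the lifetime_value_per_customer sketch above, and with entirely invented before-and-after figures, the ROI arithmetic looks something like this:

```python
# Hypothetical figures: the improvement slightly raises retention and order
# frequency and trims per-order marketing and fulfillment costs.
baseline = lifetime_value_per_customer(30, 100, 55, 3.0, 2.00, 80, 10.0, 45.0)
improved = lifetime_value_per_customer(30, 100, 55, 3.1, 2.05, 80, 9.5, 44.5)

affected_customers = 40_000
benefit = affected_customers * (improved - baseline)
improvement_cost = 250_000  # cost of the data quality project itself
roi = (benefit - improvement_cost) / improvement_cost
print(f"Benefit: ${benefit:,.0f}   ROI: {roi:.0%}")
```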
Naturally, the real challenge here is estimating the impact of a particular improvement. There’s no shortcut to make this easy: you simply have to work through the specifics of each case. But having a standard set of factors makes it easier to identify the possible benefits and to compare alternative projects. Perhaps more important, the framework makes it easy to show how improvements will affect conventional financial measurements. These will often make sense to managers who are unfamiliar with the details of the data and processes involved. Finally, the framework and related financial measurements provide benchmarks that can later be compared with actual results to show whether the expected benefits were realized. Although such accountability can be somewhat frightening, proof of success will ultimately build credibility. This, in turn, will help future projects gain easier approval.
Wednesday, June 20, 2007