I’ve had a series of conversations over the past few days regarding the distinction I made last November between “behavioral targeting” systems and “multivariate testing” systems. Both types of products tailor Web contents to individual visitors. Both work similarly: place code snippets in slots on the Web page where the personalized content will appear; when the page is loaded, send visitor information to a hosted server; run server-side rules to select the content; then return the selection to the Web page for display. The difference is in how they select the contents.
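To make the shared mechanics concrete, here is a minimal sketch of the server-side selection step. It is written in Python purely for illustration; the names, rules, and payload are my inventions, not any vendor’s actual API.

```python
# Minimal sketch of the shared flow: a page slot sends visitor data to a
# hosted server, server-side rules pick the content, and the selection is
# returned for display. All names here are illustrative, not a vendor's API.

def select_content(visitor):
    """Server-side rule: map a visitor profile to the content for one slot."""
    if visitor.get("past_purchases", 0) > 5:
        return "loyalty_offer.html"
    if visitor.get("referrer") == "search":
        return "search_landing_promo.html"
    return "default_banner.html"

# The page-side snippet would POST something like this when the page loads:
visitor_info = {"visitor_id": "abc123", "referrer": "search", "past_purchases": 2}
print(select_content(visitor_info))  # -> "search_landing_promo.html"
```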
The testing systems I’ve looked at closely (Optimost, Offermatica, Memetrics) rely on users to define customer segments and assign the contents shown to each segment. They usually assign multiple content items to test their performance. But the systems can also send targeted contents in non-test situations. It’s just a matter of specifying the default contents to serve each segment.
By contrast, the behavioral systems (Certona, Touch Clarity [recently purchased by Omniture], [x+1]) automatically build their own segments. Specifically, they create groups of visitors which are likely to respond to different contents. Thus both the segment definitions and segment-to-content match-ups evolve over time as the system gains more experience and, perhaps, as user behavior evolves.
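For flavor, here is a toy illustration of that kind of learning. It is not any vendor’s algorithm, just a sketch of the general idea: track response rates by visitor attribute and content item, exploit the current winner, and keep exploring so the match-ups can evolve.

```python
# Toy "learned" targeting in the spirit of the behavioral systems described
# above (not any vendor's actual method). Response rates are tracked per
# (visitor attribute, content) pair; the best performer is usually shown.
from collections import defaultdict
import random

stats = defaultdict(lambda: {"shown": 0, "responded": 0})

def choose(attribute, contents, explore=0.1):
    """Mostly exploit the best-performing content; sometimes explore."""
    if random.random() < explore:
        return random.choice(contents)
    def rate(content):
        s = stats[(attribute, content)]
        return s["responded"] / s["shown"] if s["shown"] else 0.0
    return max(contents, key=rate)

def record(attribute, content, responded):
    """Feed each outcome back so the match-ups evolve over time."""
    s = stats[(attribute, content)]
    s["shown"] += 1
    s["responded"] += int(responded)
```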
What was interesting, and frustrating, about my recent conversations was that my counterpart kept insisting that the testing systems do not allow segment-based targeting. He even showed me a recent analyst report that said this. Having personally researched Memetrics and Offermatica and taken a close look at the Optimost Web site, I know this is wrong. Of course, I’ve been around long enough to know you can’t trust anything but your own eyes where software is concerned (and sometimes not even those!). So the misinformation—which I’m certain was unintentional—was no surprise.
More intriguing was the realization that sophisticated testing systems could probably charge more if they positioned themselves as targeting tools. Apparently the prices for behavioral targeting products are higher—perhaps because at least some of them base their fees on incremental profits earned for their clients (I know Certona does this). Maybe the testing systems are missing other functions needed for targeting. But if they really could raise their fees by repositioning themselves, it looks like a missed opportunity.
Tuesday, February 27, 2007
Friday, February 23, 2007
Why Customer Experience Management Depends on Metadata
I’ve written a great deal recently about the importance of Lifetime Value as a measure to guide customer experience management. Let’s assume I’ve made my case, or at least no one has any interest in arguing about it. The next question would be, what blocks companies from making Lifetime Value calculations?
In one sense, the answer is nothing. You can generate a crude LTV figure with nothing more than annual profit per customer and attrition rate. But that type of calculation isn’t precise enough to measure the effect of changing a particular customer treatment. If we want to use LTV as a customer experience metric, we need an LTV calculation that works at the level of customer experiences.
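For the record, the crude version really is a one-line formula: with a constant attrition rate, expected customer lifetime is roughly one divided by that rate. A minimal sketch, ignoring discounting and every other refinement:

```python
# Back-of-envelope LTV from just two inputs. With a constant attrition rate,
# expected customer lifetime is 1/attrition, so:

def crude_ltv(annual_profit_per_customer, annual_attrition_rate):
    """Crude lifetime value; ignores discounting and behavior changes."""
    return annual_profit_per_customer / annual_attrition_rate

print(crude_ltv(100.0, 0.25))  # $100/year at 25% attrition -> $400
```

An experience-level calculation needs far more than this, which is where the real work begins.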
This calculation has two components: the model and the data used as input. Designing the model takes some skill but isn’t that hard for people who do such things. The real challenge is assembling the experience data itself.
Some of this data just isn’t directly available. Experiences such as cash purchases at retail, anonymous Web site visits, and viewing of TV advertisements can’t be linked to individual customers. They must either be inferred or left out of the model altogether.
But in many industries today, the majority of significant experiences are captured in a computer system. The information might be in structured data such as a purchase record, or it might be something less structured such as an email message or Web page. Taking these together, I think most businesses capture enough experience data to build a detailed LTV model.
This data must be processed before it is usable. The processing involves three basic tasks: extracting the data from the source systems; linking records that belong to the same customer; and classifying the data so it can be used in a model.
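As a hypothetical illustration of the second task, linking can start with something as simple as a normalized match key; real identity resolution layers fuzzy matching and survivorship rules on top of this.

```python
# Minimal sketch of the linking step: grouping records that belong to the
# same customer. Matching on a normalized email address is just the simplest
# possible case; production systems go far beyond it.
from collections import defaultdict

records = [
    {"email": "Pat@Example.com", "event": "purchase", "amount": 120.0},
    {"email": "pat@example.com ", "event": "support_call"},
    {"email": "lee@example.com", "event": "web_visit"},
]

by_customer = defaultdict(list)
for rec in records:
    key = rec["email"].strip().lower()  # normalize before matching
    by_customer[key].append(rec)

print(len(by_customer["pat@example.com"]))  # -> 2 linked records
```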
None of these tasks is trivial. But the first two are well understood problems with a long history of effort at solving them. In comparison, classification for LTV models has received relatively little attention. This is simply because the models themselves have not been a priority. Other types of classification—say, for regulatory compliance or fraud detection—are quite common.
The classification issue boils down to tagging. Each experience record must be assigned attributes that fit into the LTV model input categories. At Client X Client, we do this in terms of the Customer Experience Matrix. The Matrix has channel and life stage as its two primary dimensions, although the underlying structure also includes locations, systems, slots, products, offers, messages, customer, and context. Tagging each event with these attributes lets us build the Lifetime Value model and make other analyses to understand and optimize the customer experience. (Incidentally, although I’ve used the terms classification and tagging here, you can also think of this as application of metadata.)
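To illustrate, here is a toy tagger. The attribute values loosely follow the Matrix dimensions just named, but the schema is invented for this example, not an actual Client X Client specification.

```python
# Sketch of the tagging step: assigning each experience record the attributes
# an LTV model needs. Channel and life-stage values are illustrative only.

def tag_event(event):
    """Attach classification metadata to a raw experience record."""
    tags = {"channel": None, "life_stage": None}
    if event["source_system"] == "web_log":
        tags["channel"] = "web"
    elif event["source_system"] == "pos":
        tags["channel"] = "retail"
    tags["life_stage"] = "acquisition" if event.get("first_contact") else "retention"
    return {**event, **tags}

event = {"source_system": "pos", "first_contact": False, "amount": 42.0}
print(tag_event(event))  # now carries channel="retail", life_stage="retention"
```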
My point is that while tagging may seem a trivial technical issue, it is actually a critical missing link in the chain of customer experience management success. And that’s why I just spent 500 words writing about it.
Thursday, February 22, 2007
Assessing an Array of Analytics Application Acquisitions
To an industry analyst, one event is interesting, two are a coincidence, and three makes a trend. Last week saw (at least) three acquisitions of analytics vendors: TouchClarity by Omniture, Decisioneering by Hyperion, and Pilot Software by SAP. So what trend are we witnessing?
Actually, the answer is quite obvious: companies are trying to add more intelligence to their products. This particular trend has been under way for a long time. What’s important about it from a customer experience management viewpoint is it shows the vendors believe their customers are looking for more advanced analytic solutions.
This wasn’t always the case. Although Pilot is a fairly generic “operational performance management” system (nothing wrong with that), both Decisioneering and TouchClarity involve some pretty sophisticated forecasting. Until recently, most managers who were not themselves statisticians have been leery of such tools. It’s possible they still are, but presumably Hyperion and Omniture are responding to some sort of demonstrated demand. I’ll be optimistic and assume this means that managers are now more willing to employ advanced analytic software even if they may not quite understand what is going on under the hood.
This is unusual. Most managers are control-centered and risk-averse. Presumably they have been convinced to overcome these tendencies based on hard evidence that systems like this bring real business benefits.
This is good for customer experience management because CEM value calculations also rely on complicated forecasts. Managers’ willingness to employ systems that do such forecasts suggests they will be more receptive to accepting CEM forecasts as valid. Managers’ interest in these systems also suggests a more analytical orientation in general. This is another hopeful sign that they will accept CEM measurements such as Lifetime Value as tools for guiding business decisions.
Wednesday, February 21, 2007
JetBlue's Problems from a Customer Experience Management Perspective
It feels like kicking them while they’re down, but the JetBlue story continues to fascinate me. Like other well-regarded brands facing a crisis, they’ve responded forcefully. This in itself is good, since it means they are controlling the story rather than leaving the media to dig around for more horror tales. And JetBlue’s specific response—to promulgate a “Customer Bill of Rights”—is very much in line with their core brand position as customer champions. If they pull this off, their problems last week may actually end up reinforcing rather than diluting that image.
So it seems that JetBlue is handling the public relations part of this quite admirably. But looking at their actions from a customer experience management perspective leads me to some questions.
The substance of JetBlue’s Bill of Rights, and certainly the part that’s receiving most of the press coverage, is to offer vouchers for future travel when passengers face delays. There is some legalistic hedging that limits the compensation to a “Controllable Irregularity”, which I think means problems that are JetBlue’s fault. That already seems like evading responsibility. But my broader question is whether credits against future purchases are really the best response when a customer has been treated poorly.
Yes, I know this is a very common practice. The underlying logic is that it gives the customer a reason to come back for another sample of the product. Nor does it hurt that the actual cost is much lower than the face value of the vouchers. Still, from a customer experience viewpoint, the last thing you want at the end of a horrific plane ride is the opportunity to go on another one. It’s even worse if, instead of being handed the coupon upon arrival, you are simply given a verbal promise and then have to wait to receive it in the mail. This adds another level of stress to the experience: Will I get it? Will the amount be correct? What will the fine print say? JetBlue’s sliding scale, where the size of the voucher is based on the length of the delay, seems to ensure the latter, since the amount can’t even be calculated until the delay is over.
Personally, I’d rather they hand me a coupon for a stiff drink, which is what I really need after that trip. Or, since there are often children involved, treat the family to a meal at McDonald’s or Pizza Hut. Of course, this runs the risk of insulting people with the paucity of the compensation: I can already see the t-shirt “my flight was stuck on the runway for nine hours and all I got was some lousy French fries”. (Come to think of it, how about showing a sense of humor and handing out a custom-printed t-shirt? “I survived nine hours on the tarmac on JetBlue flight 1099, February 14, 2007.” On second thought, maybe not.)
JetBlue would do better to focus on improving the experience itself. In this case, that means both avoiding delays and, when they do happen, making them as bearable as possible. This comes back to my comment of the other day that ultimately these are operational, not marketing, issues. JetBlue has acknowledged that its problems were greatly exacerbated by cascading operational failures, particularly in mobilizing customer service staff and flight crews. It has also promised to address those problems. No doubt it will do so.
But JetBlue should also publicize its efforts—showing the great lengths it goes to in serving its customers. This will do more than compensatory vouchers to reinforce JetBlue’s core positioning as a customer service champion. In fact, a good advertising campaign along these lines can create a perception of a significant difference from other airlines that goes deeper than leather seats and in-flight TV. It will also show JetBlue’s employees that the problems are being fixed and encourage them to do whatever they can personally to make passengers’ lives better when problems occur.
I can’t claim this is an original idea. It’s precisely what Federal Express and United Parcel Service have done for years. (Need I mention that airlines and package delivery are both part of the transportation industry? Who hasn’t wished their air travel could be as reliable as overnight shipping?)
In short, JetBlue can focus on improving its product or on repairing its mistakes. The better option is clear. Will they take it?
Tuesday, February 20, 2007
What's Really Holding Back Customer Experience Management?
Epsilon’s Ron Shevlin is worried about the future of Customer Experience Management. In a post today, he concludes, “If proponents don’t come together to reconcile the conflicts in their frameworks and provide credible high-profile case studies that capture the attention of senior execs — before the end of 2008 — then CEM won’t be a term we’ll hear a lot about come 2010.”
I actually think there are plenty of successful cases for Customer Experience Management: Amazon.com, Disney, Dell, Federal Express, Starbucks, JetBlue (until last week), and Apple iPod spring to mind. It’s interesting that the bloom is off many of those roses but that doesn’t negate their past successes. Nor do the conflicting frameworks bother me. I see them as a sign of commercial interest and continued conceptual evolution.
My own theory is that CEM has failed to become a must-have management technique because it seems optional. CEM is viewed as one possible strategy among many, and one that’s harder to pull off than other choices such as cutting costs. What we really need is not success stories but failure stories: tales of companies driven out of business by the superior customer experience of a competitor. Human nature being what it is, fear of loss is a more powerful motivator than a chance for gain.
But fear must be accompanied by a path to success. This is where methodologies, frameworks and, yes, even consultants become important. There is no lack of these in the world of Customer Experience Management. I do think there is a shortage of metrics that connect customer experience to business profits. Profits, as you may have noticed, are what managers ultimately care about. This is why I spend so much time on Lifetime Value, which I see as the best candidate for this role.
But I'm not foolish enough to think that better metrics will make CEM the Next Big Thing. That will take senior managers becoming scared that if they don't pay attention, their business is in jeopardy. In that sense, maybe JetBlue's recent troubles are really the best thing that could happen.
Monday, February 19, 2007
Where Is JetBlue's Brand Value of Yesterday?
JetBlue Airways’ highly publicized problems in recovering from last week’s snow storm raise a fundamental issue about the value of brands. JetBlue had built a strong image for being customer-friendly in an industry that is notoriously customer-hostile. Perhaps it will recover this image and perhaps its recovery will be helped by that reservoir of good will known as brand equity. But perhaps it won’t recover: if enough attention is focused on the recent problems, this will eventually become the dominant impression of JetBlue. And in that case, all the previous brand equity will have melted like, well, a snowflake.
The psychology of this is interesting. People tend to perceive what they expect, which means they reject contrary evidence (or, more precisely, ignore it). But if enough contrary news accumulates, it eventually overrides the old expectations and forces people to adopt new ones. Future information then is accepted or ignored in line with the new expectations, reinforcing them just as previous selective perceptions had reinforced the old ones. Of course, in JetBlue’s case, this is all magnified by the media, which will now look for and report on problems that it would previously have ignored. When this sort of switch happens, the old brand equity is lost.
But if brand equity can vanish overnight, does it make sense to treat brand marketing as a long-term investment? Frankly I’ve never been very comfortable with this treatment, simply because it’s so obvious that the public attention span is so short. In a world where many ads are ignored altogether and the impressions that do register are quickly overwhelmed by whatever is presented next, how much long-term impact can we really expect from advertising?
I think the answer is quite little, but definitely some. Analyses such as marketing mix models do find residual effects from brand advertising, although these are measured in months not decades. So it does make sense to amortize the cost of brand advertisements over a period beyond the campaign itself, although perhaps not a very long one.
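For readers who haven’t seen a mix model up close, one common way these analyses represent residual effects is a geometric “adstock” carryover, where each month retains a fraction of the prior month’s advertising pressure. A toy sketch with invented numbers:

```python
# Geometric "adstock": each month carries over a fraction of last month's
# effective ad pressure. The spend figures and 0.6 carryover are made up.

def adstock(spend_by_month, carryover=0.6):
    """Effective ad pressure per month with geometric decay."""
    effect, result = 0.0, []
    for spend in spend_by_month:
        effect = spend + carryover * effect
        result.append(round(effect, 1))
    return result

# A single burst of spend fades within months, not decades:
print(adstock([100, 0, 0, 0, 0, 0]))  # -> [100.0, 60.0, 36.0, 21.6, 13.0, 7.8]
```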
Nor is this changed by the fact that a catastrophe can erase brand value overnight. A factory can burn down too, ending its future value. You wouldn’t refuse to treat it as an asset for that reason.
Incidentally, there is another moral to the JetBlue story: operational performance is more important than customer friendliness. It’s all part of the customer experience. I know you knew that—but other people occasionally need reminding.
Friday, February 16, 2007
Web Analytics In One Hour? I Don't Think So.
I’m still reflecting on speed-trap's promise to provide “all the data you need” in “under an hour”. It’s not so much that I’m skeptical about the time required—maybe it’s really possible in a simple situation, and people don’t take such claims too seriously anyway. Nor does it bother me that speed-trap turns out to rely on cookies for visitor identification: although that's definitely an imperfect solution, it's still the best one available.
What really concerns me is the notion that speed-trap can create a meaningful analytical data set without human involvement. Speed-trap doesn’t quite say this, but it’s implied in the claim that the system can be ready in an hour. Yet a closer look at speed-trap’s own documentation shows this isn’t the case at all. As with any solution, the raw interaction data must be processed to become useful. Speed-trap does this by having users write “nano-programs” that search for patterns, classify sessions, apply labels, group related items, and aggregate results across time periods. Users must also define customer segments, set privacy rules, and set up connections to reference databases. It’s a safe bet no one does all this within sixty minutes.
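To give a sense of the work involved, here is what session classification might look like in plain Python. To be clear, this is not speed-trap’s nano-program syntax, which I haven’t seen; it simply shows why such rules take more than an hour to think through.

```python
# Hypothetical session classifier: label a visit from the pattern of pages
# it contains. Rules and labels are invented for illustration.

def classify_session(pages_viewed):
    """Assign one behavioral label to a session."""
    if any("checkout" in page for page in pages_viewed):
        return "buyer"
    if any("support" in page for page in pages_viewed):
        return "support_seeker"
    if len(pages_viewed) == 1:
        return "bounce"
    return "browser"

print(classify_session(["/home", "/products/123", "/checkout"]))  # -> "buyer"
```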
There's nothing wrong with that. This sort of work actually adds value by forcing users to seriously examine their data and analysis methods. Speed-trap should make these points more openly to avoid setting unrealistic expectations about its own system and about analytic technology in general.
Thursday, February 15, 2007
Speed-trap and SAS Promise More Accurate Web Analytics
Here’s an intriguing claim: a February 2 press release from SAS UK touts “SAS for Customer Experience Analytics” as linking “online and off-line customer data with world class business intelligence technology to deliver new levels of actionable insight for multi-channel organisations.” But the release and related brochure make clear that the heart of the offering is Web analytics technology from a British firm named speed-trap.
Speed-trap employs what it says is a uniquely accurate technology to capture detailed customer behavior on a Web site. It works by providing a standard piece of code in each Web page. This code activates when the page is loaded and sends a record of the displayed contents to a server, where it is stored for analysis.
Although this sounds similar to other page tagging approaches to Web data collection, speed-trap goes to great lengths to distinguish itself from such competitors. The advantage it promotes most aggressively is that the same tag is used in all pages, and this tag does not need to pre-specify the attributes to be collected. Yet this is not truly unique: ClickTracks also uses a single tag without attribute pre-specification, and there may be other vendors who do the same.
But there does seem to be something more to the speed-trap solution, since speed-trap captures details about the contents displayed and user events such as mouse clicks. My understanding (which could be wrong) is that ClickTracks only records the URL strings sent by site visitors. The level of detail captured by speed-trap seems more similar to TeaLeaf Technology, although TeaLeaf uses packet sniffing rather than page tags.
Speed-trap’s white paper “Choosing a data collection strategy” provides a detailed comparison against page tagging and log file analysis. As with any vendor white paper, its description of competitive products must be taken with a large grain of salt.
Back to the SAS UK press release. Apart from speed-trap, it seems that what’s being offered is the existing collection of SAS analytical tools. These are fine, of course, but don’t provide anything new for analysis of multi-channel customer experiences. In particular, one would hope for some help in correlating activities across channels to better understand customer behavior patterns. Maybe it’s just that I’ve been drinking our Client X Client Kool-Aid, but I’d like to see channel-independent ways to classify events—gathering information, making purchases, searching for support, etc.—so their purpose and continuity from one channel to another become more obvious. Plus I think that most people would want real-time predictions and optimized recommendations as part of “actionable insight”—something that is notably lacking from SAS’s description of what its solution provides.
Bottom line: speed-trap is interesting, but this is far from the ultimate analytical offering for multi-channel customer experience management.
Wednesday, February 14, 2007
Cellphone Ads Miss the Point
I know I promised to stop writing about mobile phones, but the headline “The Ad-Free Cellphone May Soon Be Extinct” in today’s New York Times (February 14, 2007, Business Day, page C5) is irresistible. It seems that 60,000 people have assembled in Barcelona, Spain for the industry’s main annual conference, and nearly all of them are salivating over the ad revenue they might earn. The article grasped several key attributes that make mobile phones special—detailed information about users, location awareness, a built-in payment mechanism, and always being turned on.
But the article, and presumably the industry leaders it was reporting on, still seemed stuck in the conventional model of advertising as messages you push at consumers. Yes, there is potential for sponsored content that could reduce consumer costs by getting advertisers to pay some of the freight. Yes, those advertisements can be targeted with fantastic precision given the information available about cell phone users.
But a cell phone is not a tiny portable television. It is a two-way communication device that can be linked with computers and other humans. This makes possible applications that are inherently engaging, whether they help people to buy stuff or help them connect with others. I wrote about this in more detail here, here, and in other posts.
Yet the point bears repeating: cell phones, and their smart phone successors, represent the greatest opportunity since the Internet itself to enhance your customers’ experience. It’s not a question of whether this will happen, but which firms will be smart enough to get the benefit.
Will yours be one of them?
Tuesday, February 13, 2007
Uses of Lifetime Value - Part 6: Final Thoughts
These past five posts have been a jolly romp through the delights of lifetime value. I think the main applications have been laid out clearly enough that I need not review them here in any detail. (For the record, they are: LTV itself, LTV components, comparisons of components, forecasts based on components, and, somewhat tangentially, simulation models.) But there have been several underlying themes that might benefit from explicit elucidation (plus I just like the word ‘elucidation’).
The Lifetime Value measure that counts is really aggregate Lifetime Value. That is, when we talk about trying to increase lifetime value, we usually want to increase the sum of the lifetime values of all customers (i.e., average lifetime value x number of customers). Since lifetime value is really future cash flow, this is simply saying we want to maximize the future cash flow of the company. Nothing radical about that. But it’s one of those things that are so obvious that you sometimes forget about them. Then you find yourself watching the wrong metric, such as LTV per customer. If the difference isn’t clear, think about acquisition programs: assuming the same cost, a program that brings in 100 customers worth $5 each ($500 LTV) is more valuable than one that brings in 50 customers worth $6 each ($300 LTV).
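Spelled out in code, that comparison looks like this:

```python
# The acquisition comparison above, in executable form. Aggregate LTV
# (average LTV x number of customers) is what the programs should be judged on.

def aggregate_ltv(num_customers, ltv_per_customer):
    return num_customers * ltv_per_customer

program_a = aggregate_ltv(100, 5.0)  # -> 500.0
program_b = aggregate_ltv(50, 6.0)   # -> 300.0
print(program_a > program_b)  # True: A wins despite lower LTV per customer
```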
There are many ways to define Lifetime Value and the value of each is constantly changing. I went through this in some detail yesterday so I won’t repeat myself here. What’s important is to recognize that lifetime value is highly dynamic, so you can’t safely put a single figure in your head and apply it in many situations. Instead, you have to consider the purpose of each analysis and make sure you apply a definition that is appropriate. Then you have to remember that the value based on that definition will almost certainly change tomorrow—so don’t fool yourself into thinking any LTV figure is meaningfully precise.
Lifetime Value figures can be broken down by year. Again, see yesterday’s discussion of value buckets for the details. Sometimes a consolidated long-term figure such as “five year net present value” is the right one to look at, but often you really want to know about a shorter time frame. You don’t need to ignore lifetime value in those situations; you simply need to expose the time-sliced details that were used to generate the long-term value. The obvious example here is comparing actual LTV components against planned values: by looking at the one year component within the LTV calculations, you can isolate results during the past year. Otherwise, you risk being confused by figures that also include estimates for activities in the future.
You need an LTV system. I’ve referred to this numerous times without describing it explicitly. But it’s obvious you need some way to generate all those permutations of LTV: the different definitions (past, future, total); the different time periods; the different segments (defined ad hoc, no less); the time slices within each value; and so on. Basically, the LTV system I have in mind (and, yes, I’ve built them) captures historical information at the customer or granular segment level, and then lets you combine the data on demand to generate the various LTV values and components for whatever segments you create. Reports should include and compare multiple segments, such as customers from different sources or start years. Users should also be able to import plan or forecast values generated by an external modeling system, and compare actual results against these as well. The key to all this is that the LTV calculations must be performed automatically, without hand-tuning by an analyst. This places some limits on the sophistication of the calculations themselves, but that’s a small price to pay for the flexibility of on-demand results.
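At its core, such a system is just on-demand aggregation over customer-level history. A toy sketch, with invented field names and values:

```python
# Toy version of the on-demand calculation an LTV system must automate:
# store history at the customer level, then aggregate for any ad hoc segment.

customers = [
    {"id": 1, "source": "search", "start_year": 2005, "ltv": 420.0},
    {"id": 2, "source": "search", "start_year": 2006, "ltv": 310.0},
    {"id": 3, "source": "referral", "start_year": 2006, "ltv": 550.0},
]

def segment_ltv(rows, **criteria):
    """Average LTV for whatever segment the criteria define."""
    members = [r for r in rows if all(r[k] == v for k, v in criteria.items())]
    return sum(r["ltv"] for r in members) / len(members) if members else None

print(segment_ltv(customers, source="search"))   # -> 365.0
print(segment_ltv(customers, start_year=2006))   # -> 430.0
```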
A simulation model is separate from the LTV system. Simulation models are very important for building business plans, creating forecasts, understanding relationships among interactions, and optimizing customer treatments. Model outputs can also include LTV figures and components. Within some constraints, a simulation model could also use LTV values as inputs and calculate the model assumptions (e.g. attrition rates) needed to generate those values. But despite these connections, the simulation model is quite distinct from the LTV system: it does different things in different ways. My earlier posts may have clouded this separation.
Project assessments must be based on incremental impact. This applies more to simulation models than LTV analysis, but it’s still important. The value of any project—marketing, customer service, manufacturing, whatever—is judged in comparison with the result of not doing that project. This is a particular challenge where customer treatments are concerned, since many factors affect customer behavior and it’s hard to isolate the effect of one change. The connection with LTV is two-fold: first, change in LTV should be the ultimate metric to judge the value of any change; and, second, a good LTV system may make it easier to compare the performance of treated vs. non-treated segments (assuming these can be isolated in the first place). But the main reason to bring up this issue is simply to point out that predicting or measuring incremental impacts is inherently difficult, regardless of whether you use LTV.
LTV components are good key performance indicators, but not the only ones. LTV components are good KPIs because they can be linked to real financial value (aggregate LTV). This means the impact of any change can be calculated directly, simplifying prioritization and helping managers to understand how different components are related. But it doesn’t follow that every KPI should have a measurable impact on LTV. Some important measures have no direct connection: imagine, for example, tracking the status of system development projects or employee training programs. These activities will eventually have a financial impact, but the measures themselves cannot be tied directly to financial results. So, despite the superficial attractiveness of the notion, there is in fact no reason to insist that only LTV components be used as KPIs.
Multiple LTV applications reinforce the value of LTV as a management tool. This is the ultimate point of this series of posts. The more ways LTV is used throughout the company, the more everyone will focus on customer value. On a financial level, broader usage of an LTV system will help to justify its costs. But the real benefit is having people throughout the company share an understanding of the importance of customer value and of how their activities contribute to it. The more they care about customer value, the more successful the company will be.
Monday, February 12, 2007
Uses of Lifetime Value - Part 5: Trend Analysis
This series of posts has followed what should seem like a logical progression: using the Lifetime Value figure; using components within that figure; comparing component values over time, across segments, and against forecasts; creating the forecast values with models; and using the models for simulation, planning, and optimization. Since optimization is the ultimate goal of management, that should be the end of the discussion.
But it isn’t.
You probably noticed that the focus of the discussion shifted midstream from LTV to modeling. That’s not wrong: you do need models to predict the customer behaviors that determine LTV and to do the simulations for planning and optimization. But the discussion of forecasts can also lead in another direction, centered not on models but on building forecasts from the LTV components themselves. So let’s backtrack a bit.
The first point, which may seem a bit pedantic, is that I’ve been somewhat sloppy when describing the comparison of current values against forecasts. As the context makes clear, the forecasts I had in mind were estimates of future component values prepared during a planning process. Most businesspeople would probably refer to those as “plan” values, leaving the term “forecast” to refer to revised estimates based on the latest available data.
I don’t think my ambiguity caused any serious harm. But it does highlight the potential for confusion when the term “forecast” is used in conjunction with lifetime value. This confusion exists on several levels.
The first is simply that LTV sometimes refers to the value since the start of a customer relationship, sometimes to future value, and sometimes to a combination of previous and future value. These distinctions were described earlier and are easy to understand. They only create confusion when people don’t know which definition is under discussion. For most applications, future LTV is the appropriate one, but I (and others) often don’t bother to state that explicitly.
Of course, the future LTV is a forecasted value, even though it’s based on current data. This is the second type of confusion: not recognizing that even “current” LTV figures incorporate projections of future activity. Again, no particular harm done, except to perhaps give an impression of more certainty than actually exists.
The third level of confusion relates to which set of customers is being measured. The average LTV (past, future or total) of your existing customers is almost certainly not the same as the LTV you’d calculate using current component values. Put another way, historical results yield different component values than recent results. Easy to understand, once you think about it. But which numbers do you define as “current” when comparing them against the plan or forecast?
As all these examples make clear, there is no one right answer to the question of what’s the “real” LTV. Different calculation methods will make sense in different circumstances. This isn’t a problem, but it does mean you have to be conscious of which method you’ve chosen, ensure comparative calculations are consistent, and document your method in case anybody wants to know the details. It’s true that few casual users will ever actually ask. But that just means you have to try still harder to use a method that is consistent with their intuitive expectations.
To help understand the calculation options more clearly, let’s step back and take a look at where LTV figures come from. The basic definition of LTV is the net cash flows associated with a customer. This is usually limited to a specific time period and discounted for net present value. Thus, you can think of LTV as a series of buckets, one for each year, where the level of water in the bucket represents the cash brought in during that year. The buckets themselves may be subdivided—perhaps, like the Internet, they are a series of tubes—into marketing cost, revenue, cost of goods, service expenses, and so on. (It’s not exactly clear how you deal with expenses in this metaphor. Perhaps they are holes beneath the buckets, or perhaps some kind of water-absorbing material. Let’s not get too literal.)
The critical thing to realize is that each group of customers who start during a given period has its own series of buckets. Buckets that relate to past years are “filled” with actual values; buckets for future years are “filled” with estimates. Every year, one bucket switches from estimated to actual. To calculate a group’s LTV, you combine the values of all buckets for that group.
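Setting the plumbing metaphor aside for a moment, the bucket arithmetic itself is simple. A sketch with invented cash flows and an assumed 10% discount rate:

```python
# Each start-year group has one bucket per year, some actual and some
# estimated; group LTV is the discounted sum of all of them.

def group_ltv(buckets, discount_rate=0.10):
    """Net present value of a group's yearly cash-flow buckets."""
    return sum(cash / (1 + discount_rate) ** year
               for year, (cash, _status) in enumerate(buckets))

# Group that started two years ago: years 1-2 actual, years 3-5 estimated.
buckets = [(50.0, "actual"), (40.0, "actual"),
           (35.0, "estimated"), (30.0, "estimated"), (25.0, "estimated")]
print(round(group_ltv(buckets), 2))  # one group's five-year NPV
```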
Even though each group gains just one “actual” bucket per year, there are many “actual” buckets that get filled (belonging to different groups). For example, customers in the group that started one year ago get their “year 1” actual bucket filled; customers who started two years ago get their “year 2” bucket filled, and so on. One definition of “current” LTV is the combination of the values in all the “actual” buckets that have just been filled. This makes sense because it uses the most recent data, but it does involve mixing results from different starting groups.
Such mixing can be problematic, particularly if the nature of your customers has changed over time. One partial solution is to identify homogeneous segments within the customer base and track results for each segment within each start group. You can them combine the most recent results for members of the same segment with different start years to calculate an estimated LTV at the segment level. Segment values can then be combined in a weighted average to get an over-all LTV. Of course, now you have to decide what quantities to use for your weights.
Hopefully this clarifies why there is no one “right” method to calculate LTV. But there’s a further lesson: LTV figures are always changing. New buckets are always being added and some buckets are always changing from estimated to actual. So LTV is dynamic indeed.
In a way, this is no different from other business measures. “Profit” is also always changing as new transactions are recorded. LTV is a little worse because profit stays fixed once the books for a period are closed, whereas LTV for a previous period includes forecasted values (depending of course on the calculation method) that could require later adjustment as subsequent actuals are received. As with profit, you may choose to restate the previous period figures if there is a major discrepancy, or you may choose to book an adjustment in a subsequent period. Either way, it’s important to realize just how fluid an LTV figure can be.
That was a long digression but hopefully it clarified the roles of forecasts in LTV calculations. Now we can turn to using LTV components as forecasting tools.
This is not the same as estimating future business results using the current LTV component values. That would be done with the simulation models described earlier. Rather, this is about estimating the future of the component values themselves based on trends in their changes to date. For example, you might find that this year’s acquisition cost is $50 per customer and it has been increasing $5 per year over the past three years. After controlling for source mix, volume, and whatever other factors you can identify, you might expect that trend to continue. You would therefore estimate next year’s LTV using an acquisition cost of $55 per customer. Trends in other components would yield similar estimates for next year’s values. From these, you would derive other figures (income statement, cash flow, etc.) and estimate the LTV itself.
The advantage of this approach is that you don’t need an elaborate simulation model or trend identification system. Simply identifying the changes in LTV components lets you calculate the impact of those changes on aggregate lifetime value (LTV per customer x number of customers). You can then rank those impact values to determine which changes are most important to examine more deeply. Even the most sophisticated simulation model doesn’t do this, since it can only calculate the outcomes based on the assumptions you feed into it.
The first step in examining LTV trends is controlling for known factors. A change in source mix could easily change the aggregate component values even if behavior within each source remained stable. (A stable aggregate value could also mask significant changes within particular segments—another reason to do segment-level analysis.) Some changes might also reflect discernable causes, such as a price increase, whose future impact can be estimated directly. But even after the known factors are considered, there may be other changes which cannot be explained. If these represent significant trends, their continuation should be built into estimates of future behavior.
To get back to the progression of applications I mentioned earlier: we can now revise it to be LTV; LTV components; comparisons of components across segments; and forecasts based on trends in components. Simulation modeling for planning, optimization and to estimate LTV component values represents a stream of related activity, but perhaps is not an LTV application in itself.
Then again, tomorrow is another day.
But it isn’t.
You probably noticed that the focus of the discussion shifted midstream from LTV to modeling. That’s not wrong: you do need models to predict the customer behaviors that determine LTV and to do the simulations for planning and optimization. But the discussion of forecasts can also lead in another direction, centered not on models but on building forecasts from the LTV components themselves. So let’s backtrack a bit.
The first point, which may seem a bit pedantic, is that I’ve been somewhat sloppy when describing the comparison of current values against forecasts. As the context makes clear, the forecasts I had in mind were estimates of future component values prepared during a planning process. Most businesspeople would probably refer to those as “plan” values, leaving the term “forecast” to refer to revised estimates based on the latest available data.
I don’t think my ambiguity caused any serious harm. But it does highlight the potential for confusion when the term “forecast” is used in conjunction with lifetime value. This confusion exists on several levels.
The first is simply that LTV sometimes refers to the value since the start of a customer relationship, sometimes to future value, and sometimes to a combination of previous and future value. These distinctions were described earlier and are easy to understand. They only create confusion when people don’t know which definition is under discussion. For most applications, future LTV is the appropriate one, but I (and others) often don’t bother to state that explicitly.
Of course, the future LTV is a forecasted value, even though it’s based on current data. This is the second type of confusion: not recognizing that even “current” LTV figures incorporate projections of future activity. Again, no particular harm done, except to perhaps give an impression of more certainty than actually exists.
The third level of confusion relates to which set of customers is being measured. The average LTV (past, future or total) of your existing customers is almost certainly not the same as the LTV you’d calculate using current component values. Put another way, historical results yield different component values than recent results. Easy to understand, once you think about it. But which numbers do you define as “current” when comparing them against the plan or forecast?
As all these examples make clear, there is no one right answer to the question of what’s the “real” LTV. Different calculation methods will make sense in different circumstances. This isn’t a problem, but it does mean you have to be conscious of which method you’ve chosen, ensure comparative calculations are consistent, and document your method in case anybody wants to know the details. It’s true that few casual users will ever actually ask. But that just means you have to try still harder to use a method that is consistent with their intuitive expectations.
To help understand the calculation options more clearly, let’s step back and take a look at where LTV figures come from. The basic definition of LTV is the net cash flows associated with a customer. This is usually limited to a specific time period and discounted for net present value. Thus, you can think of LTV as a series of buckets, one for each year, where the level of water in the bucket represents the cash brought in during that year. The buckets themselves may be subdivided—perhaps, like the Internet, they are a series of tubes—into marketing cost, revenue, cost of goods, service expenses, and so on. (It’s not exactly clear how you deal with expenses in this metaphor. Perhaps they are holes beneath the buckets, or perhaps some kind of water-absorbing material. Let’s not get too literal.)
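For the concrete-minded, here is a minimal sketch of that bucket arithmetic in Python. Every figure (the yearly cash flows, the 10% discount rate, the five-year horizon) is invented for illustration; the point is simply that LTV is a discounted sum of yearly buckets.

# LTV as the net present value of a series of yearly net cash flows.
# cash_flows[0] is the year-1 bucket; each bucket is discounted back to today.
def ltv(cash_flows, rate=0.10):
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Years 1 and 2 might be actuals; years 3 through 5 are estimates.
print(ltv([120.0, 85.0, 70.0, 60.0, 50.0]))  # roughly 304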
The critical thing to realize is that each group of customers who start during a given period has its own series of buckets. Buckets that relate to past years are “filled” with actual values; buckets for future years are “filled” with estimates. Every year, one bucket switches from estimated to actual. To calculate a group’s LTV, you combine the values of all buckets for that group.
Even though each group gains just one “actual” bucket per year, there are many “actual” buckets that get filled (belonging to different groups). For example, customers in the group that started one year ago get their “year 1” actual bucket filled; customers who started two years ago get their “year 2” bucket filled, and so on. One definition of “current” LTV is the combination of the values in all the “actual” buckets that have just been filled. This makes sense because it uses the most recent data, but it does involve mixing results from different starting groups.
Such mixing can be problematic, particularly if the nature of your customers has changed over time. One partial solution is to identify homogeneous segments within the customer base and track results for each segment within each start group. You can then combine the most recent results for members of the same segment with different start years to calculate an estimated LTV at the segment level. Segment values can then be combined in a weighted average to get an over-all LTV. Of course, now you have to decide what quantities to use for your weights.
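Here is a toy version of that recombination, again with invented numbers: take the freshest “actual” bucket for each combination of segment and start year, sum the discounted values within each segment, then weight the segments by their customer counts.

# Most recent "actual" bucket for each (segment, years since start): the group
# that started N years ago has just filled its year-N bucket. All invented.
recent_actuals = {
    "premium": {1: 150.0, 2: 110.0, 3: 90.0},
    "bargain": {1: 60.0, 2: 35.0, 3: 20.0},
}
counts = {"premium": 2000, "bargain": 8000}  # weights for the over-all figure
rate = 0.10

# Per-segment "current" LTV: discounted sum of the freshest bucket per year.
segment_ltv = {
    seg: sum(cf / (1 + rate) ** yr for yr, cf in buckets.items())
    for seg, buckets in recent_actuals.items()
}

# Weighted average across segments; works out to about $138 per customer here.
overall = sum(segment_ltv[s] * counts[s] for s in counts) / sum(counts.values())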
Hopefully this clarifies why there is no one “right” method to calculate LTV. But there’s a further lesson: LTV figures are always changing. New buckets are always being added and some buckets are always changing from estimated to actual. So LTV is dynamic indeed.
In a way, this is no different from other business measures. “Profit” is also always changing as new transactions are recorded. LTV is a little worse because profit stays fixed once the books for a period are closed, whereas LTV for a previous period includes forecasted values (depending of course on the calculation method) that could require later adjustment as subsequent actuals are received. As with profit, you may choose to restate the previous period figures if there is a major discrepancy, or you may choose to book an adjustment in a subsequent period. Either way, it’s important to realize just how fluid an LTV figure can be.
That was a long digression but hopefully it clarified the roles of forecasts in LTV calculations. Now we can turn to using LTV components as forecasting tools.
This is not the same as estimating future business results using the current LTV component values. That would be done with the simulation models described earlier. Rather, this is about estimating the future of the component values themselves based on trends in their changes to date. For example, you might find that this year’s acquisition cost is $50 per customer and it has been increasing $5 per year over the past three years. After controlling for source mix, volume, and whatever other factors you can identify, you might expect that trend to continue. You would therefore estimate next year’s LTV using an acquisition cost of $55 per customer. Trends in other components would yield similar estimates for next year’s values. From these, you would derive other figures (income statement, cash flow, etc.) and estimate the LTV itself.
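To make that concrete, here is a little straight-line extrapolation in Python, with the acquisition-cost history contrived so the answer comes out to $55. A real analysis would control for mix and volume first, as noted above.

# Fit a least-squares line to a component's recent history and extend it
# one year. History runs oldest to newest; figures are invented.
def next_year_estimate(history):
    n = len(history)
    xs = range(n)
    x_bar = sum(xs) / n
    y_bar = sum(history) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, history))
             / sum((x - x_bar) ** 2 for x in xs))
    return y_bar + slope * (n - x_bar)  # value at x = n, i.e. next year

print(next_year_estimate([35.0, 40.0, 45.0, 50.0]))  # 55.0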
The advantage of this approach is that you don’t need an elaborate simulation model or trend identification system. Simply identifying the changes in LTV components lets you calculate the impact of those changes on aggregate lifetime value (LTV per customer x number of customers). You can then rank those impact values to determine which changes are most important to examine more deeply. Even the most sophisticated simulation model doesn’t do this, since it can only calculate the outcomes based on the assumptions you feed into it.
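The ranking might look something like this. The sensitivities (dollars of LTV per customer per unit change in each component) are assumed to come out of the LTV model itself; every number is invented.

# Rank observed component changes by aggregate dollar impact:
# change x sensitivity x number of customers.
customers = 100000
changes = {
    # component: (observed change, LTV impact per unit change per customer)
    "acquisition cost": (+5.00, -1.0),    # $1 of LTV lost per $1 of extra cost
    "orders per year":  (+0.20, 40.0),    # each extra order worth ~$40 of LTV
    "attrition rate":   (+0.01, -900.0),  # a point of attrition is expensive
}
impacts = {name: delta * sensitivity * customers
           for name, (delta, sensitivity) in changes.items()}
for name, impact in sorted(impacts.items(), key=lambda kv: abs(kv[1]),
                           reverse=True):
    print(f"{name}: {impact:+,.0f}")  # attrition first at -900,000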
The first step in examining LTV trends is controlling for known factors. A change in source mix could easily change the aggregate component values even if behavior within each source remained stable. (A stable aggregate value could also mask significant changes within particular segments—another reason to do segment-level analysis.) Some changes might also reflect discernible causes, such as a price increase, whose future impact can be estimated directly. But even after the known factors are considered, there may be other changes which cannot be explained. If these represent significant trends, their continuation should be built into estimates of future behavior.
To get back to the progression of applications I mentioned earlier: we can now revise it to be LTV; LTV components; comparisons of components across segments; and forecasts based on trends in components. Simulation modeling for planning, optimization and to estimate LTV component values represents a stream of related activity, but perhaps is not an LTV application in itself.
Then again, tomorrow is another day.
Friday, February 09, 2007
Uses of Lifetime Value - Part 4: Optimization and What-If Modeling
Yesterday’s post on forecasting the values of LTV components may have been a little frightening. Most managers would have a hard time translating their conventional business plans into LTV terms. The connections between the two are simply not intuitive. And how would you know if you got the right answer?
Part of the solution is technical. Given a sufficiently detailed LTV model, it is possible to plug in the expected changes in customer behavior and have a system calculate the corresponding values for the LTV components. Such a model needs three things:
- relationships among different kinds of transactions (for example: there are 0.15 customer service calls for each item sold)
- inventory of existing customers (purchase history, segment membership, etc.—whatever predicts future behavior)
- assumptions about future inputs (promotions sent, new customers added, etc.)
These can be combined to project future results, which can in turn be summarized into lifetime value components. A projection using current values gives baseline LTV components. A projection using the behavior changes expected from a particular project gives the revised LTV components.
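To make this less abstract, here is a toy projection model in Python that folds the three ingredients together: one transaction relationship (the 0.15 service calls per item from the list above), a crude customer inventory (just a starting count), and input assumptions (new customers per year). All the other figures are invented, and a real model would carry vastly more detail. Running it twice, once with baseline retention and once with the improved rate a project promises, yields the baseline and revised components.

CALLS_PER_ITEM = 0.15      # transaction relationship from the list above
ITEMS_PER_CUSTOMER = 4.0   # invented behavior of the customer inventory
MARGIN_PER_ITEM = 25.0     # invented
COST_PER_CALL = 6.0        # invented

def project(customers, new_per_year, years, retention=0.80):
    results = []
    for _ in range(years):
        # Each period's customer count feeds the next period.
        customers = customers * retention + new_per_year
        items = customers * ITEMS_PER_CUSTOMER
        calls = items * CALLS_PER_ITEM
        results.append({"customers": customers,
                        "net cash": items * MARGIN_PER_ITEM
                                    - calls * COST_PER_CALL})
    return results

baseline = project(100000, 20000, years=5)
scenario = project(100000, 20000, years=5, retention=0.85)

Summarizing each run’s per-period results is what produces the baseline and revised LTV components.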
Translation from a conventional project plan cannot be completely mechanical because some inputs are needed that a conventional plan will not include. To stick with yesterday’s example of a retention program, a typical project plan might project that 5% more customers will be retained during the year the plan is in effect. But it probably won’t estimate those customers’ behavior in the following year: will they continue to be retained at a higher rate, revert to the previous rates, or leave faster until the aggregate rate returns to normal? An LTV forecast requires managers to make that prediction or, perhaps, to rely on a company-wide policy so all such predictions are consistent. Either way, it’s more work for somebody and introduces new subjectivity into the process.
This need not be an overwhelming problem. Because the LTV model will break out its projections by time period, it is possible to focus analysis on current year results (forecast vs. actual). A separate analysis can look at a longer time horizon.
The other main issue cannot be solved technically. It is the challenge of estimating the true incremental impact of a project, on its own and in combination with other projects. The LTV approach highlights the importance of this by looking at changes in all behavior components across all projects. Yet, in fact, any project justification should be based on incremental changes and should consider the effects of other projects. So these objections are less a problem with LTV than a complaint about the cruel nature of the world itself. Get over it.
Let’s get back to that magical lifetime value model I mentioned earlier. It’s really a conventional business simulation model: you define the relationships among business inputs (new customers, product prices, retention rates, costs, etc.) and it comes up with projected results. These results are broken down by period so each period’s output can become the next period’s input. Typical outputs include income statement, cash flow, and balance sheet. Once you have all that, it’s not much more work to create the estimated values for the LTV components. The LTV itself is nothing other than a Net Present Value figure from a discounted cash flow analysis.
I’m not saying this model is easy to create. Getting it right means understanding the subtle relationships between components. For example, although we know intuitively that better customer service should result in improved retention, what is the exact relationship between those elements and how do you build it into the model? Most people would argue for an intervening variable such as a customer satisfaction score. But that just adds another level of complexity: what creates that score, and how does the score itself relate to retention? Ultimately you need to look at attributes of the customer service transactions themselves, such as response time and resolution rate. These may not be captured in a conventional business simulation since they are not standard financial measures. And they are only one set of contributors to ultimate customer satisfaction, which is why an intervening variable for customer satisfaction score may actually make sense.
So clearly some additional effort is required beyond what’s needed for the models typically used by corporate finance. Considerable research may be needed to accurately understand the relationships that drive model results. The model may also include non-financial measures like customer satisfaction. But both of these are valuable requirements: companies really should understand what drives their results, and linking non-financial measures to financial results allows them to be incorporated into financial models. In other words, the added research needed for the LTV model is worth the effort.
The LTV model can be applied to traditional business forecasting: given this set of assumptions, what results will we get? It can also be used for what-if scenarios: calculate the results of different sets of assumptions either to find optimal resource allocations or for risk analysis of different contingencies. The forecasts can be applied to strategic decisions such as a major investment or to tactical choices such as alternative marketing campaigns and business rules. Of course, more tactical decisions require increasing levels of detail in the model itself.
Forecasts can also help us understand the implications of a scenario: since more sales will mean more calls to customer service, do we have the call center capacity to handle them? This information can be used for resource planning and to highlight potential bottlenecks. A sophisticated system would incorporate capacity figures for such resources and issue warnings when they would be exceeded. A more sophisticated system would project the results of exceeding capacity (diversion of customers from the call center to the Web in the short term; lower satisfaction and higher attrition in the long run). An even more sophisticated system would look at all these factors and identify optimal investment decisions.
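Continuing that toy model, the basic capacity warning takes only a few lines; the calls-per-customer ratio and the capacity figure are, again, invented.

# Flag projection periods where service calls would exceed call-center
# capacity. Takes the per-period results from the toy project() sketch above.
CALLS_PER_CUSTOMER = 0.6  # 4 items per customer x 0.15 calls per item
CALL_CAPACITY = 90000     # invented annual call-center capacity

def capacity_warnings(projection):
    for year, period in enumerate(projection, start=1):
        calls = period["customers"] * CALLS_PER_CUSTOMER
        if calls > CALL_CAPACITY:
            yield f"Year {year}: {calls:,.0f} projected calls exceed capacity"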
Sophisticated modeling systems of this type do exist, although they’re rare. But much simpler models can also create forecasts of LTV components. This is enough to generate forecast values to compare with actual results. Even if the predictions are less than precise, they’ll help managers understand what the LTV components mean, how they fit together, and how their actions affect them. This in turn will build a deeper understanding of the business, a shared frame of reference, and continued focus on building customer value: the key benefits of an LTV-oriented organization.
Thursday, February 08, 2007
Uses of Lifetime Value - Part 3: Forecasts
Yesterday’s post discussed how values for LTV components can be compared across time and customer segments to generate insights into business performance. But even though such comparisons may uncover trends worth exploring, they do not tell managers what they really need to know: is the business running as planned? To do this, actual LTV figures must be compared with a forecast.
The mechanics of this comparison are easy enough and pretty much identical to comparisons against time or customer segments. The real question is where the forecast values will come from.
You’re expecting me to say they’ll be generated by the lifetime value model itself, aren’t you? Well, maybe. The problem is that business plans aren’t built around LTV models. They’re built around projects: marketing campaigns, sales programs, product introductions, plant openings, system deployments, and the rest. (Of course, some companies just plan by projecting from last year’s figures. It’s easy to calculate the expected LTV changes implicit in such a plan, since there is no program detail to worry about.)
The trick, then, is to convert project plans into LTV forecasts. In a sense, this is easy: all you have to do is estimate the change in LTV components that will result from each project. But building such estimates is hard.
It’s hard for two reasons. First, most business projects are not conceived in LTV terms. They are based on adding new customers or increasing retention or cutting costs or whatever. To build them into an LTV forecast, these objectives must be restated as changes in LTV components.
Much of the information needed to define the component changes will have already been assembled during the original project analysis. With this as a base, creating the component forecast is more a matter of reconfiguring existing information than developing anything new. One exception is the difference in time frame: many project plans are aimed at short term results, while LTV by definition includes behavior over a long horizon. This is actually a benefit of doing the LTV forecast, since it forces managers to consider the long term effects of their actions. But it also requires more work as part of the planning process. Companies will need to develop a reasonable approach to this issue and then train managers to apply it consistently.
The work is somewhat reduced by the fact that most projects are really focused on just a few LTV components. For example, a retention project is mostly about increasing the length of the customer’s lifetime. This means the LTV impact can be defined as changes in only the affected components, without considering the others. Even though this is oversimplifying a bit, it’s a reasonable shortcut to take for practical purposes. (On the other hand, one of the benefits of using LTV as a company-wide management metric is that it encourages everyone to consider the impact that the efforts of their group have on other departments and the customer experience. So you do want managers to at least consider the effects of their projects across all LTV components.)
The second and even more challenging problem with defining the LTV impact of individual projects is that nearly all projects affect only a subset of the entire customer base. An acquisition program in marketing only affects the new customers it attracts; a change in customer service only affects people who call in with problems; an improvement to a product only affects people who buy it.
Counting the number of customers affected by a program isn’t that difficult. That number will always be part of the project plan to begin with. But the LTV analysis needs to know who these people are so it can determine their baseline LTV component values. Many project plans do not go into this level of detail.
Some attributes of the affected customers will be obvious. They are customers from a particular source or users of a particular product or customers in a particular channel. But it’s also important to remember that those affected will be at different stages in their life cycle: that is, some will be newer than others. (New customer acquisition programs are the obvious exception.)
Since future LTV usually changes as customers stay around longer (generally increasing, sometimes decreasing), it would be a big mistake to use the new customer LTV as a baseline. Instead, you have to identify the future LTV for each set of customers affected by the program, segmenting them on tenure in addition to whatever other attributes you’ve identified. As discussed yesterday, a good LTV system should provide these segmentation capabilities.
Once you’ve calculated the baseline LTV components for the major segments, it’s tempting to aggregate them into a single figure before proceeding. But this is an area where averages can be misleading. Assume you’re planning a program that will yield a 5% increase in retention. This will probably be most effective among newer customers. But those customers probably have lower-than-average future LTVs (precisely because they are more likely to leave). This means the actual value gained from the program will be less than if its impact were spread evenly across the entire universe. In terms of LTV components, the expected future tenure of the newer customers is smaller than average, so the anticipated change in tenure (in absolute terms such as years per customer) would be smaller as well. (Of course, the actual impact is an empirical question—perhaps the retained customers will turn into fanatic loyalists who stay forever. Though I doubt it.)
The point here is that once you’ve identified the customer segments affected by a program, you need to calculate their baseline components and expected changes separately for each segment. Only then can you aggregate them for reporting purposes. And of course you’ll want to retain the segment detail to compare against the actuals once these start coming in.
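Here is what the segment-level arithmetic might look like, with figures invented to echo the retention example. The naive shortcut (average LTV times average uplift times total customers) overstates the program’s value because the uplift is concentrated in the low-LTV, short-tenure segment.

# Program value computed segment by segment. All figures invented.
segments = {
    # name: (customers affected, future LTV per customer, expected uplift)
    "tenure < 1 yr": (50000, 200.0, 0.05),  # program bites hardest here
    "tenure 1-3 yr": (30000, 450.0, 0.02),
    "tenure 3+ yr":  (20000, 700.0, 0.00),
}
program_value = sum(n * ltv_per * uplift
                    for n, ltv_per, uplift in segments.values())  # 770,000

# The misleading shortcut: averages first, multiplication second.
total = sum(n for n, _, _ in segments.values())
avg_ltv = sum(n * ltv_per for n, ltv_per, _ in segments.values()) / total
avg_uplift = sum(n * uplift for n, _, uplift in segments.values()) / total
naive_value = total * avg_ltv * avg_uplift  # 1,162,500: half again too high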
You’ll also want the segment detail to help in the final stage of consolidation, when expected changes from each program are combined into an over-all LTV forecast. The issue here is that many programs will impact overlapping sets of customers, and it would be unrealistic to expect their incremental effects to be purely additive. So managers need a way to calculate the consolidated impact of all expected changes and to reduce those which seem excessive. Doing this at a segment level is essential—and it requires that definitions be standardized across plans, since you otherwise won’t be able to consolidate the results cleanly.
An alternative consolidation method would work at the level of individual customers. Under this approach, the company would associate plans with individuals and then calculate aggregate results for each person. This is an intriguing possibility but probably beyond the capabilities of most firms.
Daunting as the consolidation process may seem, it’s worth recognizing that even conventional, project-based planning systems should do something similar. Which is not, of course, to say that they actually do.
It should be clear by now that a bottom-up approach to creating LTV forecasts is a substantial project. A much simpler approach would be to first create the traditional consolidated business forecast, and then derive the LTV components from that. The component forecasts could be created down to the same level of detail as the business forecasts: by division, product line, country, or whatever. This approach wouldn’t provide LTV impact forecasts for individual programs or customer segments. Nor would it force managers to view their programs in LTV terms while building those forecasts. But it’s an easier place to start.
However the forecasts are created, you need them to judge whether actual results are consistent with expectations. Again, this is no different from any other business management system: comparisons against history are interesting, but what really counts is performance against plan.
Wednesday, February 07, 2007
Uses of Lifetime Value - Part 2: Component Analysis
Yesterday I began discussing the uses of Lifetime Value models. The first set of applications use the model outputs themselves—the actual estimates of Lifetime Value for individual customers or customer groups. But in many ways, the components that go into those estimates are more useful than the final values. Today we’ll look at them.
All lifetime value calculations ultimately boil down to the same formula: lifetime revenue minus lifetime costs. These in turn are always built from the same customer interactions—promotions, purchases, support requests, and so on. If you only want to look at the final LTV number, it doesn’t matter how these elements are used to get it. But if you want to understand what went into the number, the model must be built with components that make sense in your particular business.
For example, magazine publishers think primarily in terms of copies sold. The revenue portion of a publisher’s lifetime value model will therefore be: copies sold x revenue per copy. Product, service and most other costs will also be stated in terms of cost per copy. The primary exception is promotion costs, which are typically listed separately for initial acquisition. Renewal promotions can also be listed separately, although they are sometimes so negligible that they are simply lumped into the per copy cost with the rest of customer service. In sum, then, a publisher’s lifetime value model might look like:
LTV = (number of initial copies x (initial price per copy – initial cost per copy))
+ (number of renewal copies x (renewal price per copy – renewal cost per copy))
– (acquisition cost)
Note that number of orders, value per order, and average years per customer—all seemingly natural components for a lifetime value model—do not even appear.
In practice, some of those details may well be used in the model. For example, the number of orders is needed to calculate order processing costs accurately. Similarly, the timing of the events is needed to do discounted cash flow analysis. But those details are not necessarily useful to managers trying to understand the general state of their business. This means they can be hidden within the lifetime value calculation and not displayed in most reports.
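For completeness, the publisher model reduces to a few lines of Python. I have left out discounting to keep the sketch simple (a real model would time each cash flow, as just noted), and the sample figures are invented.

# Publisher LTV: initial margin plus renewal margin, less acquisition cost.
def publisher_ltv(acquisition_cost,
                  initial_copies, initial_price, initial_cost,
                  renewal_copies, renewal_price, renewal_cost):
    initial_margin = initial_copies * (initial_price - initial_cost)
    renewal_margin = renewal_copies * (renewal_price - renewal_cost)
    return initial_margin + renewal_margin - acquisition_cost

# A $25 acquisition, 12 initial copies, 20 renewal copies (all invented).
print(publisher_ltv(25.0, 12, 3.00, 1.75, 20, 3.50, 1.60))  # 28.0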
Some traditional industry metrics may not fit naturally into the lifetime value calculation. Sticking with magazines, publishers traditionally look at renewal rates as a key indicator of business health. You could restructure the previous model to include renewal rates, but it’s not clear this gives more insight than average renewal copies per customer. In fact, there’s a good argument that renewal rates are actually a less useful measure because they are impacted by extraneous factors such as changes in the renewal offers.
The point here is simply that the components of the lifetime value model are intelligible only if they match the industry at hand. More specifically, they must show the key performance factors that determine the health of the business.
The special advantage of using model components as key performance indicators is that you can show the impact of any change in terms of actual business value.
This is a key point. It’s easy to come up with a list of important business metrics. But it’s not necessarily clear what a change in, say, a customer satisfaction rating actually means in terms of profit. At best there may be some rules of thumb based on historical correlations. This may carry some weight with decision-makers, but it is nowhere near as compelling as a statement that lifetime revenue per customer has dropped 2%, which translates into $145 million in future value. Even though the number is known to be an estimate, it has a specific value that immediately indicates its approximate importance and therefore how urgently managers should react to it.
The other advantage of dealing with model components is that the connections among them are clear. If the 2% revenue decline is accompanied by a 5% decrease in acquisition costs, or perhaps a 10% increase in number of new customers, managers can see immediately whether there is really a problem, or in fact something has gone very right. Although using the model for what-if modeling is a topic for another day, simply laying out the relationships among the key performance indicators improves the ability of everyone in the company to understand how the business works.
Of course, interpreting the values of LTV components is difficult in isolation. Is an average life of 2.5 years good or bad? Experienced managers will have some sense of reasonable values based on their own backgrounds. But even they need to look at the numbers in comparison with something.
The two major bases for comparison are time periods and customer segments. Trends in measures over time are easy to understand—either they’re up or down, and depending on whether they are revenues or costs that’s either a good or a bad thing. Again, one virtue of model-based components is you can see the changes in context: if revenue went down but cost went down more, maybe things are really okay.
The interval at which you measure trends will depend on the business—it could be yesterday vs. today or it could be this year vs. last year. But since lifetime value is a long-term measure, you have to be careful not to react to random swings over short time periods. The amount of time you need to wait to detect statistically significant differences will depend mostly on the volume of data available. You also need to be sensitive to external influences such as seasonality.
Customer segments are more complicated than time periods simply because there are so many more possible definitions. The segments could be based on customer demographics, purchase behavior, start date, acquisition source, initial offer, initial product, or just about anything else. There’s no need to pick just one set: different segmentations will matter for different purposes. Whatever definitions you use, you’ll compare different segments to each other, and to themselves over time.
In fact, the first explanation to consider for many changes between time periods is that the mix of customer segments has changed. This will change aggregate lifetime value for the business even if behavior within each segment is the same. This in itself is a useful finding, of course, since it immediately points the rest of the analysis towards understanding how and why the customer mix changed.
And that is exactly the point: we look at the value of LTV components not because they’re fascinating in themselves or because we want to know whether to draw a happy face or sad face on the cover of the report (anyone who does that should be fired immediately anyway, so it should never be an issue.) We look at them because they indicate at a high level what’s happening in the business, giving us hints of what needs to be examined more closely.
A good LTV system enables this closer examination as well. Drill-downs should permit us both to examine the basic model components for different time periods and customer segments, and to explore the details within the components themselves. At some point you will reach the finest level of detail captured in the LTV system and have to look elsewhere for additional explanations—it doesn’t make sense for the LTV system to incorporate every bit of information in the company. But making large amounts of detail accessible without leaving the system is definitely a Good Thing.
Important as drill-downs are, they rely on a manager or analyst to do the drilling. A really good LTV system does some of that drilling automatically, identifying trends or variances that warrant further exploration. Now we’re entering the realms of automated data mining, but this doesn’t have to be particularly esoteric. Since the LTV model captures the relationships among components within the LTV calculation, an LTV system can easily calculate the impact of a change in any one component on the final value itself. Multiplied by the number of customers, this gives a dollar amount that can be used to rank all observed changes by importance. Where the number of customers itself changes between periods, the system can further divide the variance into rate, volume and joint variances—a classic analysis that is easy to do and understand.
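That decomposition is trivial to code. A sketch, with invented figures:

# Split a change in aggregate value (customers x value per customer) into
# rate, volume and joint variances.
def decompose(old_count, old_value, new_count, new_value):
    rate_var = old_count * (new_value - old_value)
    volume_var = old_value * (new_count - old_count)
    joint_var = (new_count - old_count) * (new_value - old_value)
    return rate_var, volume_var, joint_var

# 100,000 customers at $380 becomes 110,000 at $360: rate -2,000,000,
# volume +3,800,000, joint -200,000; the three sum to the +1,600,000 change.
print(decompose(100000, 380.0, 110000, 360.0))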
Doing this sort of automated analysis on figures for the company as a whole is probably overkill. After all, the top-line LTV model will probably have just a dozen or so components. Managers can eyeball changes in those pretty easily. More important, stable figures for the company as a whole can easily mask significant changes in behavior of particular segments. It’s therefore more important for the LTV system to automatically examine changes in component values for a standard set of customer segments and to identify any significant variances at that level. The ranking mechanism—number of customers x change in value per customer—is exactly the one already described. A really advanced system would even find patterns among the variances, such as a weakness in retention rates across multiple segments. That one might require some serious artificial intelligence.
One problem with any automated detection system is false alarms. If the company is purposely managing a change in its business—say, by increasing prices—a system that simply compared past vs. current periods might find significant variances that are totally expected. Although these can easily be ignored, the real problem is comparisons against the past won’t tell whether the observed changes are actually in line with the changes anticipated in the business plan. This means that comparisons by time period and customer segment must be joined by a third dimension: comparisons against forecasted values. I’ll talk about forecasts tomorrow.
All lifetime value calculations ultimately boil down to the same formula: lifetime revenue minus lifetime costs. These in turn are always built from the same customer interactions—promotions, purchases, support requests, and so on. If you only want to look at the final LTV number, it doesn’t matter how these elements are used to get it. But if you want to understand what went into the number, the model must be built with components that make sense in your particular business.
For example, magazine publishers think primarily in terms of copies sold. The revenue portion of a publisher’s lifetime value model will therefore be: copies sold x revenue per copy. Product, service and most other costs will also be stated in terms of cost per copy. The primary exception is promotion costs, which are typically listed separately for initial acquisition. Renewal promotions can also be listed separately, although they are sometimes so negligible that they are simply lumped into the per copy cost with the rest of customer service. In sum, then, a publisher’s lifetime value model might look like:
LTV = (Acquisition cost) – (number of initial copies x (initial price per copy – initial cost per copy))
+ (number of renewal copies x (renewal price per copy – renewal cost per copy))
Note that number of orders, value per order, and average years per customer—all seemingly natural components for a lifetime value model—do not even appear.
In practice, some of those details may well be used in the model. For example, the number of orders is needed to calculate order processing costs accurately. Similarly, the timing of the events is needed to do discounted cash flow analysis. But those details are not necessarily useful to managers trying to understand the general state of their business. This means they can be hidden within the lifetime value calculation and not displayed in most reports.
Some traditional industry metrics may not fit naturally into the lifetime value calculation. Sticking with magazines, publishers traditionally look at renewal rates as a key indicator of business health. You could restructure the previous model to include renewal rates, but it’s not clear this gives more insight than average renewal copies per customer. In fact, there’s a good argument that renewal rates are actually a less useful measure because they are impacted by extraneous factors such as changes in the renewal offers.
The point here is simply that the components of the lifetime value model are intelligible only if they match the industry at hand. More specifically, they must show the key performance factors that determine the health of the business.
The special advantage of using model components as key performance indicators is that you can show the impact of any change in terms of actual business value.
This is a key point. It’s easy to come up with a list of important business metrics. But it’s not necessarily clear what a change in, say, a customer satisfaction rating actually means in terms of profit. At best there may be some rules of thumb based on historical correlations. This may carry some weight with decision-makers, but it is nowhere near as compelling as a statement that lifetime revenue per customer has dropped 2%, which translates into $145 million in future value. Even though the number is known to be an estimate, it has a specific value that immediately indicates its approximate importance and therefore how urgently managers should react to it.
The other advantage of dealing with model components is that the connections among them are clear. If the 2% revenue decline is accompanied by a 5% decrease in acquisition costs, or perhaps a 10% increase in number of new customers, managers can see immediately whether there is really a problem, or in fact something has gone very right. Although using the model for what-if modeling is a topic for another day, simply laying out the relationships among the key performance indicators improves the ability of everyone in the company to understand how the business works.
Of course, interpreting the values of LTV components is difficult in isolation. Is an average life of 2.5 years good or bad? Experienced managers will have some sense of reasonable values based on their own backgrounds. But even they need to look at the numbers in comparison with something.
The two major bases for comparison are time periods and customer segments. Trends in measures over time are easy to understand—either they’re up or down, and depending on whether they are revenues or costs that’s either a good or a bad thing. Again, one virtue of model-based components is you can see the changes in context: if revenue went down but cost went down more, maybe things are really okay.
The interval at which you measure trends will depend on the business—it could be yesterday vs. today or it could be this year vs. last year. But since lifetime value is a long-term measure, you have to be careful not to react to random swings over short time periods. The amount of time you need to wait to detect statistically significant differences will depend mostly on the volume of data available. You also need to be sensitive to external influences such as seasonality.
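For what it's worth, one simple way to check whether a period-over-period change is more than noise is a two-proportion z-test. This is purely an illustrative sketch; the renewal counts are invented.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Is the rate in period B significantly different from period A?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical: 6,100 of 10,000 renewals last period vs. 5,950 of 10,000 now.
z = two_proportion_z(6100, 10000, 5950, 10000)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests a real change at the 5% level
```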
Customer segments are more complicated than time periods simply because there are so many more possible definitions. The segments could be based on customer demographics, purchase behavior, start date, acquisition source, initial offer, initial product, or just about anything else. There’s no need to pick just one set: different segmentations will matter for different purposes. Whatever definitions you use, you’ll compare different segments to each other, and to themselves over time.
In fact, the first explanation to consider for many changes between time periods is that the mix of customer segments has changed. This will change aggregate lifetime value for the business even if behavior within each segment is the same. This in itself is a useful finding, of course, since it immediately points the rest of the analysis towards understanding how and why the customer mix changed.
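A tiny worked example (all numbers invented) shows the effect: each segment's LTV is unchanged, but aggregate LTV moves because the mix of new customers shifted.

```python
# Segment LTVs are held constant; only the mix of new customers changes.
segment_ltv = {"web": 80.0, "direct_mail": 150.0}   # hypothetical values

mix_last = {"web": 0.40, "direct_mail": 0.60}
mix_now  = {"web": 0.60, "direct_mail": 0.40}

agg_last = sum(segment_ltv[s] * mix_last[s] for s in segment_ltv)
agg_now  = sum(segment_ltv[s] * mix_now[s] for s in segment_ltv)
print(agg_last, agg_now)  # 122.0 vs. 108.0: aggregate falls on mix shift alone
```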
And that is exactly the point: we look at the value of LTV components not because they're fascinating in themselves or because we want to know whether to draw a happy face or sad face on the cover of the report (anyone who does that should be fired immediately anyway, so it should never be an issue). We look at them because they indicate at a high level what's happening in the business, giving us hints of what needs to be examined more closely.
A good LTV system enables this closer examination as well. Drill-downs should permit us both to examine the basic model components for different time periods and customer segments, and to explore the details within the components themselves. At some point you will reach the finest level of detail captured in the LTV system and have to look elsewhere for additional explanations—it doesn’t make sense for the LTV system to incorporate every bit of information in the company. But making large amounts of detail accessible without leaving the system is definitely a Good Thing.
Important as drill-downs are, they rely on a manager or analyst to do the drilling. A really good LTV system does some of that drilling automatically, identifying trends or variances that warrant further exploration. Now we’re entering the realms of automated data mining, but this doesn’t have to be particularly esoteric. Since the LTV model captures the relationships among components within the LTV calculation, an LTV system can easily calculate the impact of a change in any one component on the final value itself. Multiplied by the number of customers, this gives a dollar amount that can be used to rank all observed changes by importance. Where the number of customers itself changes between periods, the system can further divide the variance into rate, volume and joint variances—a classic analysis that is easy to do and understand.
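The rate/volume/joint split is mechanical enough to sketch in a few lines. In this illustration "rate" is value per customer and "volume" is customer count; the figures are hypothetical.

```python
def variance_decomposition(n_old, v_old, n_new, v_new):
    """Split total dollar variance into rate, volume and joint components.

    total variance = n_new * v_new - n_old * v_old
    """
    rate   = n_old * (v_new - v_old)            # change in value per customer
    volume = v_old * (n_new - n_old)            # change in customer count
    joint  = (n_new - n_old) * (v_new - v_old)  # interaction of the two
    return rate, volume, joint

rate, volume, joint = variance_decomposition(n_old=100_000, v_old=120.0,
                                             n_new=110_000, v_new=117.6)
print(rate + volume + joint)  # 936000.0, equal to the total dollar variance
```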
Doing this sort of automated analysis on figures for the company as a whole is probably overkill. After all, the top-line LTV model will probably have just a dozen or so components. Managers can eyeball changes in those pretty easily. More important, stable figures for the company as a whole can easily mask significant changes in behavior of particular segments. It’s therefore more important for the LTV system to automatically examine changes in component values for a standard set of customer segments and to identify any significant variances at that level. The ranking mechanism—number of customers x change in value per customer—is exactly the one already described. A really advanced system would even find patterns among the variances, such as a weakness in retention rates across multiple segments. That one might require some serious artificial intelligence.
One problem with any automated detection system is false alarms. If the company is purposely managing a change in its business—say, by increasing prices—a system that simply compared past vs. current periods might find significant variances that are totally expected. Although these can easily be ignored, the real problem is comparisons against the past won’t tell whether the observed changes are actually in line with the changes anticipated in the business plan. This means that comparisons by time period and customer segment must be joined by a third dimension: comparisons against forecasted values. I’ll talk about forecasts tomorrow.
Tuesday, February 06, 2007
Uses of Lifetime Value - Part 1
Yesterday I came down firmly in the middle of the great debate between Return on Investment and Lifetime Value as the primary measure for business decisions. My heart lies with Lifetime Value, but the realist in me knows you have to consider both.
The realist in me also knows that most of the work with either measure is projecting future customer behavior. This provides the inputs needed for both types of calculations. I suggested yesterday that a company might build one comprehensive lifetime value model or many project-specific ROI models, but I suppose on reflection that you could build comprehensive or project-specific models under either approach. The advantage of a comprehensive model is obvious: you can apply it to many projects, thereby ensuring consistent analysis and justifying the investment needed to build a sophisticated model. Sophistication is critical because making the right choice depends increasingly on understanding the long-term effects of a business decision, and it takes a sophisticated model to estimate these accurately.
The good news is that a sophisticated model has many applications. I think it’s worth laying these out in some detail to encourage the required investment. These applications fall into five main groups:
- conventional LTV applications
- drill-down into LTV components
- forecasting
- optimization
- what-if modeling
Over the next few days I’ll be looking at each of these in detail. Let’s start with the conventional applications.
The conventional applications of LTV all use the value figure itself. That is, they essentially answer the question, “What is a customer worth?”
Probably the most common conventional application is setting allowable acquisition costs. These are usually applied as targets for marketing campaigns, although they can also help in setting purchase prices when acquiring a company. Indeed, cost per subscriber is a typical yardstick in evaluating acquisitions in industries including telecommunications (landline, mobile, cable) and utilities (gas, water, electric). Although attrition is a much bigger issue in communications than utilities, consumer behavior in both industries is highly predictable, so accurate lifetime value models are fairly easy to build. (This has not prevented many telecommunications firms from paying more for a company than its customers are actually worth. This is not because they can’t calculate LTV, but because they hope the value will change due to some combination of scale economies, new product sales, and perhaps higher prices made possible by reduced competition. A certain amount of corporate gamesmanship is also often involved, such as a desire to block the expansion of other potential acquirers or to grow big enough to avoid being acquired.)
Although corporate acquirers often pay more than the estimated value of a customer, most marketers take the opposite tack and set the allowable acquisition cost at considerably less than the new customers’ expected LTV. This reflects a very realistic understanding that lifetime value figures are inherently subject to risk because they include assumptions about future behavior that may or may not come true. The most conservative approach is to not rely on these behaviors at all and to justify acquisition efforts based only on the initial sale. But in many businesses this would greatly reduce long-term profits by choking off new customer acquisitions. A more common approach is to apply a fairly steep discount rate to the estimated value of future profits, or, more simply, to build in a cushion of safety through a business rule such as “acquisition cost should never exceed 60% of estimated lifetime value”.
Competent marketers also intuitively recognize that customer value can differ greatly by source. They therefore calculate lifetime value separately for each source of customers and sometimes for more subtle distinctions such as the nature of the initial offer (for example, sweepstakes vs. non-sweepstakes in magazine subscriptions). They can then set appropriate allowable acquisition costs for each group. This helps to ensure a more productive allocation of promotion budgets, yielding better long-term results even though average acquisition costs may actually increase. (Incidentally, when making these sorts of comparisons, adjusting the allowable acquisition cost by applying a fixed cushion such as 60% of estimated value does not give the same result as applying a higher discount rate. In a sample calculation, I found the fixed cushion favored a higher-value source—that is, when the allowable acquisition cost for a lower-value source was held constant, the cushion method yielded a higher allowable acquisition cost for the higher-value source than the increased discount rate. Yes, I know the preceding sentence may be incomprehensible. The point is that you should probably use the adjusted discount rate method, which is more theoretically justified. If you’re not comfortable with discount rates in general, at least do the calculation both ways and consider the difference.)
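To see why the two adjustments diverge, here's a sketch that computes allowable acquisition cost both ways for two invented profit streams. The direction and size of the gap depend on the cash-flow shapes, so treat this purely as an illustration, not a reproduction of my sample calculation.

```python
def npv(cash_flows, rate):
    """Present value of annual profits; first element is received in year 1."""
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

flows_low  = [30, 25, 20]         # hypothetical profit stream, lower-value source
flows_high = [30, 30, 30, 30]     # hypothetical profit stream, higher-value source

base_rate, risk_rate, cushion = 0.08, 0.20, 0.60

for name, flows in [("low-value source", flows_low),
                    ("high-value source", flows_high)]:
    cushioned  = cushion * npv(flows, base_rate)   # 60% of base-rate LTV
    discounted = npv(flows, risk_rate)             # steeper discount, no cushion
    print(f"{name}: cushion ${cushioned:.2f}, discount ${discounted:.2f}")
```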
Lifetime value is also used as a guide to investments in retention of existing customers. The trick here is to recognize that the relevant figure is future lifetime value. What a customer has already spent with you may be a good indication of their future behavior, but the value of those past sales has already been received. There are (at least) two possible errors here. One is to use past value as a measure of future value; the other is to estimate total value and then subtract past value from it, treating the difference as what’s expected in the future.
The fallacy of using past value is self-evident. The fallacy of the other error is that total lifetime value as estimated at the start of a customer’s life is an average that includes many customers who will leave over time. The estimated value of the remaining customers has to be recalculated as time progresses since those customers are part of a survivor group which have already lasted longer than some of the original starters. This calculation itself must be done correctly: if the standard lifetime value calculation looks at, say, a five year period, the projected value of these current customers should look out five years from today, not from their acquisition date.
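A minimal sketch of that recalculation, with invented margin, retention, and discount figures: the projection applies retention and discounting to the five years ahead of today, not to years the customer has already survived.

```python
def future_value(annual_margin, retention_rate, horizon_years, discount_rate):
    """Expected discounted future value of a *current* customer, looking
    `horizon_years` forward from today (not from the acquisition date)."""
    value, survival = 0.0, 1.0
    for year in range(1, horizon_years + 1):
        survival *= retention_rate   # chance the customer is still around
        value += survival * annual_margin / (1 + discount_rate) ** year
    return value

# Hypothetical: $100 annual margin, 80% retention, 5-year window, 10% discount.
print(f"${future_value(100, 0.80, 5, 0.10):.2f}")
```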
In most cases, the expected future value of an existing customer will be higher than the expected future value of a new customer, but there can be exceptions (life insurance or, less morbidly, an auto lease). In any case, the customer’s future value indicates the amount that can be spent to retain this customer. Because acquisition costs have already been spent, the future value is usually relatively large. But while it’s technically correct to ignore the past costs, there is a danger in looking solely at the future: it’s possible to spend so much on retention that, even though you make an incremental profit on the retention effort, you lose money on the customer as a whole. Imagine, for example, that you just spent $50 to acquire a customer with a value of $60, but just after you send them their first bill, you find you have to spend another $50 to collect it. It’s worth spending $50 to collect $60, but you will have spent $100 in total on that customer. If this happens too often, you’re out of business. So even though it’s too late for that particular customer, it’s important to consider the retention costs when estimating allowable cost for future acquisition campaigns.
A high retention cost is also a hint that you should look carefully at the estimated future value of the customer at hand: will this one expenditure do it, or are they going to need similar retention incentives in the future? One good thing about current customers is you have a lot of information about them, at least relative to most prospects. This means you can predict their behavior with greater precision. A customer who needs a major retention incentive is certainly not average in that sense, so her future lifetime value is probably not average either. In fact, it’s a safe bet that many companies lose considerable amounts of money on such retention-intensive customers precisely because they set their guidelines for allowable retention costs based on average future values. Precisely the same logic applies to customer service costs, which in a way are retention costs too. It’s easy to lose money on service-intensive customers, so at some point you have to find ways to change their behavior or to charge them for what they’re costing you.
Of course, retention and service costs are components within the LTV figure. I’ll talk more tomorrow about making use of them.
Monday, February 05, 2007
Return on Investment is Only Part of the Solution
I was reading a paper on measuring return on investment for marketers this weekend and thought that the author had misclassified a set of expenses in one of the examples. The specific issue was whether a gift certificate given to respondents is part of the marketing investment or the cost of sales. It matters because including the gift in the marketing investment increases the denominator in the return on investment ratio (profit / investment), thereby lowering the ROI. Profit is not affected because marketing investment and cost of sales are both expenses.
You could argue this case either way. If the real definition of marketing investment is the amount that will be spent regardless of whether anyone responds, as James Lenskold argues in Marketing ROI, then the gift would not be part of the marketing investment. But in many companies, the gift would be charged to the marketing budget. This would reduce the funds available for other marketing efforts and really should be considered a marketing cost.
I don’t have a strong feeling about the answer. To me what this highlights is the difficulty of working with any return on investment calculation. Not only must you estimate cash flows correctly, as with any business analysis, but you face the added challenge of classifying those flows properly. As Lenskold’s thoughtful book illustrates, this gets quite complicated when you start looking at the details.
The question is whether it’s worth the trouble. The fundamental benefit of ROI is that it gives a single number that can rank investment opportunities in terms of how productively they use capital. The problem is that the ROI value doesn’t indicate the total amount of return. A small investment with a high ROI might make less sense than a larger investment with a slightly lower ROI. (The answer depends on what you would do with the capital remaining after the smaller investment.) Things get especially complicated when you start looking at customer investments, where many options interact, conflict, or are mutually exclusive.
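A couple of invented figures make the trade-off concrete: the smaller, higher-ROI project wins only if the capital it frees up can earn enough elsewhere.

```python
# Hypothetical figures: which project is better depends on what the leftover
# capital can earn.
small = {"invest": 100_000, "roi": 0.50}   # returns  $50,000 profit
large = {"invest": 500_000, "roi": 0.40}   # returns $200,000 profit

leftover = large["invest"] - small["invest"]   # $400,000 freed up by going small
hurdle = (large["invest"] * large["roi"]
          - small["invest"] * small["roi"]) / leftover
print(f"Leftover capital must earn {hurdle:.0%} to justify the smaller project")
```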
This is why I keep coming back to the notion of a customer value model that incorporates all the interactions a customer has with the company and calculates the resulting cash flows. Return on investment can be one output from such a model, but it isn’t the main focus. Rather, such a model attempts to estimate the net value of different combinations of customer treatments, and find the one which gives the best results. In other words, the focus is on return per customer, which in most businesses today is arguably a more constrained resource than capital.
In some ways, this is simply a difference in focus. You still have to consider return on investment, but in combination with customer value. Similarly, you still have to model specific business projects (marketing campaigns, customer service policies, new products), but also customer behaviors. If anything, the customer-based models will be even more complicated than the models that estimate project ROI. But you’ll build one customer-based model rather than many project models, and you’ll explicitly include interactions among projects rather than trying to integrate them after the fact. I believe this gives a more meaningful and ultimately more relevant view of your business, one which is likely to focus attention in the proper direction (customer experience) and result in better long-term decisions.
Friday, February 02, 2007
Even Merchandisers Must Be Customer-Centric
For no particular reason, I’ve been reading a lot of retail white papers recently. Most have to do with customer-centric marketing, and describe the usual plethora of clever things you can do to improve relationships with individuals. But one—it happened to be about business intelligence systems—highlighted the importance of merchandise managers. This isn’t news to me: merchandisers are the kings and queens in most retail organizations, for the excellent reason that nothing else matters if people don’t want to buy what a store is selling.
The question I asked myself (well, technically, I asked the cat because otherwise I’d be talking to myself and that would mean I’m crazy) was whether merchandise analysis is outside the scope of customer experience management. This matters because our premise at Client X Client is that everything should be analyzed through the prism of customer value. So it’s important to check out possible exceptions.
Traditional merchandise analysis is certainly not customer-based. It looks at sales by product, and then summarizes or subdivides the results by product group, region, season, and so on. This is an art in itself, so if adding a customer dimension doesn’t provide any more value, you wouldn’t want to do it.
But looking at products independently of customers can lead to some major mistakes. Grocery stores know this intuitively: they carry low-margin or even loss-creating items like milk to get customers into the doors and make their profits elsewhere. I suspect every category of retail has similar product relationships. These are generally uncovered through market basket analysis, which is really a form of customer-centric analysis if you think about it. Looking at the same customers’ market baskets over time would probably yield additional insights, and that’s exactly what true customer-level analysis does.
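As a sketch of the idea (the transactions here are invented), basic market basket analysis just counts co-occurring products; keying baskets by customer ID is what lets you extend the same analysis across visits.

```python
from collections import Counter
from itertools import combinations

# Hypothetical baskets keyed by customer ID; real data would come from POS
# or loyalty-card systems.
baskets = [
    ("cust_1", {"milk", "cereal", "bananas"}),
    ("cust_2", {"milk", "coffee"}),
    ("cust_1", {"milk", "cereal"}),   # same customer, later visit
    ("cust_3", {"coffee", "cereal"}),
]

# Classic market basket view: which product pairs appear together most often?
pair_counts = Counter()
for _, items in baskets:
    for pair in combinations(sorted(items), 2):
        pair_counts[pair] += 1
print(pair_counts.most_common(3))

# Customer-level view: pool the same shopper's baskets across visits.
by_customer = {}
for cust, items in baskets:
    by_customer.setdefault(cust, set()).update(items)
print(by_customer["cust_1"])  # union of cust_1's purchases over time
```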
Merchandisers would probably benefit in other ways, too, from a more direct appreciation of behavior at the customer level. This isn’t exactly a unique insight—see Paco Underhill’s Why We Buy: The Science of Shopping for an impassioned and entertaining review of the importance of studying customer behavior. Of course, Underhill’s point is that you have to actually watch consumers to understand them; just looking at purchase data isn’t enough. He’s absolutely correct. But purchase data is the place to start, and merchandisers who don’t analyze purchases by customer are missing something important.
Thursday, February 01, 2007
Vista Makes It Easier to Build Ad Hoc Display Networks
In case you’ve been stuck under a rock or trapped on an American Airlines flight, Tuesday was the official launch of Microsoft Vista. I chose not to stand in line for a copy, but BusinessWeek tells me that Vista’s greatest consumer benefit will be easier access to digital content (“The Real Value of Vista”, BusinessWeek, February 5, 2007). One example they give is a wireless application that automatically discovers digital picture frames and sends them images to display.
I’ll freely admit that my first thought on reading this was, “There’s a perfect example of the Client X Client notion of slots.” OK, I’m obsessed. But my mental image of a slot is pretty much a rectangle floating in space waiting for content, so free-standing picture frames are about as close to that as you could come in the physical world. (Still closer: flexible LCDs that you can paste onto any surface—a technology that itself is apparently pretty close to realization.)
People probably are not interested in displaying advertisements on their personal picture frames. (Never say never: what about an anti-drunk driving ad in your teenager’s bedroom? Or how about subsidizing the cost of a big screen TV with ads sold by the set manufacturer? The set maker’s ads could override those included in the broadcast content, so the number of ads seen by the viewer wouldn’t increase.)
Even ignoring personal space, the picture frame technology could be adapted to digital signage in public and commercial locations. What’s different from existing digital signage capabilities (see the recent Cisco announcement) is the flexibility gained when new signs can be added automatically, simply by announcing their presence.
These signs could report their physical location via GPS sensing. The server would translate this into more meaningful information (in a store, near a road, etc.) by matching against reference data. A not very large step beyond that would be to sense the surrounding environment—using cell phone or RFID signatures to determine traffic volume and movement information (already used in automated traffic reporting), if not also specific individuals.
With all this information available, an intelligent system could automatically dispatch the most suitable content to each display. Different content providers might bid for each display, auction-style. Build a bit more intelligence into the display device, and it might even accept bids from several different servers. This would be in the device owner’s interest since revenues would be higher from a blind auction.
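Here's a toy sketch of how that might fit together. Every name and number is hypothetical, and a real system would obviously need security, payment, and content-delivery machinery on top of this.

```python
# Toy model of auction-based content dispatch for self-announcing displays.
# All identifiers and figures are hypothetical.

displays = {}  # display_id -> metadata reported at registration time

def register_display(display_id, location, est_traffic):
    """A new sign announces itself: no manual setup required."""
    displays[display_id] = {"location": location, "traffic": est_traffic}

def dispatch(display_id, bids):
    """Pick the highest bidder; bids maps provider name to $ per showing."""
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

register_display("frame-42", location="grocery, near entrance", est_traffic=1200)
winner, price = dispatch("frame-42", {"brand_a": 0.04, "brand_b": 0.07})
print(f"Showing {winner}'s content at ${price:.2f} per impression")
```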
Just to be clear: you could do all this today with fixed signs or other fixed locations such as ads on Web pages. What’s new is the ability to add new signs without any setup: they simply announce their availability and the server starts working with them. Of course, this requires more intelligent systems to figure out which content to send, and that's what we do at Client X Client.
Thank you, Bill Gates.