Wednesday, January 31, 2007
“IT Struggles to Show BI Value” reads the headline in this week’s Computerworld. The article itself says pretty much the same thing.
This spawns three quick thoughts:
- It’s not IT’s job to show the value of Business Intelligence. It’s the job of the users. If IT is pushing BI on users who don’t want it, then something is wrong.
- Even though BI is an indirect expense, users can and should still justify it based on improving customer value. I wrote about this in detail here back in September. Basically, BI can directly improve customer experiences or indirectly improve business economics.
- No, I don’t really think you can measure the precise impact of each decision on customer value. This points back to yesterday’s comment about the value of Mini Cooper’s personalized billboards. You can’t measure that precisely either. But you can at least describe in general how it would translate into increased customer value and make a back-of-the-envelope calculation to see whether there is any hope of recouping the investment. (In Mini Cooper’s case, the expected value was more purchases and referrals from more enthusiastic owners.) You can also assess whether alternative investments are likely to be more effective: if it cost the same, would it be better to have the personalized billboards, send every Mini Cooper owner a personalized Mini Cooper jacket, or use the RFID to pay their highway tolls for a week? Beats the heck out of me, but if you accept something like Net Promoter Score as a reasonable way to measure customer enthusiasm, this is now something you can test, not just a bunch of bright ideas to pick from randomly. (A rough sketch of that arithmetic follows the list.)
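To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an invented assumption (survey counts, referral rates, margins), not anything Mini Cooper has reported; the point is only that once you pick a metric like Net Promoter Score, comparing investments becomes arithmetic you can actually run.

```python
# Hypothetical back-of-the-envelope test: does an enthusiasm-building
# program have any hope of recouping its cost? All inputs are assumptions.

def net_promoter_score(promoters, passives, detractors):
    """NPS = percent promoters minus percent detractors (-100 to +100)."""
    total = promoters + passives + detractors
    return 100.0 * (promoters - detractors) / total

# Assumed before/after surveys of 500 participating owners each.
nps_before = net_promoter_score(promoters=200, passives=200, detractors=100)  # +20
nps_after = net_promoter_score(promoters=260, passives=180, detractors=60)    # +40

owners = 5000                  # participating owners (assumed)
sales_per_owner_point = 0.001  # extra referral sales per owner per NPS point (assumed)
margin_per_sale = 2500.0       # contribution margin per incremental car (assumed)
program_cost = 250000.0        # total program cost (assumed)

lift = nps_after - nps_before  # +20 points
expected_value = owners * sales_per_owner_point * lift * margin_per_sale
print(f"Expected value ${expected_value:,.0f} vs. cost ${program_cost:,.0f}")
# -> Expected value $250,000 vs. cost $250,000: roughly breakeven here.
```

The same skeleton works for the jacket or the toll-paying alternatives: hold the cost constant, estimate each option’s NPS lift, and compare.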
Tuesday, January 30, 2007
How Do You Justify a Personalized Billboard?
Yesterday’s New York Times reported on personalized billboard messages directed at Mini Cooper owners identified by RFID tags in their key fobs (“Billboards That Know You By Name”, January 29, 2007, Business Day, page 3). Although this is the first I’ve heard of such an application, it has been so obviously on the way that its actual arrival is barely worth noting. But it does raise some interesting issues.
I'm not talking about safety and privacy. Those are obvious, and the article raises and dismisses them. Since Mini Cooper owners sign up to participate, and not much beyond their name is displayed, it’s hard to consider the program invasive. The obvious fact that the same key fobs allow their movements to be captured for other purposes in other situations apparently doesn’t bother them either. One might suspect that people who drive Mini Coopers want to stand out from the crowd to begin with, so perhaps this is a particularly privacy-insensitive segment.
I’m actually more interested in the business justification for this program. In Client X Client terms, the personalized message on a billboard is a “slot” that can be filled with a variety of messages. Each slot has an optimal use with an associated financial value. Any use that generates less than that value is suboptimal. So the question is, given the opportunity to have so many people see that billboard, is the personalized message to a single Mini Cooper owner really the best use for it?
First of all, it’s important to clarify that the personalized messages only appear when a participating Mini Cooper is detected. This will be a tiny fraction of the time, so the billboards will usually display their conventional messages, which are worth whatever they’re worth. Thus the opportunity cost of the personalized messages is very low—probably close to zero, if you assume the standard message will be seen anyway by everyone who also sees the personalized message. In fact, a sign that sometimes changes may attract more attention, so the standard messages might actually be seen more even though they are displayed less. (The same technology could probably detect other RFID tags, and therefore the slots could be personalized with other messages, perhaps sold to the highest bidder. But let’s not worry about that right now.)
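Here is a toy version of that slot arithmetic, with all values invented for illustration; it just shows how little standard-message value the personalization displaces.

```python
# Toy slot math: personalization only displaces the standard message for
# the tiny fraction of time a tagged car is detected. Values are invented.

standard_value = 10.00      # assumed value of an hour of the standard message
personalized_value = 25.00  # assumed hourly-equivalent value of a personal message
personalized_share = 0.002  # assumed fraction of time a tagged Mini is present

mixed_value = ((1 - personalized_share) * standard_value
               + personalized_share * personalized_value)
opportunity_cost = personalized_share * standard_value  # displaced standard time

print(f"Standard only: {standard_value:.2f} per hour")
print(f"Mixed policy:  {mixed_value:.2f} per hour")         # 10.03
print(f"Displaced standard value: {opportunity_cost:.2f}")  # 0.02
```

And if everyone who sees a personalized message would have seen the standard one anyway, even that small displaced value overstates the true cost.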
What about the messages themselves? The article says Mini Cooper’s advertising agency hopes to “intensify the already strong ‘tribal’ feeling among Mini owners and stimulate their desire to support the brand.” In other words, it’s not about stimulating the owners’ purchases, but about making them more enthusiastic brand advocates. Big Brother connotations aside, the whimsical nature of the messages does match the light-hearted image of the Mini Cooper brand. They certainly should reinforce the owners’ attachment to it.
Beyond warm and fuzzies, the financial payoff would be measured in referrals—something that’s difficult to capture, but not impossible in the case of an auto purchase. There may also be some easily measured behavior changes among the customers themselves for things like using the dealers for service. Given the obscure nature of the actual messages, it’s hard to imagine they would have much impact on non-Mini Cooper owners. At best, some people with a pre-existing interest in Mini Coopers might be encouraged to explore further.
It probably doesn’t take many incremental auto sales to pay for the incremental costs of this program. (This assumes the costs are truly incremental; that is, that you’d rent those billboards anyway.) And it may take quite a while for the results to appear, since automobiles are not an impulse purchase. But my point is simply that even a soft program like this, aimed largely at brand image (and of course generating publicity), can be measured as a real investment and eventually judged accordingly. Whatever image Mini Cooper chooses to project, this is not just fun and games.
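For what it’s worth, the breakeven arithmetic is trivial once you guess at the two inputs; both numbers below are placeholders, not anything disclosed in the article.

```python
# Hypothetical breakeven: incremental sales needed to cover incremental cost.
incremental_cost = 150000.0  # RFID readers, creative, sign programming (assumed)
margin_per_sale = 2500.0     # contribution margin per incremental car (assumed)

print(f"Breakeven: {incremental_cost / margin_per_sale:.0f} incremental sales")
# -> Breakeven: 60 incremental sales
```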
Monday, January 29, 2007
Apple Chose Customer Experience Over Verizon for iPhone
I wrote last week about the challenge that retailers face when sharing a customer relationship with manufacturers. This morning’s USA Today reports that Apple—one of the most brand-savvy companies around—rejected an iPhone alliance with Verizon Wireless over this very issue. (“Verizon rejected Apple iPhone deal”, USA Today, January 28, 2007, available here.)
According to the article, Apple was unwilling to allow the iPhone at Verizon’s usual retail partners, such as Wal-Mart and Best Buy. In addition, “Customer care was another hitch: If an iPhone went haywire, Apple wanted sole discretion over whether to replace or repair the phone. ‘They would have been stepping in between us and our customers to the point where we would have almost had to take a back seat … on hardware and service support,’ [Verizon Wireless vice president Jim] Gerace says.”
It’s no surprise that a firm as obsessed with customer experience as Apple would want close control over how iPhone customers were treated. Still, it’s impressive that both Apple and Verizon would feel strongly enough to reject a major distribution deal over the issue.
Apple’s decision to open its own retail outlets shows the same dedication. (See this article at ifoAppleStore.com for a detailed history of Apple’s retail efforts.) Most personal computer manufacturers have avoided owning stores, particularly since Gateway Computer’s high-profile retail failure in 2004. Dell announced a limited retail trial in 2006, perhaps more a sign of desperation than anything else.
Only the strongest manufacturing brands can afford their own retail outlets. The very uniqueness of Apple’s product makes it easier to limit retail distribution without losing sales to competitors. The universal availability of products via the Internet also minimizes the negative impact of limited physical distribution. (If I can’t make it to an Apple Store, I can still buy the product on line.) Assuming that Apple does a good job servicing its on-line buyers, it can actually deliver a better and more controlled experience remotely than by relying on other companies’ employees to deliver it in person.
The real question is what manufacturers who cannot afford their own stores can do to improve the brand experience of customers who buy through third-party retailers. This is where product design and packaging become so important, to communicate to the customer that the manufacturer is ready and eager to help with any product questions or problems. Old-fashioned brand-building advertising also plays a major role in convincing consumers that the manufacturer really cares. Naturally this must be backed up by quality customer service when customers do call. For manufacturers that are unable or unwilling to deliver a quality experience, hiding behind the retailer’s brand may actually be a better approach.
Friday, January 26, 2007
Why RFPs Are Not Right for Everyone (and When You Still Need Them)
The traditional process for selecting software involves gathering requirements, embedding these in a Request for Proposal, sending the RFP to qualified vendors, and making a decision based on the replies. I don’t know where the process got started; I suspect in government procurement, but perhaps it was large corporate bureaucracies. In any event, RFPs are widely detested by vendors, who often do huge amounts of work without knowing whether their efforts will be rewarded or even taken seriously.
From the buyer’s perspective, a more powerful criticism is that the formalized RFP process prevents buyers and vendors from really understanding each other. Plus the process takes too long, and most buyers don’t understand their real requirements, anyway. (My friend Richard N. Tooker makes this case in his usual colorful fashion in his book The Business of Database Marketing. Another recent criticism is in Avinash Kaushik’s excellent Occam’s Razor blog here.)
As someone who has written dozens of RFPs over the years, I sometimes think I should feel defensive about these criticisms. But I don’t. It’s partly because a well-constructed RFP process avoids these problems. But it’s mostly because I myself avoid RFPs when other, less formal approaches can be more effective.
The actual approaches vary. Sometimes you can conduct a “bake off” by giving vendors a specified problem and actual data, and seeing how they solve it. This works when requirements are well understood—for example, when seeking a replacement for an existing solution. When requirements are not known, it may make more sense to first work with a demo or free trial system so you can learn about the issues before making a final choice (this is Kaushik’s recommendation for Web analytics). On the rare occasions when both vendors and requirements are familiar, you can simply select a product after informal meetings with the top candidates.
I feel a 2x2 matrix coming on. Looking at what I just wrote, it boils down to how well you know your own requirements and how well you (or your consultants!) know the vendors. A different selection strategy makes sense for each combination (a small sketch of the logic follows the list):
- Unknown vendors, unknown requirements: formal RFP (forces you to define requirements and thoroughly assess each vendor)
- Unknown vendors, known requirements: bake-off (lets you quickly understand key vendor strengths and weaknesses; still need additional research into vendor background, etc.)
- Known vendors, unknown requirements: free/trial/prototype installation (lets you learn about requirements before making a final selection)
- Known vendors, known requirements: informal selection (you can easily assess the leading candidates and still make a good choice)
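As promised, here’s the matrix reduced to a trivial lookup in Python; purely illustrative, but it makes the decision rule explicit.

```python
# The 2x2 selection matrix as a lookup table: (vendors, requirements) -> strategy.
STRATEGIES = {
    ("unknown", "unknown"): "formal RFP",
    ("unknown", "known"): "bake-off",
    ("known", "unknown"): "free/trial/prototype installation",
    ("known", "known"): "informal selection",
}

def selection_strategy(vendors: str, requirements: str) -> str:
    """Each argument is 'known' or 'unknown'."""
    return STRATEGIES[(vendors, requirements)]

print(selection_strategy("unknown", "known"))  # -> bake-off
```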
Of course, there is also a dimension of corporate culture: some companies, particularly the big ones, require a formal RFP process just because that’s how they do things. It’s reasonable for a big firm to act this way, since they must justify their decisions to senior executives and ultimately to investors, and because the stakes are so high that it’s worth some extra investment to avoid mistakes. (This assumes the more formal process actually avoids mistakes—not necessarily true, but it does improve the odds of success.)
On the other hand, a more formal process takes time, which can be very costly when you face an immediate need. In this case, it’s often best to start with an outsourced or hosted solution. This lets you get started quickly while leaving open the possibility of moving elsewhere later. It also gives time to learn more about your needs (moving quickly is usually a big issue for something new, not a replacement system) and to learn from the expertise of the interim solution provider.
I’ve used all four of the strategies in my little matrix at different times. The key point to remember is that you're simply trying to find a product that meets your needs. It doesn't have to be the perfect product or even the best product, just one that's good enough that you can get on with your business. Several of the alternatives probably meet this goal.
Although some organizations do indeed award points for the quality of your selection process itself, it's really the outcome—having a suitable system—that's important. Pick the fastest, simplest, cheapest process that finds a good system, and you'll have made the right choice.
Thursday, January 25, 2007
Aberdeen Report Cites Need for Customer Value Models
Aberdeen Group recently released “Creating a Customer-Centric Marketing Organization”, available here (registration required). The paper is based on a survey of senior marketing executives at 500 small and mid-sized businesses. It was designed to identify the marketing practices of best-in-class customer-centric companies and to determine whether those companies actually report better results.
No prize for guessing the right answer. They do.
Of course, the devil is in the details. Aberdeen found “companies that adopt closed loop marketing processes are more than three times as likely to report a greater than 50% return on marketing investment (ROMI) than those that do not”. It defines a “closed loop process” as one that “successfully brings together the right metrics, differentiating capabilities, and integrated and sophisticated technologies.”
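Aberdeen doesn’t spell out its ROMI formula, so here is one common definition with invented inputs, just to make the “greater than 50%” threshold concrete.

```python
# One common ROMI definition: (incremental margin - marketing cost) / cost.
# Both inputs are invented; Aberdeen's figures are self-reported survey data.
incremental_margin = 900000.0  # gross margin attributed to marketing (assumed)
marketing_cost = 500000.0      # total marketing spend (assumed)

romi = (incremental_margin - marketing_cost) / marketing_cost
print(f"ROMI: {romi:.0%}")  # -> ROMI: 80%, which would clear the 50% bar
```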
What Aberdeen really means by a closed-loop process will sound very familiar to regular readers of this blog. You know how I’m always droning on about the importance of customer value models? Well, it’s not just me: Aberdeen’s top two action recommendations are “define how to measure customer lifetime value models and preferred customer profiles” and “develop competencies around customer modeling, segmentation, product/channel mix and predictive analytics.” So there.
In fact, most of the paper’s five “key value findings”—structured marketing planning, customer information integration, customer profitability modeling, measuring the effectiveness of each interaction, and responding to customer behaviors with personalized and event-based interactions—mirror Client X Client themes. It’s not that Aberdeen has been listening to us (or vice versa), but that we’re all looking at the same situation and drawing similar conclusions.
Actually, one of the more interesting comments in the paper was another point that we often discuss within Client X Client, but rarely raise in public: that the technology needed for competent customer experience management is often already in place. The real barrier is knowing how to connect the pieces and what to do with them. Or, as Aberdeen puts it: “Lagging and average companies are not ill-equipped with technology products, rather they lack the integration and sophistication to realize higher results.”
I do have some complaints about the paper. It’s a bit more focused on outbound marketing than I’d like, the scope is largely limited to marketing rather than all customer experiences, and it’s hard to understand some of their figures. But it will provide good support for marketers trying to take their company to the next level of customer management, and that’s what really counts.
Wednesday, January 24, 2007
Nuances in Customer Experience Management Methodologies (Is that a catchy title or what?)
I’ve been looking at customer experience management methodologies recently. All seem similar at first, but after a while you begin to pick up on the nuances that distinguish them. Here are some of the differences I’ve noticed.
- focus on function vs. emotion. The general idea of customer experience management boils down to identifying and meeting customer needs. Presumably these include both functional and emotional needs, but some methodologies make a particular point of stressing the emotional aspect of experiences. The notion is that, given reasonable functional competency, it will be the emotional components that make an experience memorable. The counter argument would be that frequent small advantages in functional experience add up over time to a positive customer attitude. Personally, I lean to the latter opinion: since most day-to-day transactions (buying groceries, paying a phone bill, making a bank deposit, shipping a package) have little emotional content, I suspect that operational excellence is usually more important than anything else. Of course, in the handful of truly emotional or stressful situations—say, going to the doctor (but not dentist) or a family vacation (but not a business trip)—focusing on the emotional component may be correct.
- focus on ‘moments of truth’ vs. all experiences. This is similar to the function vs. emotion argument, in that it singles out some experiences as particularly critical. My initial reaction is somewhat the same: it’s important to get everything right. But even within that framework, it’s worth figuring out which portions of the experience really do matter the most to customers and giving them extra attention. Back to banking: it’s nice that my local branch usually has a little plate of cookies and coffee available, but is the quality of those refreshments itself a ‘moment of truth’? Surely it’s more important that the teller process my transactions quickly and correctly. To make the comparison a bit more realistic: let’s say I’m sitting with a bank officer opening a new account and there is something she doesn’t know how to do. Will I consider it a better experience if she offers me a cookie while she tries to figure it out, or she quickly gets help from someone more knowledgeable? I’d vote for the quicker functional resolution because that to me is the real ‘moment of truth’. Unless it’s a really good cookie.
- distinguish static experience components (advertising, signage, store layout) from interactive components (telephone calls, customized Web pages). “Static” may not be quite the right word: what I’m getting at is experiences that can’t be customized or personalized and are therefore the same for everyone. “Interactive” experiences, at least as I’m using the phrase right here, are those that can be tailored to the customer. Some people (specifically, Customer Experience Management author Bernd Schmitt, though there may well be others) define the static components as “brand experience” and the interactive components as “customer interface”. The question I’m asking is, should they truly be considered separately? It’s getting harder to think of truly static experiences these days—even in-store signage is increasingly changeable (see, for example, a recent eWeek article “Cisco eyes digital signage”): it can easily be adjusted for local conditions, and tailoring to individual customers, as in the movie Minority Report, can’t be far behind. But, more important, the distinction is irrelevant to the customer, who certainly considers both static and interactive components as part of her experience without distinguishing between them. The argument for the distinction is probably one of convenience: static and interactive elements tend to be planned and managed separately, so it makes sense to analyze them separately. But I think this increases the danger of inconsistencies within the customer experience, which are a Very Bad Thing. So I’d argue it’s better to fight against the “natural” division than to accentuate it.
- provide the same services for all customers vs. only each customer’s high-value services. This is another one that’s hard to encapsulate in a headline. The idea is that some customers don’t really care about some “standard” services, so a company can save money by not providing them to those customers. On one level this is just another way to describe customization, and who could argue against that? Looked at differently, though, it seems to propose a reduction in service levels, which is a little less attractive. But not doing things customers don’t care about really can give them a simpler, and therefore better, experience. So this is a distinction I’d keep. The trick is, of course, to find ways to identify the superfluous experiences and also to be sure that removing them for some people isn’t actually more expensive than keeping them for everyone. Of course, if it truly improves the customer experience, then removing them could be worthwhile even if total cost goes up.
- expected vs. actual experiences. This one really intrigues me. To some extent, it mirrors the static vs. interactive distinction: expectations are created through advertising and other indirect (static) exposures to a company, often before someone becomes a customer, while actual experiences are delivered during direct interactions. But there’s also a notion here that customers have a set of expectations derived from both direct and indirect contacts, and that any failure to deliver on those expectations can cause a problem. A corollary point is that inconsistent experiences can be even worse than consistently bad experiences, because they are more frustrating. To use an example provided by a friend of mine in customer service, if customers know to expect long call center hold times, they will adjust by calling only when it’s really important or when they have time to wait; but if hold times vary from instance to instance, they’ll call more often and be very annoyed on those occasions when the wait is long. (And, of course, they’ll forget the times when the hold was short.) In sum: I generally consider advertising and similar contacts to be part of the over-all customer experience, rather than in a separate category of expectation-creating experiences (or, perhaps, “brand experiences”). But I agree that identifying customer expectations and comparing them to actual experience is important for managing customers effectively.
Of course, you can agree or disagree with me on any of these issues. What’s important is that you think about them and apply whatever makes sense in your own business situation.
Tuesday, January 23, 2007
Customer Experience Frameworks
One of the interesting things about writing this blog is seeing how visitors find it. I use a simple page-tagging service called StatCounter that tells me which Web page visitors come from. Often these are Google searches for terms that have been used in my posts. When I have a few minutes to spare, I often click on those searches myself to see what else is coming up.
Yesterday someone did one of those searches on “customer experience framework”. (The phrase was in quotes if you want to run the same search yourself.) I was pleased to see both this blog and the Client X Client Web site www.clientxclient.com. Some of the other interesting hits were:
QCi
http://qci.co.uk/public_face/Module.asp?Name=Introduction%20to%20CMAT&Ref=47
Secor Consulting
http://www.secorconsulting.com/documents/Customer%20Experience.pdf
Diamond Consultants
http://www.diamondconsultants.com/PublicSite/ideas/perspectives/downloads/MNVO.Diamond.Single.pdf (see page 19 in particular)
Audrey Carr blog
http://audreycarr.ca/?p=10
Beyond Philosophy
http://www.beyondphilosophy.com/ourtoolsandtechniques/index.html
IBM
http://www-935.ibm.com/services/us/index.wss/ibvstudy/gbs/a1024240?cntxt=a1005261
Peppers & Rogers Group
http://www.pb.com/bv70/en_us/extranet/contentfiles/editorials/downloads/ed_wpaper_silent_css_PeppersRogers.pdf
I won’t comment on these designs in detail. But in general, it’s interesting to see the variety of approaches being taken. Most frameworks are basically ways to organize information about the components of customer experience. Others describe the process of designing a company’s target experience. (Google found several additional frameworks focused specifically on designing Web experiences, but I’ve removed those from the list.) Still others capture the actual sequence of events in a customer life cycle. For what it’s worth, Client X Client’s Customer Experience Matrix falls into that last category.
Monday, January 22, 2007
Sterling Commerce Focuses Retailers on Total Customer Experience
Sterling Commerce, which provides software for sharing data across organizations, recently (well, last August) published “The Four Rules for Ensuring Customer Loyalty in a Competitive Retail Climate” available here (registration required).
You won’t be surprised that all four rules relate to sharing data. But this perspective does lead past the usual corporate boundaries toward the complete customer experience. In particular, “New Rule #1” (“It’s still all about the product”) recommends that retailers partner with other companies to expand their offerings. This lets them increase sales without investing in added inventory or services. “Consumers don’t really care how many partners or systems the retailer is using... They only want to deal with one entity—the retailer of choice...the customers expects [sic] the retailer to make the whole experience seamless.”
This is a more radical position than it may seem. Although Sterling’s immediate point is simply that retailers can sell more things if they work with partners, the paper is really proposing that the retailer accept primary responsibility for the entire customer experience for the products it sells. What makes this radical is that the manufacturer would traditionally take that role. If the product breaks, the retailer should fix or replace it; if the customer has questions, they should call the retailer, not the manufacturer or installer. Taken to its logical extreme, this implies the retailer would be responsible for advertising the products (to set appropriate customer expectations) and would participate in design and development as well (to ensure quality and usability).
I’m not pointing out these extensions to ridicule the notion. Quite the contrary: I think it’s a great idea. One example that Sterling uses is construction of a home theater. Who better than an electronics retailer to take over the entire process of picking components, installing them, training users, servicing and ultimately updating the system over time? Perhaps there could be a regular replacement program similar to new-every-two-year cell phone or auto leasing contracts. Why shouldn’t the theater be branded by the retailer rather than the makers of the individual components? Some of this already happens: when I recently bought a High Definition TV, the salesperson ordered an upgrade to my cable service right from her sales terminal. But a more formal and comprehensive program would bring in additional sales during the initial installation, improve the chances of adding lucrative service contracts, and ensure repeat business.
The flip side of this is that the store would be taking a bigger risk. If a product failed, the store would be responsible for ensuring customer satisfaction even if the manufacturer did not. And as the store took over responsibility for more elements of the customer experience, there would be more things to fail. This would force the store to be more careful about the products and service suppliers it partners with. That’s a great thing from a consumer point of view but a bit frightening for the store. Of course, we can expect the added risk and quality control overhead to be built into prices, so presumably consumers would have the choice of whether to pay for the smoother and simpler experience. (In fact, they already have this choice in home theaters, since they can purchase from specialist firms who offer exactly this sort of comprehensive service.) Stores would also likely be careful about which products they absorbed in this fashion: it makes sense for home theaters, but probably not for toasters.
In case you’re wondering, the other three rules are valid if less stimulating. Rule #2 stresses the importance of cross-channel integration, such as in-store pickup of online orders. Rule #3 recommends better in-store inventory systems to avoid apparent stock-outs (when the inventory actually is present but no one on the sales floor knows about it). Rule #4 offers the global advice to “Create a WOW experience” by creating “sustainable advantage...based on adaptability” through a “foundation for constant improvement.” That sounds intriguing but it quickly descends to the rather mundane example of cross-channel returns.
Friday, January 19, 2007
Eight Foot Geese and Customer Experience Management
I saw a flock of eight foot tall geese yesterday. It was awesome.
They were waddling majestically (when you’re eight feet tall, even waddling is majestic) in front of a row of thirty foot pine trees. I caught them in the corner of my eye, and they didn’t register at first because I was in the midst of an intense business discussion. Then I realized what I’d seen and looked closer—and of course the trees were just ten feet high and they were regular three foot geese. I had been fooled because there were no human-scaled objects—benches, cars, people—nearby, so there was nothing to frame my impression.
There’s certainly a business lesson in that story. Maybe it's about the importance of metrics, or putting things in perspective, or keeping real customers in view. Or knowing the context or keeping your focus. Perhaps I’ll use it one of those ways some day. But for now, just close your eyes and imagine a slow-moving flock of giant geese in front of some pine trees.
It’s awesome.
Thursday, January 18, 2007
IBM Paper Makes Emotion the Focus of Customer Experience Management
Somehow a paper published in 2005 by IBM’s financial services CRM group, “Creating a 20/20 customer experience: From customers to advocates,” recently found its way to my desk. (Click here for a copy.)
The paper has plenty of the usual buzz words (customer advocacy, moments of truth, brand promise) but what stands out is a relatively unfamiliar (to me at least) claim that “emotive attributes…present the greatest opportunity for differentiation in the marketplace.” The fundamental argument is that most banks do a good job with operational execution, so emotional connection is the next battleground.
This struck me as an interesting proposition, even though I’m not sure I agree. So let’s look a little closer.
The paper starts out by defining the feelings a customer can have about a bank, ranging from antagonist to advocate. It doesn’t explicitly mention Net Promoter Scores but the notion is lurking in the background. (I commented briefly on Net Promoter Scores here, and they’ve also come up in Ron Shevlin’s Marketing ROI blog and Adelino de Almeida’s Profitable Marketing.) The paper then argues that the most effective way to create advocacy is through emotional attributes, and poses the challenge of building these into everyday bank experiences. (I’m stripping out the jargon. The actual quote is “How do banks formalize and operationalize the advocacy-building, higher-order emotive attributes, such as dignity or empathy, as promised by their brands?”)
From there, the paper goes on to discuss five “dimensions” of “customer experience expectations”, including industry baseline expectations, brand-specific expectations, importance to the customer, customer effort required, and emotional impact. All this leads to the proposition that the only really important interactions are those that “leave a lasting impact and change a particular customer’s attitude toward the company.” These are the “moments of truth” and should be the focus of a bank’s efforts.
The paper immediately pulls back from this position with a “note of caution”. “Even the most unimportant or emotionally irrelevant interaction can be soured into a disaster if there is unexpected rude treatment or a fundamental breakdown in delivery...basic delivery against brand-specific and baseline exceptions [sic; I think they meant expectations] is absolutely critical…many banks would be vastly improved by making progress on just baseline and brand-specific delivery.”
But its heart still belongs to those emotive moments of truth. It even defines “customer experience” as “the impact that certain interactions make that create a lasting feeling or attitude toward a bank.” (Italics in the original.) I’d venture that most customer experience management gurus would include all interactions in their definition. I know that I do.
The paper then works through the implications of its position. These are essentially that banks need to understand which experiences will have lasting impacts and should transform their operations to deliver them consistently. Readers get a handy framework with six dimensions (emotive attributes, rational attributes, customer segments, interactions, channels & touchpoints, products & services) and seven competencies (customer interaction design, integrated channels, event-driven communications, human performance, segment-influence operations, virtual interaction delivery, and innovation). If you’re not quite up to implementing this for yourself, IBM would be happy to help.
It’s hard to know what to make of this. Yes, emotion is an important element of the customer experience. But this isn’t news today and wasn’t news in 2005. Perhaps customer relationship management has largely focused on “rational” operational functions, but, as the paper itself concedes, you have to get those right before anything else matters, and plenty of organizations still don’t. That being the case, calling attention to emotion-intensive “moments of truth” is like a campaign to get drunk drivers to use their seatbelts: it might sometimes help but doesn’t address the main problem. It’s not so much wrong as it is distracting.
Wednesday, January 17, 2007
Still More on Lifetime Value Models
A comment on yesterday’s post on lifetime value models makes the perfectly reasonable suggestion that people “develop a quick and dirty model, see what it can (and can’t) do for you... and iterate to the next level.” I made the foolish mistake of replying before my morning cup of coffee (not Starbucks, of course. But, come to mention it, signs all over my gym this week proclaim that they now “proudly brew” Starbucks coffee. I took this as aimed at me personally. The only thing missing is creepy posters with eyes that follow you everywhere. I digress.) In a caffeine-deprived state of orneriness, I replied that different types of lifetime value models use different technical approaches, so learning to build simple models may not teach very much about building complicated ones.
Whether or not this is correct, it does seem to contradict my comment in the original post that “there is really a continuum, so one type of model can evolve into another.” So I suppose a closer look at the topic is in order.
To focus the discussion a bit, note that I’m mostly concerned here with model complexity (the computational methods used to calculate lifetime value) rather than model scope (the range of data sources). Clearly data sources can be added incrementally, so continuity of scope is simply not in question. But can model complexity also grow incrementally, or do simpler techniques—say, what you can do in a spreadsheet—eventually run out of steam, so that you must switch to significantly different methods requiring different tools and user skills?
I still think the answer is yes: simpler techniques do run out of steam. And I say this based on considerable experience. I’ve built many simple lifetime value models over the years, actually dating back to the days before spreadsheet software, when we used real paper spreadsheets—huge, green analysis pads with tiny grids that spread across an entire desk. Ah, those were the days, and we didn’t have any of your fancy Starbucks coffee then either. Again I digress.
The point is, a model using a spreadsheet, whether paper or electronic, can only get so complex before it becomes unwieldy. In particular, it gets difficult to model contingent behaviors: customers who act differently depending on their past experiences. The best you can do on a spreadsheet is divide the original group of customers into increasing numbers of segments, each with a different experience history. But the number grows exponentially: if you had just seven customer experiences, each with three possible outcomes (good, bad, indifferent), that would yield 2,187 segments. And it’s really worse than that, because people can have the same experience more than once and you need to model multiple periods. Trust me, you don’t want to go there—I’ve tried.
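If you want to see the explosion for yourself, here is a quick sketch in Python. It is purely illustrative, using the same counts as the example above:

```python
# Illustrative only: how experience-history segments multiply in a
# spreadsheet-style lifetime value model.
from itertools import product

OUTCOMES = ["good", "bad", "indifferent"]
N_EXPERIENCES = 7

# One segment per unique history of outcomes across seven experiences.
segments = list(product(OUTCOMES, repeat=N_EXPERIENCES))
print(len(segments))  # 3**7 = 2187

# Let each experience recur over, say, four periods and the count explodes
# far beyond what any spreadsheet can hold.
print(3 ** (N_EXPERIENCES * 4))  # 22,876,792,454,961
```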
The next step is to use some sort of database and programming language. You can get pretty far with this sort of thing—I’ve done some of these as well—but it takes a whole different level of skill than using a spreadsheet. Most business analysts don’t have the training, time or inclination to do this sort of development. Even if you have one who does, it’s not good business practice to rely on their undocumented, not-necessarily-well-tested efforts. So at this point you’re looking at a real development project, either for an IT department, advanced analytics group (i.e., statisticians) or business intelligence staff. Certainly if you’re going to use the model in an enterprise reporting system, such as measuring the results of customer experience management, you wouldn’t want anything less.
But, as I hope I’ve convinced you in the past few days, a model accurate enough to guide customer experience management has to incorporate contingencies and other subtle relationships that capture the impact of one experience on future behaviors. It would be very tough for an in-house group to build such a model from scratch. More likely, they’d end up using external software designed to handle such things. Acquiring the software and building the models would indeed take many months. It would probably result in scrapping any internally-built predecessor systems, although the data gathering processes built for those systems could likely be reused.
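To give a flavor of what such software does, here is a toy simulation in Python. Every probability and dollar figure below is invented for illustration; a real model would estimate them from customer-level data:

```python
# A toy Monte Carlo sketch of a contingent lifetime value model.
# All rates and dollar amounts are invented, not benchmarks.
import random

random.seed(42)

BASE_RENEWAL = 0.80     # renewal probability with no service problem
PROBLEM_RATE = 0.15     # chance of a service problem in any given year
PROBLEM_PENALTY = 0.25  # drop in renewal probability after a problem
ANNUAL_MARGIN = 120.0   # contribution per customer per year
DISCOUNT = 0.10         # annual discount rate
MAX_YEARS = 10

def simulate_customer():
    """Return the net present value of one simulated customer lifetime."""
    value, had_problem = 0.0, False
    for year in range(MAX_YEARS):
        value += ANNUAL_MARGIN / (1 + DISCOUNT) ** year
        if random.random() < PROBLEM_RATE:
            had_problem = True  # the contingency: an experience changes future behavior
        renewal = BASE_RENEWAL - (PROBLEM_PENALTY if had_problem else 0.0)
        if random.random() > renewal:
            break  # the customer defects
    return value

values = [simulate_customer() for _ in range(100_000)]
print(f"average lifetime value: ${sum(values) / len(values):,.2f}")
```

Rerun it with PROBLEM_PENALTY set to zero and the average value jumps; that difference is exactly the “impact of one experience on future behaviors” that a spreadsheet segment table cannot carry.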
In the sense of pure technology, therefore, I do see three pretty much discontinuous levels to lifetime value modeling. (I can actually think of other approaches, but they have limited applications.) The simpler techniques have their uses, but can’t support metrics for enterprise-wide customer experience management. That you can’t “start simple” to support an application isn’t unusual: think of Amazon.com customized book recommendations, which are only possible if there’s a sophisticated technology to back them up. Or consider just-in-time manufacturing, which requires a sophisticated enterprise resource planning system. Even Web search engines are useless if they don’t meet a minimum level of performance. Or…you get my point.
But customer experience metrics are just one use for lifetime value models. Plenty of other applications can be supported with simpler models. That’s what I wrote about yesterday. A logical corporate evolution would be to start with a simple value model and add complexity over time. Eventually the model becomes so cumbersome that it must be replaced with the next type of system. I suppose this scenario resembles what biologists call “punctuated equilibrium”: things grow slowly and incrementally for long periods, and then there is a sudden burst of major change. It may relate to similar concepts from chaos theory. Makes a good title for a conference presentation, at any rate.
So I guess I was right both yesterday and today (somehow you knew I’d conclude that, didn’t you?). Companies can indeed evolve from one lifetime value model to another, even though the modeling techniques themselves are discontinuous.
This has some interesting management implications: you need to watch for evidence that you are approaching a conversion point, and recognize that you may need to shift control of the modeling process from one department to another when a conversion happens. You may even decide to let several different levels of modeling technology coexist in the organization, if they are suitable for different uses. Of course, this opens up the possibility of inconsistencies—“different versions of the truth”—familiar from other business intelligence areas. But trying to capture every aspect of an organization in one “great model in the sky” has drawbacks of its own. There are no simple solutions—but at least understanding the options can help you manage them better.
Tuesday, January 16, 2007
Types of Lifetime Value Models Part 2
Yesterday I listed four types of lifetime value models, based on combinations of complexity (simple/complex) and scope (partial/full). Each has its own characteristics:
- simple, partial. This is the simplest of all possibilities. Inputs describe just one type of activity—say, purchases—and the calculations themselves are simple. Such models are quite common. They include, for example, a typical formula that calculates lifetime value as revenue per year x years per customer (see the sketch after this list). Because of their simplicity, they are largely limited to general strategic guidance on broad questions such as “what’s a new customer worth?” In addition, watching the input measures for changes can give hints of problems or opportunities worth exploring, but the measures are so general that significant shifts in underlying components are easily hidden. (For example, a small but important shift in product mix might not have much impact on the overall numbers.) This sort of model is a reasonable place to start but doesn’t get you very far. To be of real use, it must be backed by a richer set of drill-downs into the formula components.
- simple, full. This uses a wider variety of inputs but still with simple calculations. An example would be a lifetime value model built from a customer-level profit and loss statement, with different inputs for acquisition cost, revenue, bad debt, cost of goods, fulfillment, and so on. Again, such models are quite common, particularly among financial analysts who are used to the profit and loss statement format. The richer set of inputs allows more detailed outputs including cash flow projections and net present value estimates. To the extent that the input components map to specific experience types (prospecting, initial purchase, product use, service, etc.), these models can provide some insight into the relative cost and value of each experience. But the simplicity of the model’s formulas still prevents any exploration of subtle relationships among these experiences. For example, even if you know that customers who had a service problem renew at a lower rate than those who did not, this model cannot include that relationship. The use of these models is therefore still largely strategic, although they can extend beyond simple insights to include business planning.
- complex, partial. This uses more sophisticated formulas against a limited set of inputs. An example would be a system that uses detailed projections of sales by product line, married to fixed assumptions about acquisition, fulfillment and service costs. Such systems can help whatever department is providing the detailed input—perhaps, in the product sales example, to better understand cannibalization and cross-selling. They have less value to the company as a whole. In fact, they can be harmful if they miss important relationships: say, between a change in product mix and increased service costs. Without knowing that relationship, a revenue-based system might lead the sales people to push products that actually lose money because of higher service expense.
- complex, full. This is obviously the most comprehensive type of model, and also the most useful. By definition, it does capture the subtle relationships among different experiences. In Customer Experience Matrix terms, it fulfills the goal of “monetizing” each experience by calculating its net impact on customer value, including the downstream impacts on other experiences. Some forms of these models are built with deterministic formulas, although the best approach is probably agent-based simulations. The real challenges with these models are assembling all the data inputs, standardizing them, integrating by customer, and analyzing them to uncover their relationships. Once built, the models can be used in many ways at both departmental and corporate levels: strategic planning; investment and resource allocation; forecasting; and tactical decision-making. Like any lifetime value model, they are most useful when frequently supplied with fresh data so they can be rerun to identify trends and deviations from expectations.
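As promised, here is a minimal sketch of the two “simple” types in Python. Every figure is a placeholder; the point is the shape of the arithmetic, not the numbers:

```python
# Minimal sketches of the simple/partial and simple/full model types.
# All inputs are placeholders for illustration.

# Simple/partial: the classic one-line formula.
revenue_per_year = 500.0
years_per_customer = 4
ltv_simple_partial = revenue_per_year * years_per_customer
print(f"simple/partial LTV: ${ltv_simple_partial:,.2f}")

# Simple/full: a customer-level profit and loss statement,
# discounted to a net present value.
acquisition_cost = 150.0
discount_rate = 0.10
annual_pnl = {  # per-customer, per-year line items
    "revenue": 500.0,
    "cost_of_goods": -200.0,
    "fulfillment": -50.0,
    "bad_debt": -15.0,
    "service": -35.0,
}
annual_margin = sum(annual_pnl.values())
ltv_simple_full = -acquisition_cost + sum(
    annual_margin / (1 + discount_rate) ** year
    for year in range(years_per_customer)
)
print(f"simple/full LTV (NPV): ${ltv_simple_full:,.2f}")
```

Notice what even the fuller model cannot do: the service line is a flat cost, so a service problem has no way to feed back into renewal behavior. That feedback is what separates the complex types just described from the simple ones.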
This brings us back to the question I started with yesterday: How detailed must a lifetime value model be to be useful? As we’ve just seen, each type of model is good for something, but my original question was in the context of helping with customer experience management. Perhaps disappointingly, the answer has to be that only complex, full-scope models can capture the long-term consequences of an experience. The top-level result can still be a single, simple figure for lifetime value, but it must be backed up by sophisticated calculations on detailed data to be meaningful. Significant shortcuts are probably not going to work.
This should certainly not be an insurmountable barrier, either to lifetime value modeling or customer experience management. As we’ve already seen, companies can gain a great deal of utility from less demanding lifetime value models. Starting with those also helps them build understanding and expertise in creating the more advanced models. Even though the 2x2 matrix suggests a sharp delineation between the categories, there is really a continuum, so one type of model can evolve into another. In terms of customer experience management, simple lifetime value models can help train people throughout the enterprise to focus on lifetime value, even if they can’t yet connect it directly to experiences. Conversely, experience-oriented management provides many benefits even without lifetime value measurements. In short, companies can build both their experience management and value modeling skills in parallel, and connect them later when both have reached suitable stages of maturity. What’s important is to recognize that this connection must eventually occur, and plan for it during the early stages of the process.
Monday, January 15, 2007
Types of Lifetime Value Models
Consultants love 2x2 matrices. So in organizing my thoughts on the topic of lifetime value modeling, it’s natural that I ended up building one.
The question I’m wrestling with is, just how detailed must a lifetime value model be to be useful? This is raised by my claim last Friday (here) that lifetime value is the essential measure needed to manage customer experience. My logic in a nutshell: the only way to judge whether an experience change is working is whether it improves lifetime value. Nothing else really counts.
You may or may not agree, but let’s assume that’s true for the sake of discussion. The question then becomes, what does it take to build a lifetime value model that’s adequate for the purpose? Actually, the answer is obvious: the model must be able to estimate the impact of any experience change on final lifetime value. But this just leads to the equally obvious observation that “any” experience is too broad a goal, and what we really need is to make tough choices about which experiences to include or exclude.
Now, being a consultant, I don’t make tough choices unless someone pays me a lot of money. But 2x2 matrices? Those you can have for free.
So let’s think about lifetime value models in two dimensions. The first is complexity. This ranges from simple to, um, complex. A simple model would be something you could do with math functions in a spreadsheet. A complex model involves polynomial formulas and multivariate regression and such. It can incorporate many more factors than a simple model and allows for subtle relationships among them. In practical terms, a simple model is something a business analyst can create while a complex model needs a statistician.
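For a concrete sense of the dividing line, here is a sketch in Python using NumPy. The data is fabricated; the point is that estimating how several experience variables jointly drive renewal is statistician territory, not a spreadsheet formula:

```python
# Sketch of the "complex" end of the dimension: a multivariate estimate of
# how experience variables drive renewal. All data is fabricated.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Fabricated experience history for each customer.
service_problems = rng.poisson(0.3, n)   # count of service problems
purchases = rng.poisson(2.0, n)          # purchases during the period
hold_minutes = rng.gamma(2.0, 2.0, n)    # average call-center hold time

# A made-up "true" relationship generates the renewal outcomes.
logit = 1.5 - 0.8 * service_problems + 0.3 * purchases - 0.1 * hold_minutes
renewed = rng.random(n) < 1 / (1 + np.exp(-logit))

# A linear probability model is enough for a sketch: ordinary least squares.
X = np.column_stack([np.ones(n), service_problems, purchases, hold_minutes])
coefs, *_ = np.linalg.lstsq(X, renewed.astype(float), rcond=None)
labels = ["intercept", "service_problems", "purchases", "hold_minutes"]
for label, coef in zip(labels, coefs):
    print(f"{label:>17}: {coef:+.3f}")
```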
The second dimension is scope. This indicates which experiences are included in the model and ranges from partial to full. A partial model might include only one experience such as acquisition or renewal, while a full model would include all experiences from prospecting through product use to customer service. In general, experiences map to business functions (marketing, sales, service, operations, etc.) which in turn map to departments. So even though what we really care about is experiences, we can think of a partial model as dealing with activities in one or several company departments, while a full-scope model deals with all departments. This departmental orientation makes sense because the input data will usually be held in departmental systems. So expanding the scope of a model will usually be done on a department-by-department basis.
Now we have a nice 2x2 matrix, with four types of models: simple/partial, simple/full, complex/partial and complex/full. What use can you make of each type? I think I’ll save the answer for tomorrow.
Friday, January 12, 2007
Choosing Metrics for Customer Experience Management
I spent some time yesterday researching metrics for customer experience. This was far from idle curiosity, since one of our premises at Client X Client is that businesses should focus on the single measure of customer lifetime value. I wanted to see how other people approach the problem.
Our position turns out to be unusual. Of course, many people mention lifetime value, but usually as one in a long list of measures. I’ll come back later to why we disagree, but first let’s look at what other people propose. I found the measures fall into several basic categories:
- operational measures such as Web site response time, call center hold time, number of upsell offers made, number of emails sent, and so on. These measure company-controlled activities and the quality of the experience provided to the customer. Mystery shopper scores and other pure “experience” measures would also fall into this category, since they measure what the company does rather than how customers respond.
- behavior measures such as store visit length, shopping cart abandonment rates, complaint rates, and call center hold queue abandonment rates. These measure customer activities and are often closely related to the company’s operational performance.
- results measures such as conversion rates, attrition rates, and share of wallet. Like behavior measures, these measure customer activities. But they have direct financial values and thus can be tied into a lifetime value calculation.
- attitudinal measures such as customer satisfaction scores, net promoter scores, and customer comments. These are indirect rather than direct measures, and as such must be treated carefully because people often say one thing and do another. But they are useful because they can summarize the consequences of the many different experiences that customers receive.
It’s easy to say that all these measures are important. But it’s not so easy to collect and interpret them all. In fact, it’s overwhelmingly difficult. Down at the tactical level—which is where businesses make and lose money—you will need to focus on specific operational and behavior measures for specific projects such as Web site optimization or IVR deployment. In addition, you should continuously track at least a few operational and behavioral measures as leading indicators of business results. But if you try to track too many of them, they just become background noise.
The trick is to place the operational, behavioral and attitudinal measures in context by linking them to results. This is where we get back to lifetime value: it is the one ultimate result. All other result measures are just contributors to lifetime value.
Think about it. Knowing the relationship between, say, Web site load times (an operational measure) and conversion rates (a results measure) may let me predict the impact on conversion of an investment that would improve response time, or the impact of letting response time deteriorate. But to understand what the change in conversion rates is worth, I have to relate it to a dollar measure—that is, to lifetime value. (Yes, I could use a simpler measure, like revenue. But I really want to consider the other implications, such as the impact of conversion rates on long-term retention rates.)
The benefit of relating everything to lifetime value is that you can now compare wildly different decisions: do I upgrade the Web site or install that new IVR? Knowing that one will improve conversion rates by 5% while the other will cut hold times by 10% isn’t particularly helpful. Knowing that one will change lifetime value by $1 million and the other by $2 million clarifies matters quite nicely. (Obviously we’re talking about the aggregate lifetime value of all customers—a topic with nuances of its own.)
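Here is that comparison reduced to arithmetic, as a hedged sketch in Python. Every sensitivity and dollar figure is invented; in practice they would come from a fitted lifetime value model:

```python
# Sketch of comparing unlike investments on a common lifetime value scale.
# All base figures and sensitivities are invented for illustration.

CUSTOMERS = 200_000
BASE_LTV = 400.0  # assumed average lifetime value per customer today

def ltv_impact(metric_change, ltv_sensitivity):
    """Aggregate lifetime value change for a fractional metric change.

    ltv_sensitivity is the fractional LTV change per unit of metric change,
    the kind of coefficient a fitted lifetime value model would supply.
    """
    return CUSTOMERS * BASE_LTV * metric_change * ltv_sensitivity

# Option A: a Web site upgrade that lifts conversion rates by 5%.
web_gain = ltv_impact(metric_change=0.05, ltv_sensitivity=0.50)

# Option B: a new IVR that cuts hold times by 10%.
ivr_gain = ltv_impact(metric_change=0.10, ltv_sensitivity=0.12)

print(f"Web site upgrade: ${web_gain:,.0f}")  # $2,000,000
print(f"New IVR:          ${ivr_gain:,.0f}")  # $960,000
```

Under these made-up assumptions the Web upgrade wins; change the sensitivities and it might not. The point is simply that both options now sit on the same dollar scale.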
Similarly, at a tactical level, if I’m watching trends in a bunch of operational measures and several start to falter, knowing the impact of each change on lifetime value lets me determine which to address first. This is how you avoid being overwhelmed by too many operational and behavior measures: capture the trends on as many as you like, but only highlight the ones with a substantial business impact.
Now you see why Client X Client considers lifetime value so important. But if we’re right, the implication is that you need a lifetime value forecasting model with all the underlying formulas that connect changes in other measures to the lifetime value figure. This type of model isn’t easy to build. The need for it is the reason I spend so much time looking at things like marketing mix models, agent-based models and multivariate testing systems. From what I see, the techniques to build the lifetime value forecast models are indeed available even though they are rarely used in quite this way. Yet I would argue that customer experience management can only succeed as an enterprise management tool if it has a meaningful measurement regime. So building these models is a critical requirement for CEM’s future development.
Thursday, January 11, 2007
Customer Experience Management May be Stuck in the Chasm
Enough already about brands! Somehow I’ve gotten sidetracked onto that topic, which is so inherently fascinating that it’s tough to give up. As a parting shot, I’ll cite Apple’s conflict with Cisco over the “iPhone” name as more proof that customers have relationships with brands, not companies. For example, note Cisco general counsel Mark Chandler’s statement “The action we’ve taken is about protecting our brand.” (“Cisco, Claiming Ownership of ‘iPhone,’ Sues Apple”, The New York Times, January 11, 2007, page C13.) If relationships were really with companies, then the product name wouldn’t be all that important. But Apple wants the iPhone name to extend its iPod brand—and apparently feels this is so valuable that it’s worth fighting for.
The iPhone announcement also highlights another favorite theme of this blog: that the smart phone is emerging as a multi-channel device presenting new, unique opportunities for customer experiences. Actually, this point has now been made so often in the media that it’s probably not even worth arguing here anymore.
So let’s get back to my primary topic, implementation of customer experience management. (Did you know that was my primary topic?) I’ve been re-reading Geoffrey A. Moore’s classic Crossing the Chasm, which describes why so many technical innovations fail to gain broad market acceptance. Although the book is about the high-tech industry, its concepts probably apply to managerial innovations like customer experience management. Certainly CEM appears stuck in the chasm between adoption by a small number of early adopter “visionaries” and acceptance by a larger market of early majority “pragmatists.”
As Moore sees it, the difference between visionaries and pragmatists is profound. Visionaries seek radical change to gain strategic advantage. Pragmatists want incremental improvements to existing processes. No amount of enthusiastic preaching by visionaries will attract the pragmatists. In fact, it probably scares them off. Pragmatists need to be convinced that a proposed change is risk-free. This means leading them through a step-by-step description of a complete solution to whatever problem the innovation is claiming to solve.
I’m beginning to suspect that this is why CEM has apparently stalled in its growth. (I know many people would argue it hasn’t stalled at all—see my post Reading the Hype Meter for Customer Experience Management for why I think differently.) Current proponents are visionaries who present CEM as an all-encompassing change that will revolutionize every organization it touches. They don’t show how it can be employed in a limited way to solve a particular problem. In fact, from a pragmatist’s viewpoint, they pretty much shoot themselves in the foot by arguing you shouldn’t and can’t deploy it incrementally.
I love the grand vision as much as anyone. But if CEM is going to spread throughout the business world, its advocates may need to tone down the rhetoric and start by thinking small.
Wednesday, January 10, 2007
Do New Locations Dilute the Starbucks Brand?
I am not a big fan of Starbucks: the lines are too long, the made-up names are pretentious, and waiting to pick up your coffee from that little table is stressful (when will it come? which one is mine?). But Starbucks was on my mind yesterday as I was thinking about brands and customer experience. In particular, I was wondering whether the Starbucks brand can be meaningfully extended outside of Starbucks stores. Sure I can buy Starbucks-brand coffee at the grocery for home-brewing, at outside locations such as airports or hotels, or in offices through catering. But am I getting the Starbucks brand experience?
These extensions would make sense if the basis of the brand were the superior taste of the coffee itself. Some people would argue it is. But I think the foundation of the Starbucks brand is the in-store experience—a comfortable, civilized atmosphere vaguely similar to a European café (or to Americans’ idealized image of a European café). Maybe drinking a cup of Starbucks while anticipating the misery of today’s air travel will trigger a pleasant memory of more congenial circumstances. But it’s more likely to dilute Starbucks’ brand by associating it with something very ugly. It’s easy to understand why Starbucks would want the incremental revenue of such sales. But you have to wonder whether they’re harming themselves in the long run.
The Starbucks situation also illustrates yesterday’s point about the difference between customers of a brand and customers of a company. People buying at a Starbucks store are both. But someone pulling coffee from a Starbucks-labeled urn during a break at a conference is a customer of the Starbucks brand only. The company they have purchased from is the conference organizer.
Several brands are involved in that simple caffeine injection: Starbucks itself; the conference center that chose Starbucks; and the organizer that chose the conference center. Each picks its suppliers with an eye to associating the experience the supplier provides (i.e., its brand) with their own. This means that every participant is to some degree at the mercy of the others: a bad experience with the coffee will harm all their brands regardless of who is at fault. Thus each participant needs to pay attention to how the others can be expected to perform before agreeing to do business.
As it happens, this morning’s The New York Times (www.nytimes.com) had an article on Starbucks competing with McDonald’s for breakfast customers. (“The Breakfast Wars”, Dining Out, The New York Times, January 10, 2007.) It’s an interesting pairing: like Starbucks, McDonald’s is really more about a branded experience than the food itself. McDonald’s is also a good example of yesterday’s comment about offering different brand faces to different customer segments. Their ads aimed at kids, teens, and adults show very different approaches to the same general theme of McDonald’s as a fun place to visit.
But most of the article is about Starbucks. It focuses primarily on the inconsistencies that Starbucks has traditionally tolerated in its food offerings. (The premise of the article—that Starbucks is trying to reach the quality of McDonald’s food—is a brutal indication of just how bad Starbucks’ current food must be.) Of course, consistency is the literal definition of quality, and inconsistencies abound: in food from store to store, between the coffee and the food, and between the food and the store ambiance. Clearly Starbucks needs to improve its food to protect its brand. And, once it improves the food in its own stores, it needs to find a way to ensure that food at external locations is consistent with its new in-house standards. Good luck with that.
Tuesday, January 09, 2007
Brands are Movie Stars, Companies Own the Theater
I’m still gnawing on the distinction between brand relationships and customer relationships.
Some distinction seems to be in order. Traditional brand relationships are one-sided: a brand is a movie star whose life is known to millions but who is herself unaware of her fans’ activities. Customer relationships, on the other hand, are dialogues: they are based on interactions in which each side is (or should be) aware of the other.
But customers never really interact with companies: they interact with a company’s brands. Think of the brand as a mask that the company wears when dealing with customers. One company can have many masks, each of which appears different to consumers. Since all customer interactions are interactions with a brand, all customer relationships are brand relationships. QED.
Yet this does not mean that all brand relationships are customer relationships. Even though customer interactions are becoming more common, many brand relationships remain one-sided. Realistically, this is not an either/or choice: some brands interact more with their customers than others, and most have a mix of one-way and two-way relationships.
What does this mean in terms of customer experience management and brand management? Well, it still means that all brand experiences—whether two-way interactions or one-sided messages—should be consistent with the brand image. Consistency does not mean all experiences must be the same for everyone. A brand based on caring about individuals, to take an obvious (possibly tautological) example, would make a point of explaining how it tailors its treatments. But even a brand based on something more conventional, such as product quality, could still personalize its treatments to express product quality in ways that are relevant to different customers.
On the other hand, there are still those situations where personalization is impossible. This points to a basic difference in the definition of a customer. From a brand perspective, a customer is anyone who buys your product, while from a company perspective, a customer is anyone who buys from you. Sometimes a company will be selling its products directly, but often it will not. Customizing an experience for people who buy from someone else (your brand’s customers, but not your company’s customers) is tricky but not impossible. It is what product configuration and after-sales service are about, not to mention informational Web sites.
Of course, even this degree of customization does not apply to mass advertising. Here, the best that can be done is to select TV shows, magazines, and other vehicles that reach particular customer segments and deliver messages appropriate to those segments. If the audience is truly a cross-section of society—although it’s hard to think of an example; maybe skywriting?—then the message cannot be segment-specific. But it should still reflect the primary brand personality.
Monday, January 08, 2007
Is the Brand the Experience, or Vice Versa?
My exploration of the relationship between brand marketing and customer experience management somehow landed me at the Web page of BrandAmplitude LLC, a consultancy specializing in brand strategy and research. There I found a first-rate paper by BrandAmplitude president Carol Phillips, called “It Is Rocket Science: How ‘Science’ is Displacing ‘Art’ in Marketing and Creating a New Generation of Practitioners” available here. This presents a mirror image of my comments from last Friday: it argues that brand marketing itself is becoming more like relationship marketing. Specifically, “The ‘Segmentation Era’ which began in the late 70’s and valued insight, inspiration and creativity is giving way to [a] new customer-centric model that values deep customer knowledge, empirical results and efficiency.”
According to the paper, the change is caused by interactive marketing tools that provide vast amounts of customer-level data. “In-depth customer information enables and drives an organization-wide focus on customers. Customers, rather than brands, are increasingly viewed as a company’s most important asset…customer relationships are more likely to focus on the company and its unique knowledge rather than the benefits or features of a particular offering….branded products and services are becoming simply one more tool in building profitable corporate customer franchises.”
This is an important notion: instead of relationships with brands, customers will have relationships with companies. This is because the company is the repository of the data that allows creation of intimate relationships. “When every customer has a unique relationship with a company based on personal preferences, the value of the brand (and its potential for competitive differentiation) rests more in product/service communications, distribution and delivery than in the product or service itself.” Of course, communications, distribution and delivery were always part of the brand: what’s new is that these are customized for individuals.
The impact on marketing is huge. “Marketing’s role extends well beyond identifying the brand promise to ensure that the promise is operationalized at every point of contact….Marketing is required to lead the entire organization to examine its customer facing processes and evaluate them against the ideal ‘customer experience’ and the experience offered by competitors.” The goal is now “bringing the brand to life, consistently, at every point of contact to create a competitive differentiated experience that delivers against customer expectations.”
The paper may overstate the change when it argues that “the ‘art’ of marketing” will give way to “scientific application of empirical rules” and, later, that “‘Creative’ functions will be increasingly mechanized and commoditized, for when creative can be evaluated empirically, ‘creative judgment’ becomes irrelevant.” Although relationship marketing may be more systematic and less subjective than traditional brand marketing, creativity is equally essential to both. Since the paper was written in 2004, these comments may simply reflect the excitement of a new convert.
In any event, the paper makes a compelling and well-stated argument that marketers should take the lead in managing customer experience. It’s worth adding to your own list of resources.
Friday, January 05, 2007
Customer Experience Management Is Really Brand Management (and that's a good thing)
I’m still pondering yesterday’s question of what it would mean for customer management to become more like consumer packaged goods marketing. To me, the dominant feature of packaged goods marketing is the focus on brands. In some ways, the brand is a surrogate for the company: because the company cannot create direct relationships with its customers, it gives them a metaphorical relationship with the brand instead. Marketers who do have direct customer relationships have been less obsessed with brands, although they still acknowledge their importance.
(At the risk of confusing things, I’m going to refer to marketers with a direct customer relationship as “relationship marketers”. This seems the least painful alternative—“customer marketers” is too broad and “direct marketers” means something else.)
Brand marketers often speak of a brand personality, brand promise and a brand experience. That last phrase obviously recalls “customer experience”. The difference, if any, depends on who is talking. For example, Bernd Schmitt’s EX Group considers the brand experience as one component of the customer experience. (Brand experience involves design, communications, and products. The other primary component of customer experience is the “customer interface”, which covers direct customer interactions. Click here for the EX Group diagram.)
Although I find the distinction between direct and indirect interactions to be useful, I consider both to be experiences with a brand, and therefore don’t distinguish between “brand experience” and “customer experience”. But the semantics are not important. What really matters is that the packaged goods marketers’ notion of “brand” overlaps greatly with relationship marketers’ notion of “customer experience”.
This overlap means the many tools that packaged goods marketers have built to measure and manage brands can very likely be adapted to help relationship marketers measure and manage customer experiences. It also means customer experience management concepts can be described in brand management terms—something that may greatly speed their adoption in businesses where brand management is historically entrenched.
Thursday, January 04, 2007
Is Packaged Goods Marketing the Way of the Future?
Writing about marketing mix models yesterday got me thinking about why this technique, which started in the consumer packaged goods industry, is now being applied elsewhere. My conclusion—which I found intriguing, and think you may as well—is that even industries in direct contact with their customers are starting to resemble the anonymous world of consumer packaged goods.
Bear with me on this. Most thought relating to customer management has come from industries where companies can link transactions to specific customers (financial services, telecommunications, direct response, travel, retail, gaming). The goal has been to gather this customer information and use it to tailor customer treatments—or customer experiences, if you prefer—to each individual. Increasing sophistication has meant gathering more data to understand customers in more detail and make more precise predictions of larger numbers of behaviors. The industry’s proudest boast has been that it can measure the results of marketing activities with a precision that the customer-blind (but vastly better funded) practitioners of traditional consumer marketing cannot.
But a funny thing happened on the way to the future.
Actually, it was two funny things.
The first was simply that more and more targeted messages were sent. This meant the foundational myth—that you could attribute a specific customer response to a single marketing message—became increasingly untenable. Marketers always knew it wasn’t true, but when the number of contacts was limited to one or two a month, the inaccuracy of accepting it was minor. Now that customers may receive dozens of messages each month—from email, Web site visits, and operational events like invoices, packages and phone calls—it has become increasingly absurd to argue that the only message worth counting is the last one before the order.
The second thing that happened was that customer-specific marketing moved from being a supplemental tool to the primary mode of contact. Organizationally, this meant the marketers responsible for targeted messages were now also responsible for conventional mass media. Now they had to find a way to measure the impact of those mass media, both so they could continue to justify buying them (because, deep down, they knew the media were providing some value) and so they could include them when analyzing the return on all marketing investments.
Both of these factors lead in the same direction. Marketers need to allocate credit for customer behaviors across multiple inputs: the simple method of attributing a single response to a single promotion just won’t do. This is specifically what marketing mix modeling does.
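To make the credit-allocation idea concrete, here is a minimal sketch in Python. The touches, the exponential decay, and the two-week half-life are all invented for illustration; real attribution schemes vary widely, and none of this reflects any particular vendor’s method.

```python
from collections import defaultdict
from datetime import date

def fractional_credit(touches, order_date, half_life_days=14):
    """Split one order's credit across all prior touches, weighting
    recent touches more heavily via exponential decay (assumed scheme)."""
    weighted = [(ch, 0.5 ** ((order_date - d).days / half_life_days))
                for ch, d in touches]
    total = sum(w for _, w in weighted)
    shares = defaultdict(float)
    for ch, w in weighted:
        shares[ch] += w / total
    return dict(shares)

# Hypothetical contact history for one customer and one order.
touches = [("email", date(2007, 1, 2)),
           ("web_visit", date(2007, 1, 8)),
           ("invoice_insert", date(2007, 1, 9))]
print(fractional_credit(touches, order_date=date(2007, 1, 10)))
```

Run against the three touches shown, the order’s credit splits roughly 27/36/38 across email, Web visit, and invoice insert, instead of 100% to the last message before the order.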
Marketing mix models also incorporate external influences, such as competitive behavior and environmental conditions. Together with internal factors, these should explain pretty much everything about purchase behavior. This type of zero-based forecasting is vastly more demanding than simply measuring the incremental lift from a particular marketing campaign or the relative performance of test vs. control. But it’s also part of taking responsibility for business results: you can still blame the economy or competitors, but you have to prove they were really the cause of a shortfall. (It’s only shortfalls that need explaining. Over-performance is always due to brilliant marketing.)
In other words, becoming more like packaged goods marketers is not a bad thing. Accepting responsibility and dealing with ambiguity are signs of maturity and of importance to the larger business. From the perspective of customer experience management, attempting to define every factor that impacts sales pushes top managers to recognize the importance of operational issues and to measure their impact as well. This, of course, is essential to a full adoption of customer experience management principles.
If all marketing is really becoming like packaged goods marketing, this may have other implications worth noting. I’ll let you know if I think of any. Or, better still, let me know what YOU think.
Wednesday, January 03, 2007
The Right Way to Think about Marketing Mix Models
Here’s a fine mess: I was all set to review a paper titled “Marketing-Mix Modeling the Right Way”, partly because I found the title annoyingly arrogant and partly because I have doubts regarding the methodology it describes. But when I tried to find the paper on the vendor’s site, it wasn’t there. Checking my files, I found my copy came from a friend, so perhaps it had never been released publicly. That being the case, it would be improper to critique it here.
Problem is, I now have marketing mix modeling on my mind. So I’ll just have to plunge ahead even without a white paper as a hook.
What makes marketing mix models relevant to customer experience management? The basic answer is, they illustrate a way to optimize an important set of customer-related resources. This is something customer experience management must also do if it is to move past the soft notion of “be nice to your customers” to become a serious management tool.
I suspect that most people reading this blog are already familiar with marketing mix models. For those who are not, let me correct the impression that marketing mix models simply take a company’s advertising budget and correlate it with changes in sales volume. A serious marketing mix model considers competitive activities, product pricing, retail distribution and in-store promotion. (This is why it’s called a marketing mix model and not a media mix model.) An advanced model might also incorporate the contents of marketing messages and the behavior of different customer segments. All this data is evaluated separately by geographic region. Gathering the information typically requires access to external data on advertising spend, promotions and product sales from vendors like Nielsen and Information Resources, Inc.
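For readers who want to see the machinery, here is a bare-bones sketch of the core calculation: a regression of unit sales on marketing and non-marketing drivers. Everything here is an invented illustration; a serious model would add ad-stock carryover, diminishing-returns curves, seasonality, and separate estimates by region.

```python
import numpy as np

# One row per region-week; columns: TV spend, in-store promotion spend,
# shelf price, competitor promotion flag. All numbers are made up.
X = np.array([[120.0, 30.0, 2.49, 0.0],
              [100.0, 45.0, 2.49, 1.0],
              [140.0, 20.0, 2.29, 0.0],
              [ 90.0, 50.0, 2.59, 1.0],
              [130.0, 35.0, 2.39, 0.0],
              [110.0, 40.0, 2.49, 1.0]])
y = np.array([510.0, 498.0, 560.0, 470.0, 535.0, 505.0])  # unit sales

# Add an intercept to capture "base" sales with no marketing activity.
X1 = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

for name, c in zip(["base", "tv", "promo", "price", "competitor_promo"], coef):
    print(f"{name}: {c:.2f}")
```

The coefficients are the model’s answer to the allocation question: how many incremental units each driver buys, holding the others constant.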
Marketing mix models address several key concerns for customer experience management: identifying events that influence customer behavior; integrating external information; building models that capture relationships between multiple events and behavior; and identifying optimal resource allocations. Of course, marketing mix modeling only looks at a subset of all customer-related events; the operational experiences that are the central concern of customer experience management are excluded. On the other hand, marketing mix models offer much greater precision than most customer experience analysis. So it’s worth asking whether the techniques of marketing mix models can be extended to support customer experience modeling, or at least offer some useful lessons.
The problem with directly extending marketing mix models is that they are not based on individual customers. It is certainly possible to add operational metrics as inputs: say, on-time arrivals if you’re an airline, order fulfillment accuracy if you’re a distributor, or customer satisfaction scores for just about anyone. This could give some measure of how experience impacts overall results. But it wouldn’t directly measure the impact of specific experiences on individual customers, so the information would be hard to interpret and important relationships might be hidden. I suspect different statistical methods are needed for individual-level analysis. I also question whether tools that predict aggregate sales by period can also project customer lifetime value.
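To suggest what the individual-level alternative might look like, here is a hypothetical sketch: a model that predicts each customer’s repurchase from that customer’s own experiences, rather than explaining aggregate sales by period. Everything here (the features, the data, even the choice of logistic regression) is my own illustrative assumption, not an established practice I’m reporting.

```python
import numpy as np

# One row per customer: [late deliveries, service calls, satisfaction score].
# Invented data; the point is only that the unit of analysis is the customer.
X = np.array([[0, 1, 9.0],
              [2, 3, 4.0],
              [1, 0, 7.5],
              [3, 2, 3.0],
              [0, 0, 8.5],
              [2, 1, 5.0]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0], dtype=float)  # repurchased within a year?

# Standardize features, add an intercept, and fit a logistic regression
# by plain gradient descent.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
Xs = np.column_stack([np.ones(len(Xs)), Xs])
w = np.zeros(Xs.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xs @ w))
    w -= 0.1 * Xs.T @ (p - y) / len(y)

print("weights (intercept, late, calls, satisfaction):", np.round(w, 2))
```

A model like this could, at least in principle, feed a lifetime value projection in a way that an aggregate sales-by-period model cannot.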
A quick scan of marketing mix modeling vendor Web sites didn’t find anybody addressing these issues, although I might have missed it. (Vendors I looked at: Upper Quadrant www.upperquadrant.com; M-Factor www.m-factor.com; Marketing Management Analytics www.mma.com; Hudson River Group www.hudsonrivergroup.com; Analytic Partners www.analyticpartners.com; Pointlogic www.pointlogic.com; Marketing Analytics Inc. www.marketinganalytics.com; Copernicus Marketing Consulting www.copernicusmarketing.com; Strategic Oxygen www.strategicoxygen.com; iknowtion www.iknowtion.com; Management Science Associates, Inc. www.msa.com; ACNielsen www.acnielsen.com [recently purchased The Modeling Group]; SAS www.sas.com [recently purchased Veridiem].)
In terms of broader lessons, the marketing mix model vendors certainly offer some useful techniques for gathering and integrating external data. They typically do this by geography (related to retail trading areas and advertising markets), which is useful because most customer data can be linked to a physical location. The general skills needed to build and calibrate marketing mix models are also relevant to customer experience modeling.
Perhaps more important, marketing mix models help companies develop attitudes that are needed for customer experience management. These include understanding that models need not be perfect to be useful; using simulation to make business decisions; considering the trade-offs inherent in the concept of optimization; including external as well as internal factors in explanations of customer behavior; moving beyond a purely product-centric view of the market; and trying to measure the real value of traditionally unaccountable activities such as advertising spend.
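As a toy example of the simulation attitude, the sketch below tries every split of a fixed budget across two channels with assumed diminishing-returns response curves and keeps the best. The curves and budget are invented; a real exercise would estimate them from a fitted model.

```python
import numpy as np

budget = 100.0

def sales(tv, promo):
    # Concave (diminishing-returns) response curves, purely illustrative.
    return 40 * np.log1p(tv) + 25 * np.log1p(promo)

best = max(((sales(tv, budget - tv), tv) for tv in np.arange(0, budget + 1)),
           key=lambda t: t[0])
print(f"best split: tv={best[1]:.0f}, promo={budget - best[1]:.0f}, "
      f"projected sales={best[0]:.1f}")
```

Even this trivial version makes the optimization trade-off tangible: every dollar moved to one channel is a dollar taken from another, and the model is only asked to be useful, not perfect.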
In short, marketing mix models may or may not provide the right technical platform to build customer value models, which I consider an essential underpinning of serious customer experience management. But marketing mix models do provide a useful template for making a sophisticated analytical tool part of high-level decision-making. This alone makes them worth a look from a customer experience management perspective.
Tuesday, January 02, 2007
Tibco Lays Out Architecture for Customer Interaction Management
Tibco Software (www.tibco.com) has been providing enterprise integration technology for what seems like forever (actually, since 1985). But while it’s still basically in the plumbing business, it has apparently been paying attention to how people are using its products. Or at least, that’s my interpretation of what led it to publish a paper on “Predictive Customer Interaction Management,” available here.
The paper speeds past the niceties of why interaction management is important to reach its real topic: what a customer interaction management architecture looks like. It presents a pretty good picture of this, specifying three main components:
- channel adapters that sense and respond to customer activities (which Tibco calls “events”) in touchpoint systems
- a recommendation engine that combines business rules, offer history and a decision engine to generate reactions
- a virtual data source that integrates customer and product data from source systems
Needless to say, Tibco has products for each component. But the paper is generic enough that this doesn’t get in the way. Its most important point is that reacting properly to customer events is not simple: it requires combining “rules, policies, inferencing, and analytics—both statistical and probabilistic—to produce complex models of reasoning via events, customer data, time, and analytics.” At the core is the recommendation engine, which does “pattern matching of a set of events with the policies, knowledge, and analytical models to generate a set of responses.”
Exactly.
Tibco’s recommendation engine is based on its BusinessEvents software, a complex event processing system. Although complex event processing is not a new concept or technology, it’s one I expect to see mentioned more often as part of a customer experience management infrastructure.
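To picture what this kind of event-driven pattern matching might look like in practice, here is a minimal sketch of a rule-plus-history recommendation step. The events, rules, and offers are invented for illustration; this is not Tibco’s API or how BusinessEvents is actually programmed.

```python
# Match an incoming customer event against business rules, then suppress
# offers the customer has already received. All names are hypothetical.
RULES = [
    # (event type, condition on event data, offer to recommend)
    ("address_change",  lambda e: True,                "update_confirmation"),
    ("large_deposit",   lambda e: e["amount"] > 10000, "investment_consult"),
    ("product_inquiry", lambda e: True,                "follow_up_call"),
]

def recommend(event, offer_history):
    """Return offers whose rules match the event, skipping repeats."""
    past = offer_history.get(event["customer_id"], set())
    return [offer for event_type, condition, offer in RULES
            if event["type"] == event_type and condition(event)
            and offer not in past]

history = {"C123": {"follow_up_call"}}
event = {"type": "large_deposit", "customer_id": "C123", "amount": 25000}
print(recommend(event, history))  # ['investment_consult']
```

A production engine would layer statistical models and timing policies on top of rules like these, which is exactly why the paper’s point about complexity rings true.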
After describing the architecture, the paper presents several rather pedestrian examples of interaction management in a retail banking context: reacting to a customer’s new child, address change, large bank deposit, product inquiry, invalid telephone number, and travel purchase. None of the business practices described will be new to people who worry about such things, although offering travel insurance to someone who has recently made a large payment to a travel agent strikes me as a potentially alienating invasion of privacy. In any case, Tibco’s goal is less to present brilliant marketing ideas than to illustrate the remarkably complicated technical choreography required to make such reactions possible.