Friday, September 29, 2006

Unica (www.unica.com) released the latest version of its Affinium marketing suite earlier this month. One set of enhancements relates to response attribution, which the new release makes both more flexible and easier to use. Since this is an area of particular interest to me, I asked Unica for a briefing.
Overall, the new features are very good, but I’ll leave it to Unica’s marketing materials to describe them at length. One point that struck me during the conversation, however, came in response to a question I asked about dealing with responses that can be linked to multiple promotions. Unica gives users three standard options: give all the credit to one promotion, give full credit to all the promotions (and thus double count), or divide the credit equally among all the promotions. I asked about weighting the fractions so that some promotions get more credit than others but the total still adds to one. To me this makes sense, and it is somewhat consistent with the notion of multivariate analysis I have discussed elsewhere.
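To make the weighting idea concrete, here is a quick Python sketch of how such a rule might allocate credit. It is purely illustrative: the promotion names and the recency-style weights are my own inventions, not anything Unica offers.

```python
# A minimal sketch of weighted response attribution, assuming invented
# promotion names and a simple recency-based weighting. The only real
# requirement is that the credit fractions sum to one.

def weighted_attribution(promotions, raw_weights):
    """Allocate one unit of response credit across promotions,
    in proportion to the raw weights."""
    total = sum(raw_weights)
    return {promo: w / total for promo, w in zip(promotions, raw_weights)}

# Hypothetical contact history, most recent first, with each promotion
# weighted twice as heavily as the one before it.
promos = ["email_0915", "catalog_0901", "postcard_0820"]
weights = [4, 2, 1]

credit = weighted_attribution(promos, weights)
print(credit)  # {'email_0915': 0.571..., 'catalog_0901': 0.285..., 'postcard_0820': 0.142...}
assert abs(sum(credit.values()) - 1.0) < 1e-9  # total credit still adds to one
```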
Unica said they had considered that option but asked several major clients and not one had any interest. (Unica could in fact support such a method as a custom rule if anybody did want it.) They asked me if I had ever seen anyone do it that way, and I had to admit I couldn’t think of anybody off hand.
This raises a concern that’s always lurking in the background: are sophisticated concepts, or simple concepts that rely on sophisticated analytics, actually too complicated for the real world? The first question to answer is, do they offer real benefits? The Unica clients who are satisfied with simplistic response attribution presumably don’t think so in that particular case. Assuming the benefits are there, the second question is, how do you convince people to make a change? Can you start simple and gradually become more complicated, or do you just hide the complexity inside a black box, or do you expose the complexity and educate them about why it’s necessary? I don’t have any answers just now—it’s Friday after all—but it's important to keep asking.
Thursday, September 28, 2006
What's the Real Cost of Identity Theft?
Yesterday’s New York Times (www.nytimes.com) carried an article on identity theft, which basically suggested the problem is not as important as it seems (“Surging Losses, but Few Victims”, Circuits, September 27, 2006). Since I had just written on the topic in this blog (September 22, 2006), I read it closely. The Times article largely dismissed the Federal Trade Commission study I had quoted—which found 10 million identity thefts per year at a cost of $48 billion—in favor of a recent Department of Justice report that found only 3.6 million cases costing $3.3 billion in a six-month period.
Those are big differences, but neither the Times article nor other references offered much explanation of where the differences came from or who was right. Since the studies are available online (see the National Criminal Justice Reference Service fact page at www.ncjrs.gov/spotlight/identity_theft/facts.html), I was able to look for myself.
The full analysis is too detailed to post here, although I’d be delighted to share it with anyone who wants a copy. Key points boil down to:
- Both studies rely on telephone surveys, which I suspect are inherently unreliable when asking for specific cost estimates. The FTC study is particularly questionable since it had a sample of just 4,057 people, of whom just over 500 actually reported an identity theft. The DOJ study used a sample of 42,000.
- Once you adjust for differences in methodology (the FTC asked about individuals over the past year; the DOJ asked about households in the past six months), both studies yield approximately the same number of Americans affected (a back-of-envelope version of this adjustment appears after this list). Both studies also found that just over half the cases involved fraudulent charges against existing credit card accounts.
- The estimated annual cost of identity theft against all existing accounts, credit card and other, came to around $14 billion in the FTC study and about $5 billion (after some adjustments I made on my own) in the DOJ study. Given the DOJ’s larger sample and the fact that the Times quoted The Nilson Report’s apparently reliable annual figure of $1.1 billion for bank credit card fraud, the DOJ estimate is probably closer to the truth.
- The really big difference between the two studies is the cost of new accounts opened using stolen personal data. Although categories in the two studies are not exactly comparable, the FTC pegged this at $33 billion per year while the DOJ figure (after my adjustments) is about $1.5 billion.
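As a back-of-envelope illustration of that methodology adjustment, here is a small Python sketch. The simple doubling of the six-month figure and the one-victim-per-household assumption are my own rough approximations, not the adjustments used by either study.

```python
# Back-of-envelope reconciliation of the two headline victim counts.
# Assumptions (mine, purely illustrative):
#  - doubling the DOJ six-month household count approximates a full year
#  - roughly one affected individual per affected household

ftc_individuals_per_year = 10_000_000  # FTC: individuals, past year
doj_households_per_6mo = 3_600_000     # DOJ: households, past six months

doj_annualized = doj_households_per_6mo * 2        # ~7.2 million per year
ratio = ftc_individuals_per_year / doj_annualized  # ~1.4

print(f"DOJ annualized: {doj_annualized:,} households")
print(f"FTC vs. DOJ ratio: {ratio:.2f}")
# Under these crude assumptions the two studies land in the same
# ballpark on incidence, which is why the real dispute is over cost.
```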
To me this says there is less disagreement than meets the eye. Both studies find about the same number of people affected and both find somewhat similar figures for fraud against existing accounts. The real discrepancy is in the cost of stolen personal data. Here it’s anybody’s guess as to whether the DOJ is undercounting or the FTC is exaggerating. But asking the right question is the first step to finding an answer, so at least we know where people who study this issue should be spending their time.
Wednesday, September 27, 2006
...In Which, Dave Has a Bright Idea
A friend recently described her customer experience with a months-late furniture order. The store was sincerely apologetic, but explained that manufacturers accept an order only to find, when they go to build it weeks later, that the fabric is no longer available. The mills themselves don’t give the manufacturers current information, so there is little the manufacturer can do. The store itself is even more helpless but is left to face the customer’s wrath.
I believe I looked suitably sympathetic as she spoke, but in fact my inner consultant was off and running. This was a classic description of an out-of-control process. The first step to gaining control is measurement: in this case, build a scorecard tracking performance of each vendor. This would let the store determine which vendors had the most problems, and either pressure them to improve their performance or steer business elsewhere. Scorecard results over time would also allow the store to measure its own progress in dealing with the situation.
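For anyone who wants to picture the mechanics, here is a minimal Python sketch of such a scorecard, built from hypothetical order records (the vendor names, field names and grace period are all invented).

```python
# Hypothetical order history: one record per delivered order.
orders = [
    {"vendor": "Acme Sofas",  "days_late": 0},
    {"vendor": "Acme Sofas",  "days_late": 45},
    {"vendor": "Brook Mills", "days_late": 0},
    {"vendor": "Brook Mills", "days_late": 3},
    {"vendor": "Acme Sofas",  "days_late": 60},
]

def vendor_scorecard(orders, grace_days=7):
    """On-time rate and average lateness per vendor."""
    stats = {}
    for o in orders:
        s = stats.setdefault(o["vendor"], {"orders": 0, "on_time": 0, "late_days": 0})
        s["orders"] += 1
        s["late_days"] += o["days_late"]
        if o["days_late"] <= grace_days:
            s["on_time"] += 1
    return {
        v: {"on_time_rate": s["on_time"] / s["orders"],
            "avg_days_late": s["late_days"] / s["orders"]}
        for v, s in stats.items()
    }

for vendor, score in sorted(vendor_scorecard(orders).items()):
    print(vendor, score)
# Tracked period over period, the same numbers also measure the
# store's own progress in managing the problem.
```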
In itself, this is a good example of using back-office analytics to improve the customer experience—something I’ve been thinking about a lot recently. But why stop there? Scorecard information could also be exposed directly to customers as they make their purchase decisions, informing them which options are likely to be delivered on time and which have a higher risk of delay. This would give the customers more control over the outcome, help to set more reasonable expectations, and, not least, deflect blame from the store should a problem arise. Here’s another excellent application of the customer experience management approach.
Then I had a really bright idea. If the problem is fabric availability, why not let the customers supply the fabric themselves? The store could actually order it for them; the goal is to get the manufacturer out of the loop.
Presumably the furniture manufacturers are reluctant to participate in such an approach because it adds complexity to their own processes. But, also presumably, everything has its price, and they would be willing to accept customer-supplied fabric if they were paid a suitable premium. Some customers would be willing to pay this in return for a more certain delivery date; others might prefer to pay less and take their chances. Combine this with scorecard information that lets customers identify high-risk choices, and you are now giving them a range of options, from picking low-risk products in the first place to picking high-risk products and paying to get them on time.
We at Client X Client refer to this as “transparency”, but that’s just a buzzword. The point is that you can substantially improve the experience for different kinds of customers by letting them understand the implications of their decisions and make whichever choice best fits their needs. All kinds of marketing applications are possible: imagine an “on time or it’s free” offer from a furniture store. This would be economically feasible if the store carefully limited it to product combinations where experience has shown the suppliers are reliable. You could revolutionize the industry.
One final note: the standard Customer Experience Matrix approach would start a project like this by quantifying the cost of delayed orders. This includes the immediate financial impact and long-term customer value. It would be fairly easy to do and the number would certainly be impressive. But in this case, the importance of the problem is already so obvious to management that formal analysis would strike them as an unnecessary academic exercise. That might change if the solution were very expensive, in which case a formal financial analysis might be needed to justify the investment. But given the critical harm done by bad word-of-mouth from unhappy customers, it’s hard to imagine store management rejecting any approach that promised significant improvements.
Tuesday, September 26, 2006
Entellium Selection Guide Asks the Wrong 100 Questions
Who could resist a paper entitled “100+ Questions CRM consultants get paid to ask”? Certainly not a sometime CRM consultant like myself. As I downloaded the paper from hosted CRM vendor Entellium (www.entellium.com), hope and fear wrestled inside me: Would I learn something new? Are they giving away all my trade secrets? Would this be heaven or would it be hell? Whatever happened to the Eagles anyway? Didn't I see they were on tour recently? The Rolling Stones are touring again. Now there's a rock band.
Then the download finished.
The paper seemed promising. It offered a “simple 6-step methodology to deliver speed and savings into your buying cycle.” Great – I love methodologies. And six is such a friendly number: enough steps to be interesting, but not so many that I can’t keep them all in my mind.
Step 1 was reasonable: “determine requirements and share them with potential vendors.” But Step 2 seemed curiously accelerated: “expect the vendor to prove they can really meet your requirements”. Isn’t there an intermediate step to decide which vendors are qualified? And isn’t it unrealistic to expect the vendor to do all the proving: don’t you have to take some of the responsibility yourself? Step 3 raced ahead still faster: “finalize other considerations to get a complete understanding” of costs, service levels, and the vendor’s business model. What could be left after “finalize”?
Not much, it turns out. Step 4 is “expect a vendor presentation of the final solution to your full buying team (senior management, etc.)”, step 5 is “agree to deployment schedule and resources” and step 6 is “deploy your chosen solution”. In other words, you’ve pretty much made your choice by step 2, finalized it in step 3, and moved on to deployment after that. I guess it’s okay to load all the hard work into steps 1 and 2, but somehow I didn’t feel I got my full six steps’ worth.
Oh well, on to the questions themselves. Most of the paper is a checklist of 104 items (yes, I counted), spread among 13 categories from “A) Solution Depth & Deployment Options” to “M) Support and Help”. These were not really questions, but functional requirements along the lines of (and I choose at random) “Automatically generate and publish reports at regular intervals to management, with no manual intervention.” Each item is followed by brief notes explaining why it’s important and checkboxes for priority: Must have? Nice to have? Not important?
Now I’m worried. Early in my career as a consultant, I took a requirements survey of that sort and—this is true—96% of all answers came back as “must have”. Maybe that won’t happen with Entellium’s list because it is so broad that users will in fact be able to rule out some areas. But don’t count on it.
More to the point, don’t count on gaining any real insight from this approach. The list of functions seemed generally reasonable, although I didn’t compare it with my own checklists to see what had been left out. You can bet it’s not complete: surely Entellium does everything that’s included, and no matter how great Entellium may or may not be (I’ve never looked at it), it can’t do everything. In fact, when you start with lists like this one, 104 is a tiny fraction of the possible entries.
My real complaint is that Entellium is encouraging users to make the mistake they love to make anyway, which is jumping right into functional checklists and vendor comparisons. The essential preliminary step, which any good consultant will insist upon, is to understand the business issues and how the proposed new system will solve them. In particular, you have to ensure the new system will fit with existing structures (people, processes and technologies).
This is such a basic point that I almost feel I should apologize for making you read so far just to reach it. But it’s also an error that I’ve seen two fairly sophisticated companies make in the past month alone. Clients suspect this initial analysis is make-work that consultants invent to pump up their billable hours. Au contraire! It’s as necessary as having your doctor do an examination before prescribing treatment. You can’t expect the software vendors to do this work for you: they lack the inclination and, in most cases, the skills. Yet rest assured that without a serious situation analysis, no list of functional requirements, regardless of how many questions it contains, will lead to an effective result.
Monday, September 25, 2006
Linking Hidden Activities to Customer Value
My last two posts have dealt with activities that affect customers but are invisible to them: back-office analytics and information security. How do you fit these into a Customer Experience framework?
Actually there are several means of connection. The most obvious is to expose the activities to customers directly, by featuring them in advertising or building them into customer-facing activities. A sales forecasting system could alert customers to sales on items they are likely to order or warn that an item in their shopping cart is about to go out of stock. Even security measures can be promoted as a customer benefit. The value of the exposed activity can then be measured like any other adjustment to the customer experience, by comparing performance of customers who are exposed to the activity with results from customers who are not.
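Here is a toy Python sketch of that exposed-versus-unexposed comparison, with invented revenue figures; a real analysis would need to control for selection effects, which this calculation ignores.

```python
# Toy uplift calculation: compare customers exposed to a promoted
# back-office capability (e.g. stock-out alerts) with those who were not.
exposed_revenue = [120.0, 95.0, 143.0, 110.0]       # hypothetical per-customer revenue
not_exposed_revenue = [100.0, 90.0, 105.0, 98.0]

def mean(xs):
    return sum(xs) / len(xs)

uplift = mean(exposed_revenue) - mean(not_exposed_revenue)
print(f"Exposed mean:     {mean(exposed_revenue):.2f}")
print(f"Not-exposed mean: {mean(not_exposed_revenue):.2f}")
print(f"Estimated uplift: {uplift:.2f} per customer")
# In practice you would randomize exposure (or model the selection
# bias) before trusting a number like this.
```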
But what about activities that customers never become aware of? More accurate sales forecasts or smarter store layouts can lead to measurable revenue increases. Other behind-the-scenes changes can reduce costs and thereby raise profit margins. Security enhancements and disaster recovery preparations can be valued like other risk-reduction methods, using standard financial techniques. Whether these changes can be tied to specific transactions or must be allocated like indirect costs, they can be applied to customer value calculations.
Stated more formally, indirect activities can impact customer value in at least five ways:
- as part of the direct experience (e.g., alert customers to anticipated stock-outs)
- as customer knowledge (e.g., describe the activity in advertising)
- through improved customer management (e.g., use response analysis to improve campaign segmentation)
- through improved product economics (e.g., lower manufacturing costs yield higher profit margins)
- through improved business economics (e.g., lower overhead costs yield higher business profits)
This may seem like beating a dead horse, but the point is really important. Customer value is THE primary metric of Customer Experience Management. So if Customer Experience Management truly encompasses all business activities, then it must be possible to measure all business activities in terms of customer value. This gives businesses a consistent measure—change in customer value—to compare otherwise unrelated decisions. Such consistency is one of the key benefits of a Customer Experience Management approach.
Friday, September 22, 2006
CMO Council Study Highlights Consumer Security Concerns
The CMO Council (www.cmocouncil.org) just released initial results from a major study, “Secure the Trust of Your Brand”, on how information security impacts customer attitudes. The initial report is a survey of consumer attitudes. Among the eye-opening factoids:
- the Federal Trade Commission reports that 10 million Americans each year become victims of identity theft, at an average cost per victim of $5,885 and 30 hours of time.
- about one in six Americans has had personal information lost or compromised (a number that will rise if the rate of 10 million incidents per year is sustained).
- Americans are twice as worried about identity protection as terrorist attacks (80% vs. 42%, although I’m not sure exactly how those percentages were calculated).
- 40% have stopped a transaction online, on the phone or in a retail store due to a security concern.
That last figure is particularly intriguing. It suggests that individuals are paying close attention to operational security processes and rejecting those they find substandard. Maybe, but given the number of passwords you’ll see taped to computer monitors in any office or home, permit me to doubt. It's more likely that activities as simple as declining to register at a Web site are included.
Other findings support my skepticism. Respondents said security is less than half as important as product quality in deciding which companies to do business with (33% vs. 77%), and just a tiny fraction could name any particular brand as having a trusted reputation for protecting its customers’ security.
This leads me to several observations. One is basically a Note To Self: security and privacy are different. Americans are notoriously willing to share private information in return for small conveniences or simply because someone asks. But that sort of sharing is voluntary and authorized and mostly involves information which people don’t really consider all that sensitive. Security is about involuntary and unauthorized information transfers and often involves data with much greater potential to do damage in the wrong hands. So it’s plausible that people would view privacy and security as separate issues and be more concerned about one than the other. Personally, I question the distinction: the more data organizations collect, even with authorization, the greater the risk it will be exposed in a security breach. But that’s just me.
A second observation, which is also made by the CMO Council study authors, is that there is a deep and growing popular unease about personal data security, even if it hasn’t quite yet reached the point of influencing purchase decisions. So even though marketers are probably safe in ignoring security concerns for the moment, this could change quickly. (Of course, the business and technology managers responsible for maintaining security must already pay attention. The question here is whether marketers need to address it in their messages to consumers.)
The third observation is that, from a Customer Experience Management point of view, you need a way to measure the contribution of security to customer value. I have some definite thoughts on that, but this post is already too long. So I’ll write about it next time.
Thursday, September 21, 2006
Computerworld Top BI Projects Rarely Face the Customer
When I saw that Computerworld (www.computerworld.com) had a special report on successful business intelligence projects (“BI Home Runs”, Computerworld, September 18, 2006), I confidently expected something similar to the InformationWeek article described in my September 13 post: that is, a major focus on improving customer treatments. So it was a bit of a shock to find that of the fifteen projects mentioned, only two—one each for call center management and campaign response analysis—related directly to individual customers. The balance involved aggregate sales information (six projects), operational data (five projects), and financial reporting (two projects).
You could argue that sales and operations analysis ultimately help each company match products, prices and processes with customer needs—so, in a sense, they do improve the customer experience. This is important if you’re trying to make the case, as we do at Client X Client, that all business decisions should be measured by their impact on customer value.
But that’s just special pleading. The more important observation is that only the call center project involved anything like real time distribution of business intelligence to a customer-facing system. (And even the call center project was about measuring customer satisfaction and agent performance; it’s not clear whether agents were receiving customer-specific recommendations.) Yet we frequently hear that “real time analytics” is the hottest trend going. Apparently this news hasn’t reached the editors of Computerworld or, more important, the people actually selecting business intelligence projects.
Let me be clear about this: I know there are some real implementations in place and do believe the concept is an important one. The point here is the implementations are apparently less common than the hype would lead us to believe. (And, yes, I am very familiar with the concept of a hype cycle.) Widespread deployment may follow, but it’s not a sure thing.
Wednesday, September 20, 2006
RightNow Talks The Talk
“Customer Experience Management” meets the two key requirements for a successful buzzword: it’s impressive (10 syllables!) and no one quite knows what it means. (My own definition, “understand and improve how you treat your customers,” fails on both counts.) Given these virtues, it’s not surprising that many firms have adopted Customer Experience Management as part of their marketing message. Since my own firm is among these, I take particular interest in watching how others use the term.
The issue here is scope. True Customer Experience Management—defined, of course, as what Client X Client does—extends to every way a customer interacts with a company and its products, including things like brand advertising, product use, repair, and financing. Most firms apply a much narrower definition that happens to match the scope of whatever it is they are selling.
This lets me play a little game of testing how far I have to read in their marketing materials before they reveal the true scope of their offering. It usually doesn’t take long to find that, say, a vendor of help desk software defines the customer experience in terms of providing great customer assistance.
Which leads us to RightNow Technologies (www.rightnow.com), a provider of hosted Customer Relationship Management systems. RightNow has embraced Customer Experience Management in a big way: go to their Web site and just about every headline you see will include the phrase. But the positioning does not contain the usual limitations. For example, the September 11, 2006 press release announcing their newest version, headlined “RightNow 8 Helps Companies Deliver Exceptional Customer Experiences,” defines customer experience in the first paragraph as “the sum of interactions with a company's products, people and processes”.
That’s a pretty good definition, and considerably broader than a conventional CRM offering. RightNow acknowledges as much in the press release, stating in the next paragraph that “conventional CRM solutions are narrowly focused on streamlining internal processes, rather than addressing the broader, more critical issues that define the quality of the customer experience.”
Now the game is getting interesting. Is RightNow, a CRM vendor, going to admit that CRM isn’t enough? Not bloody likely. The clue is the qualifier “conventional”. Apparently RightNow offers some form of unconventional CRM that overcomes the traditional limits.
Sure enough, the next paragraph reveals their intentions. We’re told that RightNow 8 “directly addresses the customer experience challenge by ensuring that the knowledge required to optimize the quality of every interaction is available in real time wherever and whenever it is needed.” So RightNow’s definition of Customer Experience Management is limited to direct customer-to-company interactions: pretty broad, but still not including indirect experiences such as product usage and brand advertising.
Whew! Three paragraphs to get to the real scope—that was a good match. And, to be fair, RightNow really does provide exceptionally broad scope in its new version, including an “experience designer” to manage processes that span sales, marketing and service departments; a “feedback” module to gather comments and survey results; and an “analytics” module to integrate data from multiple sources. Maybe that’s as close to full-scope Customer Experience Management as a CRM vendor can get.
I do remain skeptical of RightNow’s ability to deliver on its promises: it’s particularly difficult for a hosted vendor to integrate with internal corporate systems, yet such integration is essential for the process integration that RightNow is touting. There is nothing in RightNow’s published materials to suggest it has an unconventional solution to this problem. I expect to talk to them within the next few days to see if they can convince me otherwise. I’ll revise this post or make a new one if anything interesting turns up.
Tuesday, September 19, 2006
The New York Times Discovers That Marketers Use Science
Today’s New York Times carried an article on the use of science by marketers, although this is not exactly news (“Enlisting Science’s Lessons to Entice More Shoppers to Spend More”, Science Times, September 19, 2006). The specific examples will be familiar to any marketing professional: carefully tracking shoppers’ paths through a store; analyzing physiological reactions to advertisements; and using multivariate test designs to assess ad components.
This last example brought to mind what I consider a more interesting (and somewhat contradictory) trend: a growing recognition that it’s no longer viable to attribute customer actions to a single promotion. That was the traditional approach taken by direct marketers: put a code on the order coupon and measure results by counting the returns. We always knew this was an oversimplification, since customers receive multiple promotions and the coupon they happen to send back isn’t the only one that contributed to their action. But direct and database marketers have always been so proud of being “measurable” that it would have been hard to admit their primary measurement technique was more than a little dicey.
Still, the proliferation of channels and message opportunities makes it impossible to continue ignoring the contribution of all those other contacts. This means we need something similar to multivariate test design to help manage the customer experience. That is, we must assemble information on all the messages each customer has (or might have) been exposed to, and then analyze how the different messages correlate with different results. In the past, direct marketers did this in a very controlled fashion by creating test panels that received different streams of messages. That’s no longer practical in a world where there are so many potential combinations of message frequency, channel and content, and where message delivery is often beyond the marketers’ control. Instead, we’ll need to rely on techniques like multivariate testing to come up with estimates that may be crude, but are still more realistic than pretending multiple messages don’t exist.
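As a drastically simplified sketch of what such an analysis might look like, the Python fragment below regresses response on message exposures to estimate each message's incremental contribution. The exposure matrix, message names and response figures are invented for illustration, and a production model would be far more elaborate.

```python
import numpy as np

# Rows: customers. Columns: exposure (1/0) to each message type.
# All values invented for illustration.
exposures = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 1],
], dtype=float)
responded = np.array([0, 1, 1, 1, 0, 1], dtype=float)  # 1 = responded

# Add an intercept column, then fit a linear model by least squares.
X = np.column_stack([np.ones(len(exposures)), exposures])
coefs, *_ = np.linalg.lstsq(X, responded, rcond=None)

for name, c in zip(["baseline", "email", "catalog", "banner"], coefs):
    print(f"{name:9s} {c:+.2f}")
# The coefficients are crude estimates of each message's incremental
# effect on response: rough, but more realistic than giving all the
# credit to whichever message happened to come last.
```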
Incidentally, this is a partial answer to one of the questions I posed yesterday. Multivariate analysis is one of the ways we can understand enough about complex customer behaviors to build an effective customer experience model.
Monday, September 18, 2006
Agent-Based Simulation for Customer Experience Modeling (Is This Fun or What?)
OK, so I spent several fun-filled hours yesterday working through a tutorial for an agent-based modeling system. The particular product I chose, SeSAm (www.simsesam.de), was developed as a teaching aid. It is free, powerful and easy to learn. I didn’t finish the tutorial* but saw enough to confirm that I could build models with agents that stored their own history and drew on that history to determine future behavior. This is the key requirement for building a customer experience model that would simulate the impact on long-term customer value of changes in business policies, resources and the environment. Specifically, I could create one activity for each cell in a Customer Experience Matrix and have customers migrate among the activities with different results depending on their previous experiences. Per yesterday’s note, this is not something I could do with a Markov chain.
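For readers who want the flavor of this without installing SeSAm, here is a stripped-down Python sketch of the key idea: each agent carries its own history, and that history shapes its future behavior. All the probabilities and rules are invented.

```python
import random

class CustomerAgent:
    """A customer whose remembered experiences shape future behavior."""

    def __init__(self):
        self.history = []   # e.g. ["good_service", "outage", ...]
        self.active = True

    def step(self):
        if not self.active:
            return
        # Invented rule: each remembered outage raises the chance of
        # defecting this period. Behavior depends on stored history,
        # which a simple Markov chain cannot express.
        outages = self.history.count("outage")
        p_defect = min(0.05 + 0.15 * outages, 0.9)
        if random.random() < p_defect:
            self.active = False
        else:
            event = "outage" if random.random() < 0.2 else "good_service"
            self.history.append(event)

random.seed(42)
agents = [CustomerAgent() for _ in range(1000)]
for period in range(12):    # simulate a year of monthly interactions
    for a in agents:
        a.step()
print("Still active after a year:", sum(a.active for a in agents))
```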
But this answer just raises new questions. Here are three that leap to mind:
- Where would you get all the data? Measuring how customers behave in specific sets of conditions, such as after two service outages within a three-month period, is much harder than measuring simple behaviors such as conversion rate from first to second order. Do you have the data available, do you have the analysis tools and skills to get the answers from the data, and how do you figure out which conditions are the right ones to analyze in the first place?
- How do you make this simple enough for people to understand it? What sorts of summary measures can you develop that will make it clear what the model is showing, without making it so simple that it’s a meaningless “black box” which no one has any reason to believe? The Customer Experience Matrix is designed specifically to present things simply, but the modeling still will be complicated once people look under the hood.
- Is modeling at this level really necessary? It’s a fun intellectual challenge, but maybe most companies are still at the stage where simple changes can yield big improvements. If so, the fine-tuning that a detailed agent-based model makes possible is not yet necessary: we can get better results from simpler analyses that highlight the big opportunities. But that just raises the next question: what would those analyses look like?
Despite these questions, it's encouraging to know that agent-based modeling will do what I had hoped. It’s one uncertainty I can cross off my list.
*(If you decide to test SeSAm for yourself, be forewarned that the online tutorial is more accurate than the one you can download, and that even the online tutorial uses a method for increasing Age that won’t work because of a known bug. The bug and workaround are described in the BugList section of the SeSAm Wiki in the last entry under “Normal Priority Bugs”.)
Sunday, September 17, 2006
Customers Are Not Widgets: CRM, BPM and Improving the Customer Experience
I’ve been researching software to model the flow of customers through an organization. The thought is to take literally the mantra that “the value of a business is the value of its customer relationships” by modeling the relationships individually and viewing an aggregate result. This is all central to the Client X Client business concept.
The obvious way to model the relationships is to define a sequence of stages and then specify the percentage of customers who graduate from each step to the next. For customers, these stages might be first order, second order, third order, and so on. The technical name for this type of model is a Markov chain, which is defined by www.computeruser.com as “A random process in which the probability that a certain future state will occur depends only on the present or immediately preceding state of the system, and not on the events leading up to the present state.”
Such models are often used in business process management to define the flow of items through a standard process such as an assembly line. In models that are realistic enough to be useful, the chains can incorporate multiple paths, such as detours to repair defects or install options, and the stages themselves may have characteristics such as capacity constraints. Within a CRM system, call center activity is often modeled this way to understand the impact of staff, configuration or rule changes on wait times and costs. So long as the basic Markov condition is met—that every object within a given stage is equally likely to progress to the next stage—these models can work without actually tracing each object individually.
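Here is a minimal Python sketch of such a stage model, with invented stage names and graduation rates, just to make the Markov mechanics concrete.

```python
# Simple Markov-style stage model: customers graduate from stage to
# stage with fixed probabilities (all rates invented for illustration).
stages = ["prospect", "first_order", "second_order", "third_order"]
graduation_rate = {"prospect": 0.10, "first_order": 0.40, "second_order": 0.60}

counts = {s: 0 for s in stages}
counts["prospect"] = 10_000

for _ in range(12):  # simulate 12 periods
    new_counts = dict(counts)
    for stage, rate in graduation_rate.items():
        movers = counts[stage] * rate          # based on the period-start snapshot
        nxt = stages[stages.index(stage) + 1]
        new_counts[stage] -= movers
        new_counts[nxt] += movers
    counts = new_counts

for s in stages:
    print(f"{s:13s} {counts[s]:8.0f}")
# Note the Markov assumption at work: every customer in a stage is
# equally likely to graduate, regardless of how they got there.
```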
But customers have memories, while most widgets do not. So a customer’s future behavior may depend on how they were treated during a call center interaction, and any model of the long-term customer relationship must take this into account. This requires a different, non-Markov modeling technique, in which the past experiences of each customer help to predict the customer’s future behavior. I believe the approach called “agent-based simulation” will allow this, although I haven’t yet found out for sure. But whether or not that turns out to be the proper technique, the important point for now is that any attempt to optimize the customer experience must be able to simulate the impact of each interaction on all future interactions, something a simple Markov chain can never do.
Friday, September 15, 2006
The Path to Customer Optimization
One of the topics that has always fascinated me is how innovations spread through an industry. At the risk of admitting an embarrassing truth, most of the concepts that customer management thought leaders discuss have been around for a long time. I don’t mean the ancient “corner shopkeeper” analogy, but the specific vision of linking customer touch points to a central management system that can optimize each interaction. I’ve been talking about it for at least five years and know other people who were talking about it before then. It’s a sound idea and, yes, part of the reason it’s taken so long to catch on has been the need for technology to advance to the point where it’s possible. The technology isn’t quite there yet, but it’s getting close. So, assuming the technology issue will soon be solved, what will it take for the concept to be deployed in many businesses?
I can envision two starting points. One is a broad but crude deployment of the concept throughout an organization, and the other is a narrow but sophisticated deployment for particular applications. Conventional wisdom suggests the narrow version is more plausible: it’s easier to deploy and should show immediate benefits. You could argue that many of the interaction optimization systems already in place are examples of this. I have in mind automated systems to select appropriate offers on Web sites or in call centers based on individual customer history.
But these applications have been around for quite some time and have not led to broader deployment of the concept. So maybe they are an evolutionary dead end: they lead to more sophisticated versions of themselves within their original environment, but don’t spread to other areas. Think of a long-necked dinosaur that is superbly adapted to grazing at the tops of tall trees, but dies out when the trees are no longer available. The survivors in that case were the primitive little mammals that didn’t do anything particularly well, but were very adaptable.
A crude enterprise-wide deployment might be the equivalent of that primitive mammal. It’s not very impressive to start, but has the right configuration to grow more powerful over time. For interaction management systems, the key is the ability to measure customer behavior across departments. This lets them balance immediate results in one area against future results somewhere else. Departmental systems can’t do that, less because of technical constraints than because department managers have little motivation to consider results outside their domains.
If this theory is correct, the shortcut of starting with departmental systems and working up from there is really a dead-end street. Industry thought leaders will have to take the long way around, first convincing managers of the need for an enterprise perspective and then getting them, largely on faith, to make the substantial investment needed for even a simple enterprise-wide project. This will take some time and the initial results may be meager. But they will improve over time and, once the potential becomes clear, managers will be eager to expand the project into something approaching the grand integrated interaction optimization mechanism we have been talking about for so long.
Thursday, September 14, 2006
A Tale of Two Acquisitions: Alterian and Business Objects
Acquisitions form the never-ending background noise of the technology industry. Some illustrate such obvious trends they are barely worth noting, while others hint at something important. This week brought one of each.
On September 11, Alterian (www.alterian.com) announced its purchase of Nvigorate (www.nvigorate.com), one of the better established developers of marketing resource management systems. In doing so, Alterian confirmed its strategy to expand the scope of its products to become a comprehensive system serving all marketing department needs. Alterian had signaled this intention by purchasing email marketing software vendor Dynamics Direct on May 25. The logic behind this strategy is impeccable—it’s easier to sell new products to existing customers than to sell new customers on existing products. But just about every other major marketing software vendor is taking the same approach, so the competitive advantage is almost nil. What’s worse, plenty of other vendors, including analytics heavyweights like SAS and SPSS and enterprise software giants like SAP and Oracle, already offer product lines that incorporate all of marketing and a great deal else. I remain skeptical that enough buyers want a broad-scope marketing system, but not a broader-scope analytics or enterprise solution, to provide the marketing-only vendors with a healthy long-term business.
On September 13, Business Objects (www.businessobjects.com) announced an agreement to acquire Armstrong Laing Limited, known as ALG Software, a specialist in profitability management and activity-based costing solutions. This can also be read as a conventional product line extension, and it may be nothing more. But it does bring to mind the critical role that accurate profitability measurement plays in optimizing business results. In particular, profit per customer and profit per interaction can only be measured meaningfully with an activity-based approach. Thus, intentionally or not, Business Objects is positioning itself to provide a key building block for effective customer value optimization. This will ultimately make Business Objects much more important to its clients than the simple, and easily duplicated, convenience of combining several marketing management functions within a single offering.
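A toy illustration of why the activity-based approach matters (my own example with invented numbers, not ALG’s method): two customers with identical revenue can have very different profit once shared costs are assigned according to the activities each one actually consumed.

# Hypothetical cost per unit of each activity driver.
rates = {"service_call": 8.00, "shipment": 5.00}

# Two customers with identical revenue but different activity usage.
customers = {
    "A": {"revenue": 1000.0, "service_call": 2,  "shipment": 4},
    "B": {"revenue": 1000.0, "service_call": 25, "shipment": 12},
}

for name, c in customers.items():
    activity_cost = sum(rate * c[driver] for driver, rate in rates.items())
    print(f"Customer {name}: profit = {c['revenue'] - activity_cost:.2f}")

With these numbers, customer A nets $964 while customer B nets $740; spreading the same total cost evenly would report $852 for both and hide the difference entirely.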
Wednesday, September 13, 2006
InformationWeek: IT embraces customers, not CRM
InformationWeek's September 11 issue published its list of Top 250 Innovators, with detailed profiles of the top five. What first struck me was that four of the top five firms focused on improving customer experiences: whether by better marketing (Principal Financial Group), mass customization (Automatic Data Processing), or improved customer-facing operational processes (American Power Conversion and Global Crossing). It seems that companies, and InformationWeek, are taking seriously the supposedly soft-headed notion that customers really matter.
But on reflection, what’s even more interesting is that not one of the customer-oriented projects was a conventional Customer Relationship Management system. What apparently impressed InformationWeek was projects that radically altered business operations in ways that provided real customer benefits.
The American Power Conversion profile in particular described how “In 2004, APC launched a CRM program that included identifying touch points customers have with the company, capturing information on failed interactions, and determining what was needed to fix customer satisfaction at those points. APC set up a measurement system that assesses how the company is doing on each point.” Even though InformationWeek calls it a “CRM program”, this sounds more like what we at ClientXClient talk about with our Customer Experience Matrix: a comprehensive view of all customer interactions.
The article continues by describing how “APC’s Customer Loyalty Framework guides process improvements and sets a road map for system implementations....APC focused on automation points that connect different systems, including the company’s credit management system, Siebel apps and analytics, solutions configurator, and Oracle databases.” In other words, it was integrating all the different systems that really counted. The Siebel CRM system was just one of several participants.
In an environment where some analysts are reporting that traditional CRM is back in fashion, it’s good to see evidence to the contrary. Smart companies are recognizing what matters is the entire customer experience, not just using CRM to build a better call center.
- David Raab