It’s time for another round of “What’s My Product Line?”, the game where we try to guess a company’s product by reading its white paper. Today’s contestant is Syspro (www.syspro.com) and its entry “How to Embrace CRM and Make it Succeed in Your Organization,” available here.
Our first clue is on the cover of the paper itself. The subhead reads “giving small and midsize manufacturers and distributors the visibility required to compete in a highly competitive business climate”. Can you guess who Syspro sells to? Why, yes, it’s small and midsize manufacturers and distributors! Sorry, just one point for that answer: it was too easy.
In the paper proper, Syspro starts with a description of why companies need CRM, naming all the usual suspects: increasingly sophisticated customers, lower switching costs due to the Internet, accelerated product life cycle, and so on. Not very exciting but no clues about Syspro, either.
Next comes a similarly conventional definition of CRM: “a Customer Centric Business strategy to select, manage and capitalize on valuable business (customer) relationships.” But things start to get interesting in the details, which call CRM “a practical philosophy that can transform a company by providing much greater visibility over all individual touch points and communication with customers, vendors, suppliers and prospects.” The stress on “visibility” is somewhat odd, as is the inclusion of vendors and suppliers. Can you see where they’re heading with this?
If not, the heading of the next section is a dead giveaway: “Ownership of Data”. OK, these guys are clearly taking a data-centric view of CRM. Hence the earlier mention of visibility. You might guess they sell data integration technology, but go back to “vendors and suppliers”. The kind of software that includes those as well as customers and prospects is... Enterprise Resource Planning! You win, game over. But we’re only up to page 5 of 15.
The rest of the paper isn’t bad. It describes a balanced approach to CRM deployment, including prioritization, strategies, metrics, processes, training, and implementation review, with particular emphasis on change management. This is followed by a now-predictable stress on integration in general and real-time integration in particular, with an utterly shameless plug for Syspro CRM as “an excellent example of a fully-integrated CRM solution.” The final summary gives a reasonable list of CRM success factors, citing “a well-defined implementation strategy” and “the people factor” in addition to “integration with incumbent ERP solution.”
Notwithstanding its emphasis on data integration, this paper gives a pretty good list of the important considerations in a CRM deployment. One curious omission is a discussion of tailored customer treatments, or indeed of pretty much any marketing (as opposed to sales) activities. This may simply be too much to expect from ERP specialists.
A more serious criticism is Syspro’s assumption that a combined ERP/CRM ensures adequate data integration. This is always the position of suite vendors. But few companies run entirely on components of a single system. Perhaps small and mid-size businesses are more likely to do that than most, yet even they are likely to use products from several software vendors. This argues that the real key to “visibility” is the CRM system’s ability to consolidate data from multiple sources, and not its being part of a particular software suite.
Thanks for playing.
Tuesday, October 31, 2006
Computerworld Relates BPM to Customer Experience Transparency (But Not in Those Words)
According to Computerworld (www.computerworld.com, October 30, 2006, “BPM Is Helping Firms Control Critical Business Processes”), Business Process Management (BPM) has graduated from improving departmental processes to coordinating mission-critical activities across an enterprise. Click here for the article.
That’s nice. But what’s really important from a Customer Experience Management perspective is the article’s claim that BPM provides transparency to both managers and customers about the processes it controls.
Transparency (complete, up-to-the-minute information about the business relationship) is a critical component of improved customer experience. This is partly because customers have come to expect the data to be available, both in a self-service mode and when they ask a company representative for help. But it’s also because transparency opens new opportunities for customers to take charge: changing an order before it’s shipped, rerouting a package that’s on its way, updating account options, and so on. In a world where security and control are lacking on so many levels, gaining even a small measure of power gives customers great psychological comfort and tightens their bond to any organization that is kind enough to do so.
Managers gain from transparency because it lets them understand what’s happening in their organizations, and thereby improves their own ability to make changes. This ties directly to process simulation, another BPM capability highlighted in the article. Building an accurate simulation model is one way to know you truly understand a process, and of course it then lets you estimate the impact of proposed changes without the disruption of actually testing them.
Simulating something as complicated as an entire customer life cycle is incredibly challenging and well beyond the capabilities of most BPM systems. But simulating even discrete portions of the customer experience can be helpful. This does run the danger of optimizing local results in a way that actually harms the greater whole (that is, customer lifetime value). But that’s a risk worth taking in return for greater understanding and control of the customer experience itself.
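For readers who like to see the mechanics, here is a minimal sketch of the kind of simulation I have in mind: a toy Monte Carlo model of an order-handling process, written in Python, with every rate and duration invented for illustration. It isn't tied to any particular BPM product; the point is simply that a model like this lets you estimate the effect of a proposed change before touching the live process.

```python
import random

def simulate_order_process(n_orders=10_000, auto_approve_rate=0.0, seed=42):
    """Toy Monte Carlo model of a simple order-handling process.
    Every rate and duration here is invented purely for illustration."""
    rng = random.Random(seed)
    total_hours = 0.0
    for _ in range(n_orders):
        hours = rng.uniform(0.1, 0.5)             # order entry
        if rng.random() >= auto_approve_rate:     # order routed to manual review
            hours += rng.expovariate(1 / 4.0)     # review queue, mean 4 hours
        hours += rng.uniform(1.0, 3.0)            # pick, pack, and ship
        total_hours += hours
    return total_hours / n_orders

# Estimate the impact of a proposed change (auto-approving 60% of orders)
# without disrupting the live process.
baseline = simulate_order_process(auto_approve_rate=0.0)
proposed = simulate_order_process(auto_approve_rate=0.6)
print(f"Average cycle time: baseline {baseline:.1f} hours, proposed {proposed:.1f} hours")
```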
Monday, October 30, 2006
Does Eloqua Compete with Aprimo and Unica?
Not to be obsessive or anything, but I do want to respond a bit more to the anonymous comment on last Thursday’s post, which said (the comment, that is) “Eloqua is definitely a marketing automation and campaign management tool. They go head to head with likes of Aprimo and Unica and are beating up on them. MRM is not something Eloqua does now, but I would expect them to in the future.”
The anonymity of the comment bothered me a bit, but that’s my fault. The rules of this blog have now changed so only registered members can comment.
My real concern is the substance. As my original post stated, Eloqua and similar products do provide some marketing automation and campaign management functions. I don’t consider them true marketing automation and campaign management tools because I consider the defining characteristics of those tools to include a full-scale marketing database: one which contains a complete view of customer data, including detailed purchase history. Eloqua keeps only a customer profile and communication history and is intended to work with a conventional CRM system that holds the rest.
There is absolutely nothing wrong with this. Eloqua is sold as a system for lead generation and nurturing, not long-term management of on-going customer relationships. Its limited database is perfectly adequate for this purpose. In fact, I think very highly of Eloqua in general. Click here for my recent detailed review of the product.
The comment about Eloqua “beating up” on the likes of Aprimo and Unica is worth its own discussion. Because Eloqua is so different from those other products, there are certainly clients for whom it is more appropriate. To repeat a phrase I’ve heard many times from different vendors, if both products are being considered in the same situation, one of them is in the wrong place. That Eloqua gets chosen in those situations simply indicates it’s the right tool, not that it is better than those other products at their own core functions.
Which brings up a final bit of whining. Why do some software salespeople feel they must claim their product is the best at everything? It’s obviously not true and can only result in their being included in sales situations which they will lose, or, even worse, will fail at should they win. Maybe it’s just that salespeople are competitive by nature. Still, one of the basics of good sales training is learning to distinguish suitable from unsuitable opportunities. I guess some people need an occasional refresher course.
Friday, October 27, 2006
How Different is Small Company CRM?
Continuing with yesterday’s theme of hosted CRM, two more studies have come to my attention:
- Forrester Research (www.forrester.com) published “Comparing The ROI Of SaaS Versus On-Premise Using Forrester's TEI™ Approach”, which concluded that SaaS (software as a service) offerings are clearly cheaper for installations with under 100 users, and competitive even up to 500 users. This study looked at several types of software, not just CRM. Click here for details.
- Access Markets International (AMI) Partners (www.ami-partners.com) published “Mid-Market CRM: Vendor Strategies for a New Frontier”, which found only 35% of mid-sized companies have an existing CRM installation. The study looked at both hosted and on-premise products. Click here for more information.
These studies reinforce the notions I presented yesterday, that (a) smaller firms are more likely to benefit from hosted CRM, (b) smaller firms are a big market opportunity, and (c) smaller firms have different needs from larger organizations. None of these is a particularly fresh or brilliant insight. But they’re worth repeating because too many people still assume that smaller companies really want the same things as larger companies, just at a lower price.
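I haven't seen Forrester's underlying numbers, but the break-even logic behind that kind of finding is easy to sketch. The little model below uses invented placeholder prices, not Forrester's figures; it just shows why per-user subscriptions win at low user counts and lose their edge as the fixed costs of an on-premise installation get spread over more seats.

```python
def on_premise_cost(users, years=3, license_per_user=1_000.0,
                    infrastructure=200_000.0, annual_maintenance_rate=0.20):
    """Illustrative on-premise total cost: licenses + servers/IT staff + maintenance."""
    licenses = users * license_per_user
    return infrastructure + licenses + licenses * annual_maintenance_rate * years

def saas_cost(users, years=3, monthly_per_user=75.0):
    """Illustrative SaaS total cost: per-user subscription only."""
    return users * monthly_per_user * 12 * years

for users in (25, 100, 250, 500):
    onprem, hosted = on_premise_cost(users), saas_cost(users)
    winner = "SaaS" if hosted < onprem else "on-premise"
    print(f"{users:>4} users: on-premise ${onprem:>9,.0f}   SaaS ${hosted:>9,.0f}   cheaper: {winner}")
```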
In fact, I’m starting to think that the hosted firms catering to small and mid-sized businesses are really creating an entirely new category of software. Firms like Eloqua and Vtrenz provide a mix of features—essentially, email and Web campaigns for lead generation and nurturing—that don’t match my usual categories. They are not campaign management or marketing automation or marketing resource management, although they do all of those things to some degree. They are certainly not conventional CRM since they integrate with other sales and service products. If there is on-premise software with a similar mix of functions, I’m not aware of it.
Even though these systems themselves are not CRM products, their presence within an organization would change the boundaries of what CRM is expected to do. This reinforces the notion that hosted CRM in smaller businesses is fundamentally different from CRM in larger companies. I’m not kidding when I say I’m just toying with this idea. It may not make any sense once I think it through. But reconsidering the conventional categories is worthwhile, even if we end up not making any changes.
Thursday, October 26, 2006
Headlines We Thought We'd Never See: "Wal-Mart's Chief Says Chain Became Too Trendy Too Quickly"
I know I've already posted today and there are other things I should be doing. But before yesterday's paper vanishes into recycling, I did want to record that headline from The New York Times Business Section on October 25, 2006 for posterity.
There must be some deep Customer Experience Management lesson in there somewhere, but mostly I think it's funny.
CSO Insights Study Favors On-Demand CRM
I’ve never met Jim Dickie at CSO Insights (www.csoinsights.com) but have always been impressed by the thorough, objective (and concise!) nature of his reports on sales methods and technologies. The most recent, “On-Demand Versus On-Premise CRM: Are There Performance Differences?” lives up to expectations. Amid the contradictory and self-interested vendor claims of success for their products, CSO Insights’ surveys of over 2,500 companies found a clear winner: on-demand systems (a.k.a. hosted, application service provider, or Software as a Service) deploy faster, are more likely to stay within budget, have more satisfied users, and yield more significant improvements than conventional on-premise systems. Nor are the differences minor: almost twice as many on-demand users reported significant improvements in results as on-premise users: 39.8% vs. 20.8%. Margins on other measures are similar.
(The paper is available here from the CSO Insights Web site, although be forewarned that registration is required and I received a follow-up inquiry from Salesforce.com. I don’t recall being asked whether I wanted one. This is not exactly a best practice.)
The report doesn’t discuss whether the on-premise systems (Siebel, Oracle, PeopleSoft, SAP and Microsoft) were deployed in more complex situations or larger companies than the on-demand products (Salesforce.com, Siebel OnDemand and NetSuite). This may help explain the results. But even if this is the case, the results suggest that simpler, less integrated on-demand solutions are adequate for the needs of many smaller, less-demanding firms. This raises the question of whether the many on-premise software vendors who hope to expand into the small to mid-size business (SMB) market will be able to succeed without radically reengineering and simplifying their products. Personally, I doubt it.
On the other hand, this also raises concerns about on-demand vendors, notably Salesforce.com, who are trying to increase the amount of customization and integration they support. Handled properly, this may be possible without compromising results for users who are satisfied with basic implementations. But it could easily result in the sort of “feature bloat” that has long affected conventional software vendors in relatively mature, competitive markets, making the systems less effective for everyone. The good news is that only some of the on-demand vendors will pursue this strategy, so there should always be simpler, cheaper alternatives for companies who don’t need anything more.
Wednesday, October 25, 2006
Ponemon Institute Study Highlights Impact of Data Breaches on Customer Value
The Ponemon Institute (www.ponemon.org) earlier this week released its second annual study on the cost to companies of data breaches. The study, “2006 Annual Study: Cost of a Data Breach”, was sponsored by PGP Corporation (www.pgp.com) and Vontu, Inc. (www.vontu.com), which both sell data protection technologies. It is available here on the PGP Web site.
The study found that just over half the total cost of a breach was due to customer opportunity costs, as measured by increased turnover among existing customers and greater difficulty in acquiring new customers. Actual figures were $98 in customer opportunity costs out of a total $182 cost per lost customer record. Other cost components were $54 per record for out-of-pocket expenses such as customer notifications and $30 per record for employee and contractor time.
From the perspective of the Ponemon Institute, what’s important about this is the total magnitude of the problem: annual cost is “in the billions” when applied to the tens of millions of records breached each year. (The Ponemon study includes its own extrapolation of 23 million notifications in 2005 and mentions a 93 million figure from the Privacy Rights Clearinghouse.) From the perspective of the study’s sponsors, what’s important is that the cost of a breach (average $4.8 million, ranging from $226,000 to $22 million) is much less than what they charge for the technologies to prevent it.
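As a quick sanity check, the arithmetic behind those figures can be reproduced from the numbers quoted above. This is nothing more than a back-of-the-envelope sketch:

```python
# Per-record cost components quoted in the Ponemon study
opportunity_cost = 98   # lost customers plus harder new-customer acquisition
out_of_pocket    = 54   # notifications and similar expenses
labor            = 30   # employee and contractor time

per_record = opportunity_cost + out_of_pocket + labor
print(f"Total per record: ${per_record}")                        # $182
print(f"Customer share: {opportunity_cost / per_record:.0%}")    # about 54%, "just over half"

# Industry-wide extrapolation using the study's 23 million 2005 notifications
records_breached = 23_000_000
print(f"Annual cost: ${per_record * records_breached / 1e9:.1f} billion")  # "in the billions"
```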
From the perspective of Customer Experience Management, what’s important is (a) that Ponemon Institute thought to include the customer impact in their measurements and (b) found customer impact to be a very significant part of the total cost. The first illustrates the growing awareness of customer value measures and the second illustrates why it’s so important to include them when making business decisions.
Tuesday, October 24, 2006
Measuring the Customer Value of Non-Customer Decisions
One of the bedrock propositions of Client X Client’s approach to Customer Experience Management is that ALL business decisions should be measured by their impact on customer lifetime value. That sounds simple and dramatic enough to be memorable, which may be reason enough to say it: after all, a good part of what we’re doing is evangelism, and a simple, clear message is critical for effective communication.
But, marketing aside, are we serious? Of course, it’s possible to argue in a general way that non-customer-facing decisions such as, say, a new factory, result in cost reductions that increase margins and therefore increase customer value. But if that’s the only connection, wouldn't it make more sense to analyze the investment directly in terms of its impact on total profits and leave customers out of it?
I believe there’s more to it than that. A detailed assessment of impact on customer value yields insights into non-customer-facing decisions that are otherwise unavailable. The critical point about customer value is that it’s always based on assumptions about future transactions. Thus, calculating the impact of a new factory on customer value requires estimating not only the change in product costs, but also any change in product mix, product development time, delivery schedules, quality, and a host of other factors.
For example, if the new factory is in China, it may have lower costs but be less nimble due to longer lead times for retooling and shipping. In a quick-changing industry like fashion or consumer electronics, this could significantly reduce future sales. Conversely, if the new factory expands the range of products the company can sell to its customers, their future values increase even though the customers themselves have not changed. Or, if the new factory produces lower quality goods, the cost of that decline in quality includes not just returns and refunds on the defective merchandise itself, but the lost future sales to customers who become annoyed.
It’s possible to build such considerations into a conventional analysis, but they are much more obvious when calculations are based on projections of customer behavior. So we are making more than rhetorical points when we argue for customer value as the measure of all business decisions.
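Here, as a bare-bones sketch, is what such a projection-based comparison might look like. Every input is an assumption I invented for illustration; the value is simply that the margin gain and the retention loss end up in the same calculation.

```python
def customer_lifetime_value(annual_revenue, margin_rate, retention_rate,
                            discount_rate=0.10, years=5):
    """Simple CLV: discounted margin over a multi-year retention curve.
    All inputs are invented assumptions, not real data."""
    clv = 0.0
    for year in range(1, years + 1):
        survival = retention_rate ** year
        clv += annual_revenue * margin_rate * survival / (1 + discount_rate) ** year
    return clv

# Current operation: higher product costs, but nimble delivery and solid quality.
current = customer_lifetime_value(annual_revenue=1_000, margin_rate=0.30, retention_rate=0.85)

# Offshore factory: better margins, but assumed longer lead times and a quality
# dip that shave several points off annual customer retention.
offshore = customer_lifetime_value(annual_revenue=1_000, margin_rate=0.38, retention_rate=0.75)

print(f"CLV per customer, current setup:    ${current:,.0f}")
print(f"CLV per customer, offshore factory: ${offshore:,.0f}")
```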
Monday, October 23, 2006
QualPro Applies Multivariate Testing to Marketing
As loyal readers of this blog know, multivariate testing is one of the techniques I feel are critical to advancing the art of Customer Experience Management. This is because MVT allows systematic analysis of the factors affecting the outcome of a process. Without a systematic approach, it’s impossible to understand or predict the long-term impact of changes in the Customer Experience. This means that each change is essentially an independent test, rather than part of a continuous improvement (that is, optimization) cycle.
QualPro (www.qualproinc.com) is a consultancy specializing in MVT training and consulting. The company’s roots are in the industrial engineering world of statistical process control. It has a delightfully, if unintentionally, retro Web site that describes the math in QualPro’s MVT system as more complex than a “Polaris Missile”. (In case you’ve forgotten or were not born yet, Polaris was the first submarine-based ballistic missile, developed in the late 1950s.)
But time hasn't stood still at QualPro. They have applied MVT to advertising and marketing, for clients including AutoNation, Lowe’s, Staples, and even those new-fangled “cellular” telephones from Cingular Wireless. The benefits are to replace intuition about what works with hard data, and to test dozens or hundreds of variables in the time traditional methods would take to test just a handful. QualPro cites savings in the tens or even hundreds of millions of dollars.
That's real money, even in 2006 dollars.
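For the curious, the core MVT idea behind those results (vary several factors at once across a designed set of test cells, then estimate each factor's main effect) can be illustrated in a few lines. The design below is a tiny full factorial with invented response rates; as noted in the comments, real MVT programs rely on fractional designs to screen far more factors in far fewer cells.

```python
import itertools

# Three marketing factors, each at two levels (0 = control, 1 = test variant).
factors = ["headline", "offer", "free_shipping"]

# A tiny full 2^3 factorial design: every combination of levels is a test cell.
# (Real MVT programs use fractional designs to screen dozens of factors at once.)
design = list(itertools.product([0, 1], repeat=len(factors)))

# Invented response rates (percent), one per cell, standing in for measured results.
responses = [2.0, 2.1, 2.8, 3.0, 2.2, 2.4, 3.1, 3.4]

# Main effect of each factor: average response with it "on" minus with it "off".
for i, name in enumerate(factors):
    on  = [r for cell, r in zip(design, responses) if cell[i] == 1]
    off = [r for cell, r in zip(design, responses) if cell[i] == 0]
    print(f"{name:>14}: {sum(on)/len(on) - sum(off)/len(off):+.2f} points")
```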
Friday, October 20, 2006
How Many Slots Can Fit on the Head of a Pin?
Infor (www.infor.com), an enterprise software vendor which now owns Epiphany marketing software, was one of four winners of the CRM Magazine 2006 CRM Elite Award for its deployment at Interval International, a vacation exchange network. One critical component was Web site personalization: “Instead of delivering 12 offers on two spots on its Web site as in the past, Interval is able to deliver more than 150 offers on 45 locations on the site.”
In Client X Client terminology, those “two spots” which grew to “45 locations” are what we call “slots”. Expanding their number and targeting the messages they contain are critical to increasing the value of each customer experience. That the Infor software allowed Interval International to do this, apparently with little or no staff increase, is indeed prize-worthy.
It also highlights the fact that the number of slots is not fixed. Interval International added 43 slots to its Web site, and presumably could find space for a 44th. How many decals can you fit on a NASCAR racer? How many products can you place in a single movie?
The real question is not how many slots are possible, but how many can be effective. Presumably the limiting factor is customer attention: at some point, customers simply won’t even notice more messages. But attention is not a fixed quantity: if you can make the interaction last longer (without annoying the customer) or more engaging, you have more opportunities to present your messages. And, of course, the quality and relevance of the messages themselves also affects their impact.
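Purely as a thought experiment, here is a toy model of that trade-off. Every parameter is invented: it simply assumes that attention to each successive slot decays and that more relevant messages are more likely to be noticed, then adds up the expected value of a page.

```python
def expected_page_value(n_slots, base_attention=0.30, decay=0.85,
                        value_per_noticed_message=0.50, relevance=1.0):
    """Toy model: expected value of a page carrying n_slots messages.
    Attention to each successive slot decays geometrically; 'relevance'
    scales the chance a message gets noticed. Every parameter is invented."""
    total = 0.0
    for slot in range(n_slots):
        p_noticed = min(1.0, base_attention * relevance * decay ** slot)
        total += p_noticed * value_per_noticed_message
    return total

for n in (2, 12, 45, 150):
    generic  = expected_page_value(n, relevance=1.0)
    targeted = expected_page_value(n, relevance=1.5)
    print(f"{n:>3} slots: generic ${generic:.2f}   targeted ${targeted:.2f}")
```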
I can imagine some interesting theories about all this, but only concrete measurement will give real results. So it’s important when planning one’s slots to build in ways to measure their impact.
Thursday, October 19, 2006
Aberdeen Group Study Raises Question about Marketing Services Providers
Aberdeen Group (www.aberdeen.com) recently released “The Precision Marketing Benchmark Report”. It found that companies making “best in class” use of precision marketing tools—which apparently means treating customers differently based on their value—enjoy higher customer retention, cross-sell and up-sell revenues, and customer satisfaction than companies that don’t.
I haven’t read the study itself, since Aberdeen’s Web site informed me “Our records indicate you are a Technology Services Provider” and therefore not eligible to receive a copy. Since I’m not a TSP, this mostly makes me wonder about the quality of Aberdeen’s research. But it also means I have to rely on an article about the study in destinationcrm.com, which reports what I consider the most interesting finding: “only 16 percent of leaders had invested in marketing service providers or agencies, while 40 percent of other companies had gone this route.”
This is real news. That more successful companies have more tools is expected, and doesn’t prove the tools caused the success. (It could just be that more successful companies have more money for tools.) But you might expect the more successful companies to be more active in customer management in general, and thus also spend more on marketing service providers. That they don’t suggests a significant negative correlation between success and use of marketing services providers.
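Since the article reports only percentages, any measure of the strength of that association requires assuming group sizes. Here is one hypothetical way to gauge it, with sample counts invented purely for illustration:

```python
# Percentages reported by destinationcrm.com; the group sizes are assumed
# purely for illustration, since the article does not give them.
leaders_total, leaders_using_msp = 100, 16    # 16% of "best in class" firms
others_total,  others_using_msp  = 300, 120   # 40% of everyone else

# Odds of using a marketing services provider in each group.
odds_leaders = leaders_using_msp / (leaders_total - leaders_using_msp)
odds_others  = others_using_msp / (others_total - others_using_msp)

# An odds ratio well below 1 indicates a negative association.
print(f"Odds ratio, leaders vs. others: {odds_leaders / odds_others:.2f}")
```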
Again, this doesn’t prove causation: maybe unsuccessful companies can’t afford to buy tools, or (more likely) to hire skilled staff, and thus have to use marketing services providers as a cheaper alternative. But perhaps Aberdeen’s Leslie Ament has it right when she is paraphrased by destinationcrm.com as saying “software provides ample advantage over human services providers”.
I imagine the marketing services providers would disagree. Ironically, Aberdeen itself was just purchased by marketing services provider Harte-Hanks.
Wednesday, October 18, 2006
[x+1] Product is Better than Its White Paper
[x+1] (www.xplusone.com) offers some pretty intriguing technology for online conversion optimization—that is, it automatically identifies customer segments based on behavior and demographics and determines the most productive offer for each segment. Founded in 1999 and previously known as Poindexter Systems, it has an impressive track record and substantial credibility.
Knowing this, I was pleased to see they had published a paper titled “Five Steps to Achieving Precise Customer Targeting Online.” It would be interesting to see a simple explanation of such a complicated product.
But I was disappointed. In fact, this may be the least effective white paper I’ve ever seen.
It doesn’t even deliver on the simple promise of its title. There are five numbered sections, but they aren’t steps in a process that leads somewhere specific. The first two, “Face the Facts” and “What’s Old is New (Again)”, present general background that boils down to the notion that segmentation is important in online marketing. The final three, “Quantify All Goals”, “Integrate Channels” and “Keep a Consistent Picture of the Consumer”, are described by the paper itself as “three key elements” in building successful campaigns, which is not at all the same as being three steps in a process.
Even knowing what [x+1] does, I could barely see the relationship between this paper and their product. Much as I poke fun at vendor papers for being self-serving, it’s even more annoying to see a paper that serves no purpose at all.
And it was poorly written to boot. Here, in its entirety, is the final sentence—the one that’s supposed to clinch the deal and leave you eager for more: “While the banner ad or ecommerce Web site will not, as the TV commercial can, evoke us emotionally and ‘make us cry,’ they can tell us much about how to get there and how to turn the evocation into a transaction.”
Huh?
Tuesday, October 17, 2006
Overcoming a Bad Customer Experience
I received an email this morning from HyperOffice (www.hyperoffice.com) but I don’t know what it was for.
Which is exactly their problem. HyperOffice offers hosted versions of common office functions such as group calendars and project management. I took a detailed look at them a couple of years ago and was impressed by quality design at an extremely low price—about $5 per person per month then, still under $10 per month today. But I chose not to write about them because the service proved very unreliable.
And that was it for HyperOffice and me. A really bad customer experience prevents me from ever looking at them again, or even from reading their advertisements. What actually happens is I see a note from them, remember that their service didn’t work, and go no further. So the contents of the ad itself never register.
I mention this mostly as a reminder of the overwhelming importance of operational experiences—actual use of a product or service—in developing customer attitudes, and therefore long-term customer value. It’s easy for marketers to forget that central point and focus on building the best messages and campaigns. But no amount of promotional genius can overcome the personal knowledge of a customer who has tried a product and had it fail.
Or can it? Maybe a promotion that directly addressed past problems and claimed they had been cured would earn HyperOffice a second look. It might at least get my attention because it raises the topic (product failure) I already associate with them.
The other point to bear in mind is that HyperOffice is contacting me by email. This means the message is nearly free, so the campaign may be profitable if just a tiny fraction of the people respond. And part of my disinterest in HyperOffice is because I have no current need for what they offer. If that changed, I might be willing to take another look. In that case, a timely message from them might well result in a sale.
So the lesson here is a bit more complicated than it seemed. Yes, the operational experience is primary and getting it wrong can be devastating. But if you’ve fixed a problem, smart marketing may recapture some of the lost customers, particularly if you have a low cost way to reach them.
Monday, October 16, 2006
Customer Respect Group: Wish I'd Thought of It First
A press release from The Customer Respect Group (www.customerrespect.com) last week announced results from a survey of how “High-Technology and Computer Industry” companies treat their customers online. (Short answer: a bit better than average except that they’re really bad at answering emails.)
But who is the Customer Respect Group? A look at their Web site shows they visit corporate Web sites, score them on 100 or so elements, and sell the results to anyone who is interested. Quite frankly, this is brilliant. It can’t take more than a couple of hours to review a site and companies don’t have to agree to participate, so it’s both cheap and easy to build a large database of entries. Once you have that, what big company wouldn’t pay $595 to see detailed results for itself and its competitors? Smaller companies can pay for a custom review of their own site, which can again be compared to the larger database. Plus, the reports themselves are eminently publicizable—the media loves surveys—and, indeed, Customer Respect Group issues one press release per month like clockwork.
You’re probably expecting me to say something negative here, but I won’t. Anything that gets companies to pay attention to their customers’ experiences is good. I would only like to see the survey results carried further.
- To other channels. After all, Web sites are just part of a total customer experience. But similar research is much more expensive outside of the Web, so you’d need a different business model. This is what “secret shopper” services do, for example.
- Across multiple channels. Consistency may be overrated but inconsistency is no great joy, either. Companies need to score the consistency of their customer treatments across channels, both in general and for individual customers. This is more manageable than comparing themselves against other companies, so it is a reasonable project to consider.
- Tied to financial value. Measuring yourself against peers and best practices provides some motive for improvement. But (with apologies to Dr. Johnson), nothing concentrates the business mind like lost profit. You can’t tie an overall Web experience score directly to business results because there are too many intervening variables. But you can certainly look at strengths and weaknesses highlighted by a Web site analysis and estimate their impact on customer behavior and, ultimately, on customer value. Again, this is beyond the scope of what Customer Respect Group does but it’s what you need to get real value from their results.
Friday, October 13, 2006
Putting Value on Building Community
I spent a little more time at shop.org yesterday, finishing up my tour of the exhibit hall. One set of vendors that caught my eye was those helping companies engage with customers outside of the normal purchasing process. These include services that let customers write and post product reviews, from Bazaarvoice (www.bazaarvoice.com) and PowerReviews (www.powerreviews.com), and on-line customer training from Powered (www.powered.com).
Such applications are not new, but they highlight the importance of experiences that treat customers as part of a community. This is critical to moving from satisfaction, which a customer can have with many competing vendors, to customer loyalty, which a customer can feel for only one firm in each category. In this regard, it’s worth noting that the many shopping comparison services at the show, which also include vendor and/or product ratings, tend to reduce loyalty by leading buyers to purchase from multiple sources.
Community appears throughout the Customer Experience Matrix. It can be a channel (users group or fan club meeting), an activity (writing a review or receiving training), or an experience objective (build community involvement). This may seem confusing but the meanings are distinct enough that they should be clear in context. We reuse the term on purpose, to highlight something that is often overlooked because it doesn’t generate direct revenue and isn’t demanded by customers. Even the official “Community Affairs” department in most organizations is more likely to focus on relationships with external stakeholders (politicians, neighbors, etc.) than the company’s own customers.
This is where the ability to measure the long-term value of each interaction becomes important. Most managers would assume that customers who write product reviews or benefit from online training are more likely than others to purchase again. But unless managers know how much these activities change future value, they can't determine what it’s worth to invest in them. Since all opportunities within an organization compete for resources, being able to quantify the value of community is essential to supporting it.
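Roughly speaking, such a quantification might look like the sketch below. It assumes you can compare repurchase rates between customers who wrote reviews and a matched group who didn't, and then translate the difference into margin; every number is invented.

```python
def community_activity_value(participants, baseline_repurchase_rate, observed_lift,
                             avg_order_value, margin_rate, orders_per_year=2, years=3):
    """Rough margin value of an activity (e.g., writing a review) that lifts repurchase rates.
    All inputs are invented; real figures would come from comparing matched groups of
    customers who did and did not perform the activity."""
    extra_rate = baseline_repurchase_rate * observed_lift
    extra_orders = participants * extra_rate * orders_per_year * years
    return extra_orders * avg_order_value * margin_rate

# Suppose 5,000 customers wrote reviews and their repurchase rate runs 10% higher.
value = community_activity_value(participants=5_000, baseline_repurchase_rate=0.40,
                                 observed_lift=0.10, avg_order_value=80.0, margin_rate=0.25)
print(f"Estimated incremental margin from review writers: ${value:,.0f}")
```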
Thursday, October 12, 2006
Note to Self: Keep It Simple
I spent several stimulating hours yesterday at the shop.org conference on Internet retailing. Given my current interests, I paid particular attention to vendors who test and optimize customer treatments. Some worth looking at are Certona (www.certona.com), ForeSeeResults (www.foreseeresults.com), Offermatica (www.offermatica.com), optimost (www.optimost.com) and Usability Sciences Corporation (www.usabilitysciences.com).
One thing I found is that even the most sophisticated vendors work primarily with short-term data. Most look only at results within a given Web session and do not capture the identity of individual customers. Even companies that assess satisfaction with product performance and other non-Web experiences measure the value of single transactions without considering the impact on future behavior.
This isn’t because the vendors are unaware of long-term issues. These are very smart people. I suspect that many would be delighted to tackle the intellectual challenges of measuring and maximizing the long-term impact of each customer experience.
There is a simpler reason that the vendors don’t do it: their clients don’t need it. Most Internet retailers still haven’t applied basic techniques to improve their results, such as rigorous offer and site testing or tailoring search results to customer preferences. The benefits from such improvements are still so large that most companies don’t need anything more sophisticated to increase their profits. True, advanced approaches might yield still greater improvements. But, the extra effort required to execute and explain such approaches really isn’t worth the trouble. And if the clients won’t bother, neither will the vendors.
As someone who loves sophisticated approaches, I’ll admit I find this disappointing. But it also reinforces my interest in finding the simplest possible ways to implement Customer Experience Matrix concepts. If the Matrix is as powerful as it seems, it should deliver benefits even in primitive form. And the simpler the initial approach, the easier it will be to understand, sell and deploy.
I do have some ideas for how to do this, but they're not quite ready for public consumption.
Wednesday, October 11, 2006
Vtrenz Gives a Solid Overview of Relationship Marketing
The question with vendor white papers is not whether they are self-serving—that’s inevitable—but whether they are useful as well. The best you can usually hope for is that the information will be accurate and relevant, even if not objective or complete. So it came as a pleasant surprise to find that marketing automation vendor Vtrenz’ (www.vtrenz.com) series “Effective Relationship Marketing” presented a fairly comprehensive overview of its topic, including several points with no obvious direct relationship to Vtrenz products.
The Vtrenz series is long for this sort of thing: three 16-page papers, one each on nurturing, growing and retaining customers. The papers provide considerable detail, including descriptions of basic concepts, principles, tactics, score cards and audit questions. Presentation is largely non-commercial, except that the final question in each audit is whether you have evaluated Vtrenz’ product. A sly joke, perhaps?
The papers take a largely conventional approach to the topic, which is what you want in this sort of overview. But a few points caught my eye.
- satisfaction is not the same as loyalty. Customers can be perfectly satisfied with a product but are not loyal unless they feel some reason to choose it over competitors. Others have made this point before, but it is still relatively new.
- product purchase and product consumption are separate steps in the customer life cycle. This echoes the important Customer Experience Management insight that all interactions with a company impact customer behavior, not just marketing contacts.
- customer life stages, such as first-time buyer and repeat buyer, are distinct from customer value segments. As Vtrenz puts it, “customers within a single rung of the [life cycle] ladder may have vastly different value and profitability.” I ignored this distinction for the sake of simplicity in my October 6 post, which described the relationship between customer segments and purchase activities in the Customer Experience Matrix. It’s important to recognize life stage, customer value and other distinctions such as attitudinal segments so that business rules can select appropriate treatments during each interaction.
- Vtrenz describes Customer Relationship Management systems as providing a “sales and service view of the customer.” The more common phrase is “sales, service and marketing.” This does reflect Vtrenz’ self-interest, since it sells only the marketing component of the sales-service-marketing troika. Vtrenz describes its product as “enterprise marketing management” software, and defines EMM as “automating such marketing activities as data management and analytics, creative development and file sharing, and operational execution.”
There’s a reasonable case to be made that marketing management is truly and properly distinct from CRM systems. But I have more trouble placing “operational execution” within the distinct marketing universe, because so many marketing contacts are integrated with sales and service activities. I understand why Vtrenz software includes execution: its customers are marketers and they want to create their own emails, microsites and online surveys. Fair enough. Other marketing automation vendors include the same features for the same reasons. Still, I’d rather see Vtrenz acknowledge that marketing execution is part of a larger CRM solution than try to define away the relationship.
But let’s not be too critical. This is a minor flaw—if it’s a flaw at all—in an otherwise excellent series. It’s worth a look.
Tuesday, October 10, 2006
Building a Customer Experience Simulation, Part II
Yesterday’s entry showed that a simple simulation model, predicting only whether customers make a purchase in each time period, could generate meaningful statistics about customer value and status. Such a model can be generated with a few lines of code in an agent-based modeling system. (I did just this using StarLogo TNG, available for free from http://education.mit.edu/starlogo-tng. Building something similar in a conventional programming language or process simulation tool would have been vastly more difficult. The current StarLogo TNG is a beta release that was unstable on my computer, but a final release is due before the end of the year and may be worth a second look.)
The only variable required in the model is the probability of each individual making a purchase. Where would this come from?
In the simplest possible model, the probability value would be a constant: say, a 50% chance of any customer making a purchase in any month. But that’s probably a bit too simple. In most businesses, recent buyers are more likely than others to purchase again. Since our simple model can generate Recency and Frequency statistics, it’s reasonable to incorporate these into the probability function. All you need is some historical data that shows how actual purchases relate to those values. The result would be a much more realistic simulation of customer behavior.
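A minimal sketch of what that probability function might look like, assuming you have historical purchase rates tabulated by Recency and Frequency buckets (the bucket boundaries and rates below are invented):

```python
# Hypothetical historical purchase rates by (recency bucket, frequency bucket).
# Recency = months since last purchase, Frequency = lifetime purchases to date.
HISTORICAL_RATES = {
    ("0-3", "1"): 0.25, ("0-3", "2+"): 0.40,
    ("4-12", "1"): 0.10, ("4-12", "2+"): 0.18,
    ("13+", "1"): 0.03, ("13+", "2+"): 0.06,
}

def rf_bucket(recency_months: int, frequency: int) -> tuple:
    """Map raw recency/frequency values to the buckets used in the table."""
    if recency_months <= 3:
        r = "0-3"
    elif recency_months <= 12:
        r = "4-12"
    else:
        r = "13+"
    return r, "1" if frequency <= 1 else "2+"

def purchase_probability(recency_months: int, frequency: int) -> float:
    """Monthly purchase probability looked up from historical behavior."""
    return HISTORICAL_RATES[rf_bucket(recency_months, frequency)]

print(purchase_probability(2, 5))   # recent, frequent buyer -> 0.40
print(purchase_probability(18, 1))  # long-lapsed single buyer -> 0.03
```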
This model would be quite useful. Companies could seed it with the actual number of customers in each RF cell and get accurate estimates of future purchases by period. They could assess the value of promoting each RF cell, taking into account both the immediate response rate and later purchases by the “reactivated” customers. Changes in other assumptions would show the results of variations in customer behavior or in acquisition volumes. Additional statistics such as revenue, promotion expenses and profits could be derived from the same base calculations, enabling simple forms of customer value optimization.
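Here is a rough sketch of that projection, again in Python. The seed counts and the stand-in probability formula are invented; in practice the formula would come from the historical lookup sketched above:

```python
import random

random.seed(1)

# Hypothetical starting population: customer counts in each (recency, frequency) state.
population = []
seed_counts = {(1, 1): 500, (1, 4): 200, (6, 2): 800, (18, 1): 1500}
for (recency, frequency), count in seed_counts.items():
    population += [{"recency": recency, "frequency": frequency} for _ in range(count)]

def monthly_probability(recency: int, frequency: int) -> float:
    """Stand-in for the historical RF lookup; coefficients are invented."""
    base = 0.30 if recency <= 3 else 0.12 if recency <= 12 else 0.04
    return min(base * (1 + 0.1 * (frequency - 1)), 0.95)

# Roll the population forward month by month, counting simulated purchases.
for month in range(1, 13):
    purchases = 0
    for customer in population:
        if random.random() < monthly_probability(customer["recency"], customer["frequency"]):
            purchases += 1
            customer["recency"] = 1       # just bought
            customer["frequency"] += 1
        else:
            customer["recency"] += 1      # one more month without a purchase
    print(f"Month {month:2d}: {purchases} projected purchases")
```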
All from modeling a single cell!
Of course, a real Customer Experience Matrix has many cells. The primary difference in a multi-cell model is that the probability formulas are more complicated. In particular, predictions for different activities will be interrelated. For example, the probability of a customer requesting a refund is likely to be based on recent purchases as well as previous refund behavior.
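An interrelated probability function might look something like this sketch, where the coefficients are purely hypothetical:

```python
def refund_probability(purchases_last_3_months: int, prior_refunds: int) -> float:
    """Hypothetical interrelated formula: more recent purchases mean more
    opportunities to request a refund, and past refunders refund more often."""
    base = 0.02 * purchases_last_3_months   # exposure from recent purchases
    habit = 0.05 * min(prior_refunds, 3)    # capped effect of refund history
    return min(base + habit, 0.60)

print(refund_probability(purchases_last_3_months=2, prior_refunds=0))  # 0.04
print(refund_probability(purchases_last_3_months=2, prior_refunds=2))  # 0.14
```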
Yet a multi-cell simulation still shares the basic features of a one-cell model: it works at the individual level and accumulates results by period. So a single-cell model is a good place to start when designing a Customer Experience simulation. And, as we’ve just seen, it offers considerable value of its own.
Monday, October 09, 2006
Building a Customer Experience Simulation
What’s the simplest possible Customer Experience Matrix?
You probably haven’t spent a lot of time wondering that, and I don’t generally lose sleep over it myself. But as we at Client X Client start building more sophisticated simulation models, defining the simplest possible case gives us an important reference point.
Of course, the question has an obvious answer: the simplest version of any matrix is a single cell. In a Customer Experience Matrix, this would describe one activity in one channel. So the real question is whether a one cell model is too simple to be useful.
Let’s assume the activity our cell monitors is purchasing. So our single cell matrix would predict purchases for a group of customers over time. This is certainly a useful thing to know.
Because we’re simulating results over time, our single cell model generates one data point for each time period. Each represents whether or not the customer made a purchase. To do this, our model needs the following functions: create a number of customers; calculate a purchase probability for each customer for each period; apply the probability to determine whether or not a (simulated) purchase was made; and store the result. The output can be imagined as a stream of ones and zeros.
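To show how little code this takes, here is a minimal Python sketch of those four functions. The population size, number of periods, and the constant 20% purchase probability are arbitrary choices for illustration:

```python
import random

random.seed(42)

NUM_CUSTOMERS = 1000
NUM_PERIODS = 24
PURCHASE_PROBABILITY = 0.20  # constant for now; later refined by behavior

# history[c] is the stream of ones and zeros for customer c:
# 1 = purchased in that period, 0 = did not.
history = []
for _ in range(NUM_CUSTOMERS):                              # create customers
    stream = []
    for _ in range(NUM_PERIODS):                            # one data point per period
        purchased = random.random() < PURCHASE_PROBABILITY  # apply the probability
        stream.append(1 if purchased else 0)                # store the result
    history.append(stream)

print(history[0])  # e.g. [0, 1, 0, 0, 1, ...]
```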
Having a stream of numbers lets us use those numbers to calculate cumulative statistics. The obvious ones are total purchases to date and time since last purchase. Anyone trained in direct marketing will immediately recognize those as Recency and Frequency: two thirds of the classic RFM measure. If we had recorded the amount of each purchase, we could calculate M (Monetary Value) too. We can also use the same data to assign customers to the four basic behavioral segments: prospects (never purchased), new customers (first purchase last period), active customers (multiple purchases including one last period), and lapsed customers (one or more purchases, but none last period).
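Each of those statistics can be read straight off the stream of ones and zeros. A small Python sketch, applied here to a made-up example stream but equally applicable to each simulated customer from the sketch above:

```python
def recency(stream: list) -> int:
    """Periods since the last purchase; returns len(stream) if the customer never bought."""
    for periods_ago, flag in enumerate(reversed(stream), start=1):
        if flag:
            return periods_ago
    return len(stream)

def frequency(stream: list) -> int:
    """Total purchases to date."""
    return sum(stream)

def segment(stream: list) -> str:
    """Assign one of the four basic behavioral segments from the purchase stream."""
    f = frequency(stream)
    if f == 0:
        return "prospect"
    if stream[-1] == 1:
        return "new customer" if f == 1 else "active customer"
    return "lapsed customer"

stream = [0, 0, 1, 0, 1, 1, 0]  # example stream; in practice, one per simulated customer
print(recency(stream), frequency(stream), segment(stream))  # 2 3 lapsed customer
```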
So it turns out that even a single cell simulation model can estimate customer lifetime purchases, RFM codes and behavior segments. This is enough to provide a useful starting point for a system design.
And there’s more. I’ll continue this thread tomorrow.
Friday, October 06, 2006
Marketing Programs and Customer Experience Management
Bear with me on this one. At the risk of seeming pedantic, I want to discuss the difference between the purchase process and the customer life cycle.
The purchase process is a sequence of events, such as awareness, trial and repurchase. The customer life cycle is a set of states, such as new customer, active customer, and lapsed customer. One describes concrete activities, such as going to a store or making a payment; the other describes artificial constructs such as customer segments or relationship types. Customer Experience Management deals with the purchase process, while traditional marketing deals with the customer life cycle. In short, they are very different.
This is why it’s wrong to consider Customer Experience Management as just another term for marketing. You cannot directly map purchase process categories onto the customer life cycle. In fact, they are orthogonal: customers in (almost) any stage of the life cycle can participate in (almost) any part of the purchase process. For example, new, active and lapsed customers all could call for technical support on a previously purchased product. Each customer is part of a different segment and each should be treated differently to optimize long-term value, yet they are all performing the same purchase process activity.
This has important practical implications. One is that operational excellence is not enough: companies must also differentiate among their customers during operational contacts so each customer can be treated appropriately. This means that operational systems and processes must identify customers, determine and execute the right treatments, and track results to allow continuous improvement. It also means that the treatments themselves, such as, say, a retention program, must be deliverable across different channels and contact types. And it means that marketing plans, which are typically organized around programs and customer segments, must integrate both outbound and operational contacts to ensure optimal use of resources.
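As a simple illustration of what "determine and execute the right treatments" could mean inside an operational system, here is a sketch of a rule table keyed on segment, channel and activity. The segments, channels and offers are invented; the point is only that the same activity triggers different treatments for different customers:

```python
# Hypothetical treatment rules keyed on (customer segment, channel, activity).
TREATMENT_RULES = {
    ("new customer", "call center", "technical support"): "offer onboarding guide",
    ("active customer", "call center", "technical support"): "offer accessory cross-sell",
    ("lapsed customer", "call center", "technical support"): "offer win-back discount",
}
DEFAULT_TREATMENT = "standard service script"

def select_treatment(segment: str, channel: str, activity: str) -> str:
    """Pick a treatment for this contact; fall back to the default if no rule matches."""
    return TREATMENT_RULES.get((segment, channel, activity), DEFAULT_TREATMENT)

# Three customers performing the same purchase-process activity get different treatments.
for seg in ("new customer", "active customer", "lapsed customer"):
    print(seg, "->", select_treatment(seg, "call center", "technical support"))
```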
The Customer Experience Matrix, being based in Customer Experience Management, is organized around purchase process stages. (Just to refresh your memory, the Matrix has two dimensions: process stages across the top and channels down the side. Each cell in the Matrix represents an interaction.) But Client X Client has always recognized the importance of life cycle stages. We usually express this by describing different versions of the Matrix for different customer segments. Each version has its own business rules, financial values and process flows.
In formal terms, however, the Customer Experience Matrix incorporates life cycle stages by defining a set of potential objectives for each interaction. These objectives, such as providing information or expanding the customer relationship, can be mapped precisely to the objectives that marketers assign to customers based on their stage in the customer life cycle. Treatments can be scored against those objectives and matched to customers by comparing the treatment objectives with customer life cycle objectives. In theory, this allows a system to select treatments without an intervening layer of customer segments. In practice, most companies will still use segments to simplify matters.
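Here is a minimal sketch of that matching logic, assuming treatments and customers both carry weights on a shared set of objectives (all the weights below are invented):

```python
# Treatment objectives on a shared vocabulary; weights are hypothetical.
TREATMENTS = {
    "product tutorial": {"provide information": 0.9, "expand relationship": 0.2, "retain": 0.3},
    "upgrade offer":    {"provide information": 0.2, "expand relationship": 0.9, "retain": 0.4},
    "loyalty bonus":    {"provide information": 0.1, "expand relationship": 0.3, "retain": 0.9},
}

def score(treatment_objectives: dict, customer_objectives: dict) -> float:
    """Score a treatment by how well its objectives line up with the customer's."""
    return sum(weight * customer_objectives.get(obj, 0.0)
               for obj, weight in treatment_objectives.items())

def best_treatment(customer_objectives: dict) -> str:
    """Select the treatment whose objectives best match this customer's objectives."""
    return max(TREATMENTS, key=lambda name: score(TREATMENTS[name], customer_objectives))

# A new customer's life-cycle objectives stress information; a mature customer's stress retention.
print(best_treatment({"provide information": 0.8, "expand relationship": 0.3, "retain": 0.1}))
print(best_treatment({"provide information": 0.1, "expand relationship": 0.2, "retain": 0.9}))
```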
Maybe this all still seems pedantic, and if so I apologize. But I’ve been trying for some time to articulate the relationship between marketing programs and Customer Experience Management. Now that I’ve finally managed to do it, I’m eager to share.
Thursday, October 05, 2006
Exact Software Thinks You Should Buy Their Product (But Says Something Interesting, Anyway)
“CRM has sustained success through its ability to help companies sell but, only focuses on a portion of the customer relationship, not taking into account pervasive business processes that can affect customers.”
I couldn’t have said it better myself, although I might have been more careful about the commas. But that is the opening sentence of “Customer Relationship Management: Putting Customers at the Center of the Business,” a white paper from Exact Software (www.exactamerica.com). Even more to my liking, the paper continues, “using current CRM point solutions will not build and manage an entire customer experience.”
But it soon shifts to a narrow focus. “Building upon and improving CRM involves recognizing and linking all business processes, including workflow, documents, employee and client communications, departments and data storage, to better monitor and look after the customer relationship….The most important element of a customer-centric solution is workflow.”
Really? Workflow? Who knew?
By now you’ve guessed what Exact is selling. And you’re right: the white paper is collateral for e-Synergy, a “Web-based collaboration platform” that “integrates and consolidates corporate data into a single database, allowing all members of the value chain to view and modify information based on their access and roles within the system.” (Exact also sells products for manufacturing, field service, accounting, and analytics, but not a conventional CRM system.)
Despite the self-serving motive, Exact’s paper does present an intriguing vision of giving customers a clear view of all business processes that affect them, both directly and indirectly. This isn’t a complete solution: beyond exposing those processes, you need to measure and manage them to produce optimal outcomes. But given how often customers are viewed as purely passive recipients of business “treatments,” it’s worth a reminder that they can and should be actively engaged in creating their own experience.
Wednesday, October 04, 2006
KNOVA Wins at Buzzword Bingo
This was going to be another snarky deconstruction of a press release, based on an announcement from knowledge management vendor KNOVA (www.knova.com). Indeed, some sarcasm may still be in order, given that KNOVA has decided it delivers something called an “Intelligent Customer Experience”. I’m sorry but that’s not an informative label. Nor are matters clarified by learning that KNOVA brings “the power of Web 2.0 to the enterprise.” And there is no obvious connection among KNOVA 7’s highlighted features of “personalized Microsites, new actionable analytics, a new Visual Search Manager and collaborative authoring.”
But in reading beyond the first paragraph—something I only did in service to you, dear reader—I found KNOVA actually makes a reasonable case for these claims. The connecting thread, and what justifies the Web 2.0 label, is the notion of community collaboration. KNOVA argues convincingly that it has applied this to customer service, which is a good idea and rather impressive when you think about it.
What actually softened my attitude, however, was a previous glance at the Products Overview tab on KNOVA’s Web site. This is always the first place I look, since it tends to give the most specific information about what a company is actually selling. It met expectations with a list of modules: contact center, self-service, forums, field service, guided selling and knowledge desk. This gave a fairly clear notion of what they offer.
The pivotal sentences came a bit later, after listing several challenges that face support organizations: “There are fundamentally only three ways of meeting these challenges. You can make your contact center more effective, you can help customers help themselves and each other, or you can make your products require less service.”
Now that’s a true customer experience management attitude, and therefore dear to my heart. It will come as no surprise that KNOVA claims to do all three. The basis of the first two claims is fairly obvious, since they sell knowledge management systems to support customer service. The claim for product improvement relates to identifying common complaints and sending reports back to product development staff. It’s a bit of a stretch but within reason.
So hats off to you, KNOVA: it seems there’s truly some there there. Good luck with the launch and I’ll keep you in mind.
Tuesday, October 03, 2006
DecisionPower Offers Agent-Based Modeling for Marketers
My continuing quest for an agent-based modeling system has led to DecisionPower Inc. (www.decisionpower.com), which claims—accurately, so far as I know—to offer the only agent-based modeling application tailored specifically for marketing. My product review should appear shortly in DM News (www.dmnews.com) and is already in the archive at http://archive.raabassociatesinc.com/2006/10/decisionpower-inc-marketsim/, so you can read the details there. What’s relevant here is whether DecisionPower’s product, called MarketSim, can create a customer experience model.
The short answer is no. MarketSim predicts market share among competitors in a single product category. It models the purchase decisions of individual consumers (the agents) for that category, and aggregates the results to estimate total sales for each product. This is not the same as modeling interactions between consumers and one company, which could span multiple products and interactions other than purchases.
The long answer is maybe. MarketSim inputs include advertising, channels, pricing, and in-store displays, each of which is part of the customer experience. It can also take into account external factors such as the weather and business conditions, which are part of “context” in customer experience analysis. Furthermore, it should be possible to extend a MarketSim model to include other experience components, such as customer service. In MarketSim terms, these would most likely be treated as influences on future purchases, since everything in MarketSim is aimed at calculating purchase propensity. But that might be an acceptable way to build a customer experience model, so long as you could also gather other statistics such as cost of goods and customer service expense. MarketSim doesn’t appear to calculate those, but perhaps it could be modified to do so.
I still don’t know whether MarketSim is the best tool for customer experience models. It’s expensive and complex and takes considerable training to operate. Restructuring it might actually be more work than starting with a general purpose agent-based modeling system and building from scratch. On the other hand, MarketSim has an impressively detailed model of the awareness-trial-use-repurchase cycle that is a major part of the Customer Experience Matrix. It seems a shame to reinvent that particular wheel.