It seems to be open season on Fred Reichheld. For many years, his concept of Net Promoter Score as a critical predictor of business success has been questioned by marketers. The Internets are now buzzing with a recent academic study “A Longitudinal Examination of Net Promoter and Firm Revenue Growth” (Timothy L. Keiningham, Bruce Cooil, Tor Wallin Andreassen, & Lerzan Aksoy, Journal of Marketing, July 2007) that duplicated Reichheld’s research but “fails to replicate his assertions regarding the ‘clear superiority’ of Net Promoter compared with other measures in those industries.” See, for example, comments by Adelino de Almeida, Alan Mitchell, and Walter Carl. I didn’t see an immediate rebuttal on Reichheld’s own blog, although the blog does contain responses to other criticisms.
There’s a significant contrast between the Net Promoter approach – focusing on a single outcome measure – and the Balanced Scorecard approach of viewing multiple predictive metrics. I think the Balanced Scorecard approach, particularly if cascaded down so that individuals see the strategic measures they can directly affect, makes a lot more sense.
This is the blog of David M. Raab, marketing technology consultant and analyst. Mr. Raab is founder and CEO of the Customer Data Platform Institute and Principal at Raab Associates Inc. All opinions here are his own. The blog is named for the Customer Experience Matrix, a tool to visualize marketing and operational interactions between a company and its customers.
Wednesday, July 11, 2007
Tuesday, July 10, 2007
The Performance Power Grid Doesn't Impress
Every so often, someone offers to send me a review copy of a new business book. Usually I don’t accept, but given my current interest in performance management techniques, a headline touting “Six Reasons the Performance Power Grid Trumps the Balanced Scorecard” was intriguing. After all, Balanced Scorecard is the dominant approach to performance management today—something that becomes clear when you read other books on the topic and find that most have adopted its framework (with or without acknowledgement). So it seemed worth looking at something that claims to supersede it.
I therefore asked for a copy of The Performance Power Grid by David F. Giannetto and Anthony Zecca (John Wiley & Sons, 2006), and promised to mention it in this blog.
The book has its merits: it’s short and the type is big. On the other hand, there are no pictures and very few illustrations.
As to content: I didn’t exactly disagree with it, but nor did I find it particularly enlightening. The authors’ fundamental point is that organizations should build reporting systems that focus workers at each level on the tasks that are most important for business success. Well, okay. Balanced Scorecard says the same thing—the authors seem to have misinterpreted Balanced Scorecard to be about non-strategic metrics, and then criticize it based on that misinterpretation. The Performance Power Grid does seem to focus a bit more on immediate feedback to lower-level workers than Balanced Scorecard, but a fully-developed Balanced Scorecard system definitely includes “cascading” scorecards that reach all workers.
What I really found frustrating about the book was a lack of concrete information on exactly what goes into its desired system. Somehow you pick your “power drivers” to populate a “performance portal” on your “power grid” (there’s a lot of “power” going on here), and provide analytics so workers can see why things are happening and how they can change them. But exactly what this portal looks like, and which data are presented for analysis, isn’t explained in any detail.
The authors might argue that the specifics are unique to each company. But even so, a few extended examples and some general guidelines would be most helpful. The book does actually abound in examples, but most are either historical analogies (Battle of Gettysburg, Apollo 13) or extremely simplistic (a package delivery company focusing on timely package delivery). Then, just when you think maybe the point is each worker should focus on one or two things, the authors casually mention “10 to 15 metrics for each employee that they themselves can affect and are responsible for.” That’s a lot of metrics. I sure would have liked to see a sample list.
On the other hand, the authors are consultants who say their process has been used with great success. My guess is this has less to do with the particular approach than that any method will work if it leads companies to focus relentlessly on key business drivers. It never hurts to repeat that lesson, although I wouldn’t claim it’s a new one.
Monday, July 09, 2007
APQC Provides 3 LTV Case Studies
One of the common criticisms of lifetime value is that it has no practical applications. You and I know this is false, but some people still need convincing. The APQC (formerly the American Productivity and Quality Center) recently published “Insights into Using Customer Valuation Strategies to Drive Growth and Increase Profits from Aon Risk Services, Sprint Nextel, and a Leading Brokerage Services Firm,” which provides three mini-case histories that may help.
Aon created profitability scorecards for 10,000 insurance customers. The key findings were variations in customer service costs, which had a major impact on profitability. The cost estimates were based on surveys of customer-facing personnel. Results were used for planning, pricing, and to change how clients were serviced, and have yielded substantial financial gains.
Sprint Nextel developed a lifetime value model for 45 million wireless customers, classified by segments and services and using “a combination of historical costs, costing assumptions, cost tracing techniques, and activity-based allocations”. The model is used to assess the financial impact of proposed marketing programs and for strategic planning.
The brokerage firm also built a lifetime value model for customer segments, which were defined by trading behaviors, asset levels, portfolio mix and demographics. Value is determined by the products and services used by each segment, and in particular by the costs associated with different service channels. The LTV model is used to evaluate the three-year impact of marketing decisions such as pricing and advertising.
The paper also identifies critical success factors at each company: senior management support, organizational buy-in and profitability analysis technology at Aon; model buy-in at Sprint Nextel; and the model, profitability analysis and customer data at the brokerage firm.
My own take is that this paper reinforces the point that lifetime value is useful only when looking at individual customers or customer segments: a single lifetime value figure for all customers is of little utility. It also reinforces the need to model the incremental impact of different marketing programs, or of any change in the customer experience. Although the Aon and brokerage models are not described in detail, it appears they take expected customer behaviors as inputs and then calculate the financial impact. This is less demanding than having a model forecast the behavior changes themselves. Since it clearly delivers considerable value on its own, it’s a good first step in a larger project towards a comprehensive lifetime value-based management approach.
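To make that distinction concrete, here is a minimal sketch of the kind of segment-level model the paper seems to describe: expected customer behaviors go in, a financial value comes out. The segment names, behavior figures, and discount rate are all invented for illustration, not taken from the case studies.

```python
# Hypothetical segment-level lifetime value sketch. Expected behaviors
# (revenue, margin, service cost, retention) are inputs; the model only
# translates them into a discounted financial value.

def segment_ltv(annual_revenue, margin_rate, service_cost, retention_rate,
                discount_rate=0.10, years=3):
    """Discounted value of one customer over a fixed horizon."""
    value = 0.0
    survival = 1.0  # probability the customer is still active
    for year in range(years):
        cash_flow = annual_revenue * margin_rate - service_cost
        value += survival * cash_flow / (1 + discount_rate) ** year
        survival *= retention_rate
    return value

# Invented segments: the behavior assumptions differ, the formula does not.
segments = {
    "active_trader": dict(annual_revenue=900, margin_rate=0.45,
                          service_cost=120, retention_rate=0.85),
    "buy_and_hold":  dict(annual_revenue=250, margin_rate=0.60,
                          service_cost=40, retention_rate=0.95),
}

for name, behavior in segments.items():
    print(name, round(segment_ltv(**behavior), 2))
```

Note that forecasting how a marketing program would change the retention or revenue inputs is a separate, harder problem; this sketch only prices the behaviors you feed it, which matches the “less demanding” approach described above.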
Friday, July 06, 2007
Sources of Benchmark Studies
Somehow I found myself researching benchmarking vendors this morning. Usually I think of the APQC, formerly American Productivity and Quality Center, as the source of such studies. They do seem to be the leader and their Web site provides lots of information on the topic.
But a few other names came up too (I’ve excluded some specialists in particular fields such as customer service or health care):
Kaiser Associates
Reset Group (New Zealand)
Resource Services Inc.
Best Practices LLC
MarketingSherpa
MarketingProfs
Cornerstone (banking)
Some of these simply do Web surveys. I wouldn’t trust those without closely examining the technique because it’s too easy for people to give inaccurate replies. Others do more traditional in-depth studies. The studies may be within a single organization, among firms in a single industry, or across industries.
Thursday, July 05, 2007
Is Marketing ROI Important?
You may have noticed that my discussions of marketing performance measurement have not stressed Return on Marketing Investment as an important metric. Frankly, this surprises even me: ROMI appears every time I jot down a list of such measures, but it never quite fits into the final schemes. To use the categories I proposed yesterday, ROMI isn’t a measure of business value, of strategic alignment, or of marketing efficiency. I guess it comes closest to the efficiency category, but the efficiency measures tend to be simpler and more specific, such as a cost per unit or time per activity. Although ROMI could be considered the ultimate measure of marketing efficiency, it is too abstract to fit easily into this group.
Still, my silence doesn’t mean I haven’t been giving ROMI much thought. (I am, after all, a man of many secrets.) In fact, I spent some time earlier this week revisiting what I assume is the standard work on the topic, James Lenskold’s excellent Marketing ROI. Lenskold takes a rigorous and honest view of the subject, which means he discusses the challenges as well as the advantages. I came away feeling ROMI faces two major issues: the practical one of identifying exactly which results are caused by a particular marketing investment, and the more conceptual one of how to deal with benefits that depend in part on future marketing activities.
The practical issue of linking results to investments has no simple solution: there’s no getting around the fact that life is complex. But any measure of marketing performance faces the same challenge, so I don’t see this as a flaw in ROMI itself. The only thing I would say is that ROMI may give a false illusion of precision that persists no matter how many caveats are presented along with the numbers.
How to treat future, contingent benefits is also a problem any methodology must face. Lenskold offers several options, from combining several investments into a single investment for analytical purposes, to reporting the future benefits separately from the immediate ROMI, to treating investments with long-term results (e.g. brand building) as overhead rather than marketing. Since he covers pretty much all the possibilities, one of them must be the right answer (or, more likely, different answers will be right in different situations). My own attitude is this isn’t something to agonize over: all marketing decisions (indeed, all business decisions) require assumptions about the future, so it’s not necessary to isolate future marketing programs as something to treat separately from, say, future product costs. Both will result in part from future business decisions. When I calculate lifetime value, I certainly include the results of future marketing efforts in the value stream. Were I to calculate ROMI, I’d do the same.
So here's what it comes down to. Even though I'm attracted to the idea of ROMI, I find it isn't concrete enough to replace specific marketing efficiency measures like cost per order, but is still too narrow to provide the strategic insight gained from lifetime value. (This applies unless you define ROMI to include the results of future marketing decisions, but then it's really the same as incremental LTV.)
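The equivalence claimed in that parenthetical can be shown in a few lines. This is a sketch with invented numbers, not anyone's official formula: if ROMI's "return" includes the full future value stream, it is just incremental LTV restated as a ratio on the investment.

```python
# Toy illustration: ROMI computed over the full value stream is
# incremental LTV divided by the investment. All figures are invented.

def incremental_ltv(value_with_program, value_without_program):
    """Extra lifetime value attributable to the program."""
    return value_with_program - value_without_program

def romi(incremental_value, investment):
    """Return on marketing investment as (gain - cost) / cost."""
    return (incremental_value - investment) / investment

ltv_gain = incremental_ltv(value_with_program=180_000,
                           value_without_program=120_000)  # 60,000
print(romi(ltv_gain, investment=40_000))  # 0.5, i.e. a 50% return
```

The two measures rank programs identically; the only real difference is whether you express the result as a dollar gain or a percentage.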
Now you know why ROMI never makes my list of marketing performance measures.
Tuesday, July 03, 2007
Marketing Performance: Plan, Simulate, Measure
Let’s dig a bit deeper into the relationships I mentioned yesterday among systems for marketing performance measurement, marketing planning, and marketing simulation (e.g., marketing mix models, lifetime value models). You can think of marketing performance measures as falling into three broad categories:
- measures that show how marketing investments impact business value, such as profits or stock price
- measures that show how marketing investments align with business strategy
- measures that show how efficiently marketing is doing its job (both in terms of internal operations and of cost per unit – impression, response, revenue, etc.)
We can put aside the middle category, which is really a special case related to Balanced Scorecard concepts. Measures in this category are traditional Balanced Scorecard measures of business results and performance drivers. By design, the Balanced Scorecard focuses on just a few of these measures, so it is not concerned with the details captured in the marketing planning system. (Balanced Scorecard proponents recognize the importance of such plans; they just want to manage them elsewhere). Also, as I’ve previously commented, Balanced Scorecard systems don’t attempt to precisely correlate performance drivers to results, even though they do use strategy maps to identify general causal relationships between them. So Balanced Scorecard systems also don’t need marketing simulation systems, which do attempt to define those correlations.
This leaves the high-level measures of business value and the low-level measures of efficiency. Clearly the low-level measures rely on detailed plans, since you can only measure efficiency by looking at performance of individual projects and then the project mix. (For example: measuring cost per order makes no sense unless you specify the product, channel, offer and other specifics. Only then can you determine whether results for a particular campaign were too high or too low, by comparing them with similar campaigns.)
But it turns out that even the high-level measures need to work from detailed plans. The problem here is that aggregate measures of marketing activity are too broad to correlate meaningfully with aggregate business results. Different marketing activities affect different customer segments, different business measures (revenue, margins, service costs, satisfaction, attrition), and different time periods (some have immediate effects, others are long-term investments). Past marketing investments also affect current period results. So a simple correlation of this period marketing costs vs. this period business results makes no sense. Instead, you need to look at the details of specific marketing efforts, past and present, to estimate how they each contribute to current business results. (And you need to be reasonably humble in recognizing that you’ll never really account for results precisely—which is why marketing mix models start with a base level of revenue that would occur even if you did nothing.) The logical place to capture those detailed marketing efforts is the marketing planning system.
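The base-plus-details structure of that argument can be sketched in a few lines. This is a deliberately toy response model, with invented program data and decay rates, not a real marketing mix model: current-period revenue is a base level plus contributions from specific programs, past and present, each fading at its own rate.

```python
# Toy version of the argument above: revenue in a period is a base level
# (what you'd get with no marketing at all) plus decaying contributions
# from individual programs. All parameters are invented.

def period_revenue(base, programs, current_period):
    """base: revenue expected with zero marketing.
    programs: list of (start_period, initial_lift, decay_per_period)."""
    total = base
    for start, lift, decay in programs:
        age = current_period - start
        if age >= 0:
            total += lift * (decay ** age)  # effect fades as it ages
    return total

programs = [
    (0, 50_000, 0.5),  # old campaign, still contributing a little
    (2, 20_000, 0.8),  # brand effort with a slow decay
    (3, 30_000, 0.3),  # current promotion, mostly immediate
]
print(period_revenue(base=200_000, programs=programs, current_period=3))
```

Even in this toy form, the point holds: you cannot recover the period's results from total marketing spend alone, because the same total produces different revenue depending on which programs it bought and when.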
The role of simulation systems in high-level performance reporting is to convert these detailed marketing plans into estimates of business impact from each program. The program results can then be aggregated to show the impact of marketing as a whole.
Of course, if the simulation system is really evaluating individual projects, it can also provide measures for the low-level marketing efficiency reports. In fact, having those sorts of measures is the only way the low-level system can get beyond comparing programs only against other similar programs, to allow comparisons across different program types. This is absolutely essential if marketers are going to shift resources from low- to high-yield activities and therefore make sure they are optimizing return on the marketing budget as a whole. (Concretely: if I want to compare direct mail to email, then looking at response rate won’t do. But if I add a simulation system that calculates the lifetime value acquired from investments in both, I can decide which one to choose.)
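Here is the direct mail vs. email comparison from that parenthetical worked through with invented figures. Raw response counts favor direct mail, but lifetime value acquired per dollar of spend, which is what a simulation system would supply, can favor email.

```python
# Sketch of the channel comparison above. Response counts alone would
# pick direct mail; LTV acquired per dollar picks email. All figures
# are invented for illustration.

channels = {
    # name: (spend, responses, avg lifetime value per acquired customer)
    "direct_mail": (50_000, 1_000, 40),
    "email":       (5_000,    200, 150),
}

for name, (spend, responses, ltv_per_customer) in channels.items():
    ltv_per_dollar = responses * ltv_per_customer / spend
    print(name, ltv_per_dollar)  # direct_mail 0.8, email 6.0
```

With a common yardstick like this, shifting budget from the 0.8 channel to the 6.0 channel becomes an obvious decision rather than an apples-to-oranges argument.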
So it turns out that planning and simulation systems are both necessary for both high-level and low-level marketing performance measurement. The obvious corollary is that the planning system must capture the data needed for the simulation system to work. This would include tags to identify the segments, time periods and outcomes that each program is intended to affect. Some of these will be part of the planning system already, but other items will be introduced only to make simulation work.
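One way such a tagged plan record might look, purely as an illustration (the field names and values are invented, not drawn from any actual planning system):

```python
# Hypothetical plan record carrying the tags a simulation system would
# need: target segments, affected periods, and intended outcomes.

from dataclasses import dataclass, field

@dataclass
class ProgramPlan:
    name: str
    budget: float
    target_segments: list = field(default_factory=list)
    affected_periods: list = field(default_factory=list)   # e.g. quarters
    intended_outcomes: list = field(default_factory=list)  # e.g. "retention"

plan = ProgramPlan(
    name="Spring win-back mailing",
    budget=75_000.0,
    target_segments=["lapsed_high_value"],
    affected_periods=["2007Q2", "2007Q3"],
    intended_outcomes=["reactivation", "retention"],
)
print(plan.name, plan.intended_outcomes)
```

The budget and name fields are ordinary planning data; the three tag lists are the additions that exist mainly so a downstream simulation can find the programs relevant to a given segment, period, or outcome.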
Monday, July 02, 2007
Marketing Planning and Marketing Measurement: Surprisingly Separate
As part of my continuing research into marketing performance measurement, I’ve been looking at software vendors who provide marketing planning systems. I haven’t found any products that do marketing planning by itself. Instead, the function is part of larger systems. In order of increasing scope, these fall into three groups:
Marketing resource management:
- Aprimo
- MarketingPilot
- Assetlink
- Orbis (Australian; active throughout Asia; just opened a London office)
- MarketingCentral
- Xeed (Dutch; active throughout Europe)
Enterprise marketing:
- Unica
- SAS
- Teradata
- Alterian
Enterprise management:
- SAP
- Oracle/Siebel
- Infor
Few companies would buy an enterprise marketing or enterprise management system solely for its marketing planning module. Even marketing resource management software is primarily bought for other functions (mostly content management and program management). This makes sense in that most marketing planning comes down to aggregating information about the marketing programs that reside in these larger systems.
Such aggregations include comparisons across time periods, of budgets against actuals, and of different products and regions against each other. These are great for running marketing operations but don’t address larger strategic issues such as impact of marketing on customer attitudes or company value. Illustrating this connection requires analytical input from tools such as marketing mix models or business simulations. This is provided by measurement products like Upper Quadrant, Veridiem (now owned by SAS) and MMA Avista. Presumably we’ll see closer integration between the two sets of products over time.