Friday, March 30, 2007
It turns out the world of mobile marketing systems is further developed than I thought. (Either that, or these people are really good at putting up impressive Web sites.) A bit more research turned up a number of vendors who appear to have reasonably sophisticated mobile marketing systems. Those that seem worth noting include MOVO, Flytxt, Kodime, MessageBuzz, Velti, Wire2Air, and Knotice. These are in addition to the firms I mentioned yesterday: Enpocket, Ad Infuse and Waterfall Mobile.
It’s hard to tell which of these are sold as software and which are platforms used in-house by mobile marketing agencies. I suspect most fall into the latter category. And, of course, without examining them closely you never know what’s real. But at a minimum we can say that several companies who understand what it takes to build a decent marketing system have applied their knowledge to mobile marketing.
So far, it seems most of these companies are mobile marketing specialists. Knotice stands out as an exception, claiming to integrate “email, web, mobile and emerging interactive TV platforms.”
Mobile is outside my current range of activities, so I don’t know how much more time I’ll be spending on the topic. But I’m glad I took a little peek—it’s interesting to see what’s going on out there.
Thursday, March 29, 2007
Enpocket Makes Mobile Advertising Look Mature
As you know from Monday's post, I’ve been poking around a bit at mobile marketing software. One company that turned up is Enpocket, a Boston-based firm that has developed what appears—and I’m basing this only on their Web site—to be an impressively complete system for managing mobile advertising campaigns. Its two main components are a marketing engine that sends messages in pretty much any format (text, email, Web page, video, etc.), and a personalization engine that builds self-adjusting predictive models to target those messages. The marketing engine also maintains some form of customer database—again, I haven’t contacted the company for details—that holds customer preferences and permissions, predictive model scores, and external information such as demographics, billing, and phone usage.
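To make that architecture concrete, here is a minimal sketch of the kind of logic such a system implies: a permissioned customer profile feeding a propensity model that picks the next message. Every name and number below is my own invention for illustration, not anything Enpocket has published.

```python
from dataclasses import dataclass, field

@dataclass
class CustomerProfile:
    msisdn: str                     # mobile number as the customer key
    opted_in: bool                  # permission flag from the database
    model_scores: dict = field(default_factory=dict)  # offer -> propensity

def pick_message(profile, offers):
    """Return the highest-scoring offer this customer may receive."""
    if not profile.opted_in:
        return None                 # permissions trump everything
    ranked = sorted(offers, key=lambda o: profile.model_scores.get(o, 0.0),
                    reverse=True)
    return ranked[0] if ranked else None

profile = CustomerProfile("15551234567", True,
                          {"data_plan_upsell": 0.72, "ringtone_promo": 0.31})
print(pick_message(profile, ["data_plan_upsell", "ringtone_promo"]))
# -> data_plan_upsell
```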
Enpocket describes the system in some detail in a white paper published last year. The paper is aimed primarily at convincing mobile phone operators to use the system themselves to market data services to their customers. This is just one small opportunity within the world of mobile marketing, but perhaps it’s a shrewd place to start, since the operators have full access to internal data that might not be available to others. Other materials on the Enpocket site indicate they also work with other types of advertisers. The site also says they now offer a content and community management module (not sure why those are lumped together) that is not mentioned in the white paper.
I don’t know what, if anything, is truly unique about Enpocket. For example, Ad Infuse also uses “advanced matching algorithms” to target mobile ads, and Waterfall Mobile promises interactive features (voting, polling, on-demand content delivery, etc.) that Enpocket doesn’t mention. But what impresses me about Enpocket is the maturity of their vision: rule-based and event-triggered campaigns linked to a customer database and an automated targeting engine.
It took conventional database marketers years to reach this stage. Even Web marketers are just starting to get there. Obviously Enpocket has the advantage of building on what’s already been done in other media. But there’s still a big difference between knowing what should be done and actually doing it. While I don’t know what Enpocket has actually delivered, at least they’re making the right promises.
Wednesday, March 28, 2007
Defining Process is Key to Selecting Software
Readership of this blog picks up greatly when I write about specific software products. I’d like to think that’s because I am a famous expert on the topic, but suspect it’s simply that people are always looking for information about which products to buy. Given the volume of information already available on the Internet, this seems a little surprising. But given the quality of that information, perhaps not.
Still, no matter how pleased I am to attract more readers, nothing can replace talking directly to a software vendor. And not just talking, but actually seeing and working with their software. I’ve run across this many times over the years: people buy a product for a specific purpose without really understanding how it will do what they have in mind. Then, sometimes literally within minutes of installing it, they find it isn’t what they need.
This doesn’t mean every software purchase must be preceded by a test installation. But it does mean your pre-purchase research has to be thorough enough that you understand how the software will meet your goals. Sometimes there’s enough information on the vendor’s Web site to know this; sometimes it takes a sales demonstration; sometimes you have to load a trial copy and play with it. Sometimes nothing less than a true proof of concept—complete with live data and key functionality—will do.
So how do you know when you know enough to buy? That’s the tricky part. You must define what you want the system to do—that is, your requirements—and understand what capabilities the system needs to do it. The only way I know to do this is to work through the process flow of the system: a step-by-step description of the inputs, processing and outputs needed to accomplish the desired outcome. You then identify the system capabilities needed at every stage in the process. Of course, this is harder than it sounds when systems are complicated and there are many ways to do things.
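For what it’s worth, here is one purely illustrative way to impose that discipline: write the process flow down as data, then derive the capability checklist from it. Every step name and capability below is made up for the example.

```python
# Each step records its inputs, processing, outputs, and the system
# capabilities it demands. The vendor checklist falls out of the flow.
process_flow = [
    {"step": "select audience",
     "inputs": ["customer database"],
     "processing": "segment on recency and value",
     "outputs": ["mail list"],
     "required_capabilities": ["query builder", "segmentation"]},
    {"step": "send campaign",
     "inputs": ["mail list", "creative"],
     "processing": "merge and transmit",
     "outputs": ["send log"],
     "required_capabilities": ["email delivery", "personalization"]},
    {"step": "measure response",
     "inputs": ["send log", "transactions"],
     "processing": "match responses to promotions",
     "outputs": ["response report"],
     "required_capabilities": ["response attribution", "reporting"]},
]

needed = sorted({c for step in process_flow
                   for c in step["required_capabilities"]})
print(needed)
```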
The level of detail required depends on the situation. But my point today is simply that you have to think things through and visualize how the software will accomplish your goals. If you can't yet do that, you’re not ready to make a purchase.
Tuesday, March 27, 2007
Lifetime Value is More than Another Way to Spell ROI
One of our central propositions at Client X Client is that every business decision should be measured by its impact on customer lifetime value. This is because lifetime value provides a common denominator to compare decisions that are otherwise utterly dissimilar. How else do I choose whether to invest in a new factory or improve customer service?
I was presenting this argument yesterday when I realized that you could say the same for Return on Investment. That brought me up short. Is it possible that we’re really not adding anything beyond traditional ROI analysis? Have we deluded ourselves into thinking this is something new and useful?
But remember what most ROI analyses actually look like: they isolate whatever cost and revenue elements are needed to prove a particular business case. The new factory is justified by lower product costs; better customer service is justified by higher retention rates. But each of those is just a portion of the actual business impact of the investment. If the new factory produces poor quality products, it may have a negative impact on lifetime value. If better customer service only retains less profitable customers, it may also be a poor investment.
This is the reason you need to measure lifetime value: because lifetime value inherently forces you to consider all the factors that might be impacted by a decision. As my previous posts have discussed, these can be summarized along two dimensions, with three elements each: order type (new, renewal and cross sell) and financial value (revenue, promotion cost, fulfillment cost). Those combine to form a convenient 3x3 matrix that can serve as a simple checklist for assessing any business analysis: have you considered the estimated impact of the proposed decision on each cell? There’s no guarantee your answers will be correct, but at least you’ll have asked the right questions. That alone makes lifetime value more useful than conventional ROI evaluations.
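Here is a minimal sketch of that checklist in code form, with invented numbers. The discipline is simply that every one of the nine cells gets an explicit estimate, even when the estimate is zero.

```python
# Estimated LTV impact of a proposed decision (e.g., funding a customer
# service project). Costs enter with the sign already applied, so the
# total impact is just the sum over all nine cells. All figures invented.
ORDER_TYPES = ("new", "renewal", "cross_sell")
VALUE_ELEMENTS = ("revenue", "promotion_cost", "fulfillment_cost")

impact = {
    ("new", "revenue"): 0.0,
    ("new", "promotion_cost"): 0.0,
    ("new", "fulfillment_cost"): 0.0,
    ("renewal", "revenue"): 120_000.0,            # higher retention
    ("renewal", "promotion_cost"): 0.0,
    ("renewal", "fulfillment_cost"): -30_000.0,   # service costs rise
    ("cross_sell", "revenue"): 15_000.0,
    ("cross_sell", "promotion_cost"): 0.0,
    ("cross_sell", "fulfillment_cost"): 0.0,
}

# The checklist: fail loudly if any cell went unexamined.
missing = [(o, v) for o in ORDER_TYPES for v in VALUE_ELEMENTS
           if (o, v) not in impact]
assert not missing, f"unexamined cells: {missing}"
print(f"net lifetime value impact: {sum(impact.values()):,.0f}")
```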
Monday, March 26, 2007
Why You're Going to Replace the Mobile Marketing Software You Haven't Even Bought Yet
I’ve seen several (well, two) articles recently about mobile marketing software. That’s one mention short of a trend, but I figured I’d be proactive and see what was going on out there. The general idea behind the articles was that new products are now making it easier to do serious campaign management for mobile promotions.
Somewhat to my disappointment, a quick bit of Googling showed there are many more than two products already present in this space. Most seem to be SMS bulk mailers—very much the equivalent of simple software for sending direct mail or mass emails. Of course, we all know that sort of untargeted marketing is a bad idea in any channel and pretty much unthinkable in mobile marketing, where the customer pays to receive the message. So those products aren’t worth much attention.
But there do seem to be several more sophisticated products that offer advanced capabilities. I won’t mention names because I haven’t spent enough time researching the topic to understand which ones are truly important.
Still, my general thought for the day is that it's silly to have to invent the features needed for this sort of product. Surely the marketing world has enough experience by now to understand the basic features necessary to run campaigns and manage interactions. Any list would include customer profiles, segmentation, testing, response analysis, propensity modeling, and lifetime value estimates (yes, that last one is special pleading; sorry, I’m obsessed). I could come up with more but the cat is sitting on my lap. The point is, it makes vastly more sense to extend current marketing systems into the mobile channel than to build separate mobile marketing systems that will later need to be integrated.
Marketing software vendors surely see this opportunity. But to take advantage of it, they would need to invest not merely in technology, but also in the services and expertise needed to help novice marketers enter the mobile channel. This is expensive and experts are rare. So it’s more likely the vendors will defer any major effort until standard practices are widely understood. Pity – it will mean a lot of work down the road to fix the problems now being created.
Friday, March 23, 2007
ClickFox Generates Detailed Experience Maps
I’m just finishing a review for next month's DM News of ClickFox, a product that visualizes the paths followed by customers as they navigate interactive voice response (IVR), kiosks, Web sites and other self-service systems. John Pasqualetto, ClickFox’s Director of Business Development, tells me the product is easy to sell: basically, the company loads a week’s worth of interaction logs, plots them onto a model that classifies the different types of events, and shows customer paths through that model to the prospective buyer. “The value jumps right out,” according to John. “Users say, ‘I’ve never seen my data presented this way before.’”
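Mechanically, that first step is easy to picture. Here is a toy version, with invented event names: group a log into sessions, then count the transitions between events, which are the raw material for a path map.

```python
from collections import Counter

log = [  # (session_id, event) rows in time order
    ("s1", "main_menu"), ("s1", "check_balance"), ("s1", "hang_up"),
    ("s2", "main_menu"), ("s2", "agent_request"), ("s2", "agent"),
    ("s3", "main_menu"), ("s3", "check_balance"), ("s3", "agent_request"),
]

# Rebuild each session's path, then count event-to-event transitions.
sessions = {}
for sid, event in log:
    sessions.setdefault(sid, []).append(event)

transitions = Counter(
    pair for path in sessions.values() for pair in zip(path, path[1:]))

for (src, dst), n in transitions.most_common():
    print(f"{src:15} -> {dst:15} {n}")
```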
If you think this sounds similar to the funnel reports provided by most Web analytics systems, so do I. One difference is the visual representation: it's hard to describe this in words, but if you look at the ClickFox Web site, you’ll see they organize their models around the purpose of the different Web pages or IVR options. Essentially, they build a conceptual map of the system. Web analytics systems generally take a more mechanical approach, arranging pages based on frequency of use or links to other pages. This makes it harder to grasp what’s actually going on from the customer viewpoint. (On the other hand, the Web systems can usually drill down to display the actual pages themselves, which ClickFox does not. Web systems also deal with higher data volumes than IVRs.)
The ClickFox approach is also somewhat similar to Client X Client's beloved Customer Experience Matrix, which also plots interactions by purpose. But we generally work at a higher level—that is, we’d look at an IVR session as a single transaction, rather than breaking it into its components. We also think in terms of a standard set of purpose categories, rather than defining a custom set in each situation. (Of course, custom sets make sense when you’re analyzing at the ClickFox level of detail.) So ClickFox would be complementary rather than competitive with what we do. Otherwise, I would not have been able to review them.
What’s really important in all this is that ClickFox provides another good tool for Customer Experience Management. The more of those, the better.
Thursday, March 22, 2007
Survey Highlights Interest in Marketing Performance Measurement
According to The CMO Council, “the majority of marketers feel that their top goal in 2007 is to quantify and measure the value of marketing programs and investments (43.8%)” and “respondents tapped [marketing] performance dashboards as the top automated solution to be deployed in 2007.”
This is happy news for Client X Client, since that’s the pond we swim in. The Grinch in me points out that 43.8% is not a majority and that the actual survey question asked about “issues or challenges”, not goals. And if I get really cranky, I remember that the CMO Council runs a Marketing Performance Measurement Forum and Mastering MPM Online Certificate program—so there is probably some bias in the membership and perhaps their survey technique.
But despite these caveats, it’s good to see that performance measurement ranks high in the lists of concerns and system plans. The survey, of 350 marketers (we don’t know how many responded), also covered top accomplishments in 2006 (number one: “restructured and realigned marketing”), organizational plans (top item: “add new competencies and capabilities”), progress in improving the perception of marketing within their company (a Lake Wobegon-like 67.7% are "above average"), span of marketing authority, agency relationship changes, and sources of information and advice. It’s interesting stuff and available for free (registration required).
Tuesday, March 20, 2007
Proving the Value of Site Optimization
Eric’s comment on yesterday’s post, to the effect that “There shouldn’t be much debate here. Both full and fractional designs have their place in the testing cycle”, is a useful reminder that it’s easy to get distracted by technical details and miss the larger perspective of the value provided by testing systems. This in turn raises the question posed implicitly by Friday’s post and Demi’s comment, of why so few companies have actually adopted these systems despite the proven benefits.
My personal theory is it has less to do with a reluctance to be measured than a lack of time and skills to conduct the testing itself. You can outsource the skills part: most if not all of the site testing vendors have staff to do this for you. But time is harder to come by. I suspect that most Web teams are struggling to keep up with demands for operational changes, such as accommodating new features, products and promotions. Optimization simply takes a lower priority.
(I’m tempted to add that optimization implies a relatively stable platform, whereas things are constantly changing on most sites. But plenty of areas, such as landing pages and checkout processes, are usually stable enough that optimization is possible.)
Time can be expanded by adding more staff, either in-house or outsourced. This comes down to a question of money. Measuring the financial value of optimization comes back to last Wednesday's post on the credibility of marketing metrics.
Most optimization tests seem to focus on simple goals such as conversion rates, which have the advantage of being easy to measure but don’t capture the full value of an improvement. As I’ve argued many times in this blog, that value is properly defined as change in lifetime value. Calculating this is difficult and convincing others to accept the result is harder still. Marketing analysts therefore shy away from the problem unless pushed to engage it by senior management. The senior managers themselves will not be willing to invest the necessary resources unless they believe there is some benefit.
This is a chicken-and-egg problem, since the benefit from lifetime value analysis comes from shifting resources into more productive investments, but the only way to demonstrate this is possible is to do the lifetime value calculations in the first place. The obstacle is not insurmountable, however. One-off projects can illustrate the scope of the opportunity without investing in a permanent, all-encompassing LTV system. The series of “One Big Button” posts culminating last Monday described some approaches to this sort of analysis.
Which brings us back to Web site testing. Short term value measures will at best understate the benefits of an optimization project, and at worst lead to changes that destroy rather than increase long term value. So it makes considerable sense for a site testing trial project to include a pilot LTV estimate. It’s almost certain that the estimated value of the test benefit will be higher when based on LTV than when based on immediate results alone. This higher value can then justify expanded resources for both site testing and LTV.
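To see why, value the same test lift both ways. The figures in this sketch are invented, but the pattern is typical: the LTV-based number dwarfs the immediate one.

```python
# Value a test variant's conversion lift two ways. All figures invented.
visitors = 10_000
base_rate, test_rate = 0.020, 0.024        # conversion rates
order_margin = 40.0                        # immediate profit per order
ltv_margin = 220.0                         # lifetime profit per new customer

extra_orders = visitors * (test_rate - base_rate)

immediate_value = extra_orders * order_margin
ltv_value = extra_orders * ltv_margin
print(f"immediate-results valuation: ${immediate_value:,.0f}")
print(f"lifetime-value valuation:    ${ltv_value:,.0f}")
# The LTV figure is several times larger, which is exactly what makes
# it easier to justify funding both the testing program and LTV work.
```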
And you thought last week’s posts were disconnected.
Monday, March 19, 2007
Is Taguchi Good for Multivariate Testing?
I’ve spent a lot of time recently talking to vendors of Web site testing systems. One topic that keeps coming up is whether Taguchi testing—which tests selected combinations of variables and infers the results for untested combinations—is a useful technique for this application. Some vendors use it heavily; some make it available but don’t recommend it; others reject it altogether.
Vendors in the non-Taguchi camp tell me they’ve done tests comparing Taguchi and “full factorial” tests (which test all possible combinations), and gotten different results. Since the main claim of Taguchi is that it finds the optimum combination, this is powerful practical evidence against it. On the theoretical level, the criticism is that Taguchi assumes that there are no interactions among test variables, meaning results for each variable are not affected by the values of other variables, when such interactions are in fact common. Moreover, how would you know whether interactions existed if you didn’t test for them? (Taguchi tests are generally too small to find interactions.)
Taguchi proponents might argue that careful test design can avoid interactions. But the more common justification seems to be that Taguchi makes it possible to test many more alternatives than conventional A/B tests (which change just one item at a time) or full-factorial designs (which need a lot of traffic to get adequate volume for each combination.)
So the real question is not whether Taguchi ignores interactions (it does), but whether Taguchi leads to better results more quickly. This is possible even if those results are not optimal, because Taguchi lets users test a wider variety of options with a given amount of traffic. I’m guessing Taguchi does help, at least for sites without huge visitor volumes.
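The traffic arithmetic behind that trade-off is easy to sketch. The visitor numbers below are invented; the design sizes are standard (a Taguchi L9 orthogonal array covers four three-level factors in nine cells, versus 3^4 = 81 cells for the full factorial).

```python
from itertools import product

factors, levels = 4, 3
full_cells = len(list(product(range(levels), repeat=factors)))  # 3**4 = 81
l9_cells = 9                      # Taguchi L9 orthogonal array

visitors_per_cell = 2_000         # assumed traffic for a readable result
print(f"full factorial: {full_cells} cells -> "
      f"{full_cells * visitors_per_cell:,} visitors")
print(f"Taguchi L9:     {l9_cells} cells -> "
      f"{l9_cells * visitors_per_cell:,} visitors")
```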
Incidentally, I tried to do a quick classification of which vendors favor Taguchi. But it’s not so simple, because even vendors who prefer other methods still offer Taguchi as an option. And some alternative methods can be seen more as refinements of Taguchi than total rejections of it. So I think I’ll avoid naming names just now, and let the vendors speak for themselves. (Vendors to check: Offermatica, Optimost, Memetrics, SiteSpect, Vertster.)
Friday, March 16, 2007
The Market for Web Testing Systems is Young
I’ve received online newsletters in the past week from three Web optimization vendors: [x+1], Memetrics and Optimost. All are interesting. The article that particularly caught my attention was a JupiterResearch report available from [x+1], which contained results from a December 2005 survey of 251 Web site decision makers in companies with $50 million or more in annual revenues.
What was startling about the report was its finding that 32% of the respondents said they had deployed a “testing/optimization” application and another 28% planned to do so within the next twelve months. Since a year has now passed, the current total should be around 60%.
With something like 400,000 U.S. companies with $50 million or more in revenue, this would imply around 200,000 installations. Yet my conversations with Web testing vendors show they collectively have maybe 500 installations—certainly fewer than 1,000.
To put it mildly, that's a big difference.
There may be some definitional issues here. It's extremely unlikely that JupiterResearch accidentally found 80 (32% of 251) of the 500 companies using the testing systems (especially since the total was probably under 500 at the time of the survey). So, presumably, some people who said they had testing systems were using products not on the standard list. (These would be Optimost, Offermatica, Memetrics, SiteSpect and Vertster. JupiterResearch adds what I label as “behavioral targeting” vendors: [x+1] and Touch Clarity (now part of Omniture), e-commerce platform vendor ATG, and testing/targeting hybrid Kefta.) Maybe some other respondents weren’t using anything and chose not to admit it.
But I suspect the main factor is sample bias. JupiterResearch doesn’t say where the original survey list came from, but it was probably weighted toward advanced Web site users. As in any group, the people most involved in the topic are most likely to have responded, further skewing the results.
Sample bias is a well-known issue among researchers. Major public opinion polls use elaborate adjustments to compensate for it. I don’t mean to criticize JupiterResearch for not doing something similar: they never claim their sample was representative or that the numbers can be projected across all $50 million+ companies.
Still, the report certainly gives the impression that a large fraction of potential users have already adopted testing/optimization systems. Given what we know about the vendor installation totals, this is false. And it’s an error with consequence: vendors and investors act differently in mature vs. immature markets. Working from the wrong assumptions will lead them to costly mistakes.
Damage to customers is harder to identify. If anything, a fear of being the last company without this technology may prompt them to move more quickly. This would presumably be a benefit. But the hype may also lead them to believe that offerings and vendors are more mature than in reality. This could lead them to give less scrutiny to individual vendors than they would if they knew the market is young. And maybe it’s just me, but I believe as a general principle that people do better to base their decisions on accurate information.
I don’t think the situation here is unique. Surveys like these often give penetration numbers that seem unrealistically high to me. The reasons are probably the same as the ones I’ve listed above. It’s important for information consumers to recognize that while such surveys give valuable insights into how users are behaving, they do have their limits.
Thursday, March 15, 2007
Is SiteSpect Really Better? How Would You Know?
Tuesday’s post and subsequent discussion of whether SiteSpect’s no-tag approach to Web site testing is significantly easier than inserting Javascript tags has been interesting but, for me at least, inconclusive. I understand that inserting tags into a production page requires the same testing as any other change, and that SiteSpect avoids this. But the tags are only inserted once, either per slot on a given page or for the page as a whole. After this, any number of tests can be set up and run on that page without additional changes. And given the simplicity of the tags themselves, they are unlikely to cause problems that take a lot of work to fix.
Of course, no work is easier than a little work, so avoiding tags does have some benefit. But most of the labor will still be in setting up the tests themselves. So the efficiency of the setup procedure will have much more impact on the total effort required to run a Web testing system than whether or not it uses tags. I’ve now seen demonstrations of all the major systems—Offermatica, Memetrics, Kefta, Optimost and SiteSpect—and written reviews of the first three (posted in my article archive). But even that doesn’t give me enough information to say one is easier to work with than another.
This is a fundamental issue with any kind of software assessment. You can talk to vendors, look at demonstrations, compare function lists, and read as many reviews as you like, but none of that shows what it’s like to use a product for your particular projects. Certainly with the Web testing systems, the different ways that clients configure their Web sites will have a major impact on whether a particular product is hard or easy to use. Deployment effort will also depend on what other systems are part of the site, as well as the nature of the desired tests themselves.
This line of reasoning leads mostly towards insisting that users should run their own tests before buying anything. That’s certainly sound advice: nobody ever regretted testing a product too thoroughly. But testing only works if you understand what you’re doing. Buyers who have never worked with a particular type of system often won’t know enough to run a meaningful test. So simply to proclaim that testing is always the solution isn’t correct.
This is where vendors can help. The more realistic a simulation they can provide of using their product, the more intelligently customers can judge whether the product will work for them. The reality is that most customers’ needs can be met by more than one product. Even though customers rightly want to find the best solution, all they really need is to find one that’s adequate and get on with their business. The first vendor to prove they can do the job, wins.
Products that claim a unique and substantial advantage over competitors, like SiteSpect, face a tougher challenge. Basically, no one believes it when vendors say their product is better, simply because all vendors say that. So vendors making radical claims must work hard to prove their case through explanations, benchmarks, case studies, worksheets, and whatever else it might take to show that the differences (a) really exist and (b) really matter. In theory, head-to-head comparisons against other vendors are the best way to do this, but the obvious bias of vendor-sponsored comparisons (not to mention potential for lawsuits) makes this extremely difficult. The best such vendors can do is to state their claims clearly and with as much justification as possible, and hope they can convince potential buyers to take a closer look.
Wednesday, March 14, 2007
Just as You Always Suspected: Nobody Believes Marketing Effectiveness Measures
I consider it a point of honor not to simply reproduce a vendor’s press release. So when Marketing Management Analytics sent one headed “Most Financial Executives Critical of Marketing Effectiveness Measures: Only 7% Say They are Satisfied with their Company's Ability to Measure Marketing ROI”, I asked to see the details. In this case, it turned out that not only did the press release represent the study accurately, but it also picked up on the same two points that I found most intriguing:
- “only 7% of senior-level financial executives surveyed report being satisfied with their company's ability to measure marketing ROI”, compared with 23% of marketers in a similar earlier survey; and,
- “only one in 10 senior-level financial executives report confidence in marketing's ability to forecast its impact on sales” compared with one in four marketers.
And that, my friends, is the problem in a nutshell: financial managers have almost no confidence in marketing measurements, and marketers don't even realize how bad things are.
With numbers like these, is it any wonder that advanced concepts like customer experience management attract so little executive support? Nobody is willing to take a risk on them because nobody believes the supporting analysis. Note that three-quarters of the marketers themselves are not confident in their measurements.
In fact, the one other really interesting tidbit in the financial executive detail was that “customer value measurements” ranked a surprisingly high number three (34.6%) in the list of marketing effectiveness metrics. Only “effectiveness of marketing driving sales” (52.2%) and “brand equity and awareness” (44.1%) were more common. “Return on marketing investments” (25.7%) and “contribution” (22.8%) ranked lower.
It makes sense to me that “driving sales” would be the most common measure; after all, it is easy to understand and relatively simple to measure. But impact on brand equity and customer value are much more complicated. I do find it odd that they are almost as popular. I’m also trying to reconcile this set of answers with the fact that so few respondents had any confidence in any type of measurement: what exactly does it mean to rely on a measure that you don’t trust?
All in all, though, this survey represents an urgent warning to marketers that they must work much harder to build credible measures for the value of their activities.
(Both surveys were funded by MMA, which provides marketing mix models and other marketing performance analytics. The financial executive survey was conducted with Financial Executives International while the marketer survey was made with the Association of National Advertisers.)
Tuesday, March 13, 2007
SiteSpect Does Web Tests without Tags
I had a long and interesting talk yesterday with Larry Epstein at SiteSpect, a vendor of Web site multivariate testing and targeting software. SiteSpect’s primary claim to fame is that they manage such tests without inserting any page tags, unlike pretty much all other vendors in this space. Their trick, as I understand it, is a proxy server that sits between site visitors and a client’s Web server, inserting test changes and capturing results on the fly. Users control changes by defining conditions, such as words or values to replace in specified pages, which the system checks for as traffic streams by.
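To make the idea concrete, here is a bare-bones sketch of a content-rewriting proxy. This is emphatically not SiteSpect's implementation, just the general concept; the origin address, the rewrite rule, and the omission of visitor bucketing and results logging are all simplifications of mine.

```python
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

ORIGIN = "http://localhost:8000"   # the real site (hypothetical address)
RULES = [(re.compile(rb"Buy Now"), b"Get Yours Today")]  # one test variant

class RewritingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the page from the origin server...
        with urllib.request.urlopen(ORIGIN + self.path) as upstream:
            body = upstream.read()
        # ...and apply each test rule on the way back to the visitor.
        # A real system would also assign visitors to test cells and
        # record impressions and conversions.
        for pattern, replacement in RULES:
            body = pattern.sub(replacement, body)
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), RewritingProxy).serve_forever()
```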
Even though defining complex changes can take a fair amount of technical expertise, users with appropriate skills can make it happen without modifying the underlying pages. This frees marketers from reliance on the technical team that manages the site. It also frees the process from Javascript (which is inside most page tags), which doesn’t always execute correctly and adds some time to page processing.
This is an intriguing approach, but I haven’t decided what I think of it. Tagging individual pages or even specific regions within each page is clearly work, but it’s by far the most widely used approach. This might mean that most users find it acceptable or it might be the reason relatively few people use such systems. (Or both.) There is also an argument that requiring tags on every page means you get incomplete results when someone occasionally leaves one out by mistake. But I think this applies more to site analytics than testing. With testing, the number of tags is limited and they should be inserted with surgical precision. Therefore, inadvertent error should not be an issue and the technical people should simply do the insertions as part of their job.
I’m kidding, of course. If there’s one thing I’ve learned from years of working with marketing systems, it’s that marketers never want to rely on technical people for anything—and the technical people heartily agree that marketers should do as much as possible for themselves. There are very sound, practical reasons for this that boil down to the time and effort required to accurately transfer requests from marketers to technologists. If the marketers can do the work themselves, these very substantial costs can be avoided.
This holds true even when significant technical skills are still required. Setting up complex marketing campaigns, for example, can be almost as much work in campaign management software as when programmers had to do it. Most companies with such software therefore end up with experts in their marketing departments to do the setup. The difference between programmers and these campaign management super users isn’t really so much their level of technical skill, as it is that the super users are part of the marketing department. This makes them both more familiar with marketers’ needs and more responsive to their requests.
Framing the issue this way puts SiteSpect’s case in a different light. Does SiteSpect really give marketers more control over testing and segmentation than other products? Compared with products where vendor professional services staff sets up the tests, the answer is yes. (Although relying on vendor staff may be more like relying on an internal super user than a corporate IT department.) But most of the testing products do provide marketing users with substantial capabilities once the initial tagging is complete. So I’d say the practical advantage for SiteSpect is relatively small.
But I’ll give the last word to SiteSpect. Larry told me they have picked up large new clients specifically because those companies did find working with tag-based testing systems too cumbersome. So perhaps there are advantages I haven’t seen, or perhaps there are particular situations where SiteSpect’s no-tag approach has special advantages.
Time, and marketing skills, will tell.
Monday, March 12, 2007
One Big Button is Built
I did go ahead and implement the “One Big Button” opportunity analysis in my sample LTV system (see last week's posts for details). As expected, it took about a day’s work, mostly checking that the calculations were correct. That still left the challenge of finding report layouts that lead users through the results. There is no one right way to do that, of course. QlikTech makes it easy to experiment with alternatives, which is a mixed blessing since it’s perhaps too much fun to play with.
My final (?) version shows a one-line summary plus details for three types of changes (acquisition, renewal/retention, and cross sell), each split into recommendations for increased vs. decreased investment. Users can drill down to see details on the individual products and sources. That should tell them pretty much what they need to know.
I was eager to see the results of the calculations—remember, I’m working with live data—and was pleased to see they were reasonable: the system proposed changes to fewer than half the total products and estimated a 10% increase in value. Claims of huge potential improvement would have been less credible.
That left just one question: what should be on the One Big Button itself? The color choice was easy—a nice monetary green. But “Make more money!” seems a bit crass, while “Recommendations” sounds so bland. Since the button label can be a formula, I ended up calculating the estimated value of the opportunities and displaying “How can I add $2,620,707 more profit?” If that doesn’t get their attention, I don’t know what will.
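For what it’s worth, the label logic is simple enough to sketch outside of QlikTech. Here is a minimal Python rendering; the opportunity records are invented purely so the arithmetic comes out to the number above.

```python
# Hypothetical sketch of the button-label formula. The products and values
# are invented for illustration; the real system computes them in QlikTech.
opportunities = [
    {"product": "Product A", "type": "acquisition", "est_value": 1_200_000},
    {"product": "Product B", "type": "renewal", "est_value": 820_707},
    {"product": "Product C", "type": "cross sell", "est_value": 600_000},
]

# Sum the estimated value of all flagged opportunities and build the label.
total = sum(o["est_value"] for o in opportunities)
label = f"How can I add ${total:,.0f} more profit?"
print(label)  # How can I add $2,620,707 more profit?
```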
Friday, March 09, 2007
Convincing Managers to Care about Customer Value Measures
I spoke earlier this week at the DAMA International Symposium and Wilshire Meta-Data Conference, which serves a primarily technical audience of data modelers and architects. My own talk was about applications for customer value metrics, which boiled down to lifetime value applications and building them with the Customer Experience Matrix. (In fact, preparation for this talk is what inspired my earlier series of posts on that topic.)
One of the questions that came up was how to convince business managers that this sort of framework is needed. I’m not sure I gave a particularly coherent answer at the time, but this is in fact something that Client X Client has given a fair amount of thought. The correct (if clichéd) response is that different managers have different needs, so you have to address each person appropriately.
CEOs, COOs and other top managers are looking at company-wide issues. Benefits that matter to them include:
- understanding how customers are being treated across different parts of the organization. Of course, this “customer eye view” is the central proposition of both the Customer Experience Matrix and customer experience management in general. But in practice it’s still very hard to come by, and good CEOs and COOs recognize how desperately they need it.
- gaining metrics for customer experience management. I’ve made this point many times in this blog but I’ll say it again: the only way to focus an organization on customer experience is to measure the financial impact of that experience. Top managers understand this intuitively. If they really believe customer experience is important, they’ll eagerly adopt a solution that provides such measures.
- identifying opportunities for improvement. Measuring results is essential, but managers want even more to know where they can do better. This comes back to the One Big Button I’ve been writing about all week. The Customer Experience Matrix and other customer value approaches offer specific techniques to surface experience improvement opportunities and estimate their value.
- optimizing resource allocation. Choosing where to direct limited resources is arguably the central job of senior management. Impact on customer value is the one criterion that can meaningfully compare investments throughout the company. It offers senior managers both a tool for their own use and a communication mechanism to get others in the company thinking the same way.
Chief Financial Officers share the CEO’s company-wide perspective but look at things from a financial viewpoint. For them, customer value approaches offer:
- new business insights from new metrics. Although the CFO’s job is to understand what’s happening in the business from financial data, the information from traditional financial systems is really quite limited. Customer value measures organize information in ways that reveal patterns and trends in customer behavior which traditional measures do not.
- better forecasting. Forecasts based on individual customers or customer segments can be significantly more accurate than simple projections based on aggregate trends or percentage changes. Forecast quality has always been important but it’s even more of a hot button because of Sarbanes-Oxley and other corporate governance requirements.
- cross-function Return on Investment measures. CFOs are ultimately responsible for ensuring that ROI estimates are accurate. Customer value metrics help them to identify the impact of investments across departments and over time. These effects are often hidden from departmental managers who would otherwise prepare estimates based only on the impact within their own area.
Marketing departments gain substantial operational benefits from customer value measurements and the Customer Experience Matrix. These include:
- better ways to visualize, control and coordinate customer treatments. Different departments and systems execute treatments in different channels and stages of the product life cycle. Bringing information about these together in one place is a major challenge that the Customer Experience Matrix in particular helps to meet. Applications range from setting general experience strategies to managing interactions with individual customers.
- monitoring customer behavior for trends and opportunities. A rich set of customer value measures will highlight important changes as quickly as they occur. On a strategic level, the Customer Experience Matrix identifies the value (actual and potential) of every step in the purchase cycle to ensure companies get the greatest possible return from every customer-facing event.
- measuring return on marketing investments. Customer value measurements give marketers the tools they need to prove the value of their expenditures. This improves the productivity of their spending while ensuring they can justify their budgets to the rest of the company.
Customer Service managers, like marketers, deal directly with customers and need tools to measure the effectiveness of their efforts. Benefits for them include:
- visualization of standard contact policies and of individual contact histories. The Customer Experience Matrix provides tools to track the flow of customers through product purchase and use stages, to see the specific treatments they receive, and to display individual event histories to an agent or self-service system as an interaction occurs. All this helps managers to understand and improve how customers are treated.
- identifying best treatment rules based on long-term results. Customer value measurements can show the impact of each treatment on long-term value. Without them, managers are often stuck looking only at immediate results or have no result information at all. Having a good measurement system in place makes it easy for managers to continually test, evaluate and refine alternative treatments.
- recommending treatments during interactions. The optimal business rules discovered by customer value analysis can be deployed to operational systems for execution. A strong customer value framework will support on-the-fly calculations that can adjust treatment recommendations based on information gathered during the interaction itself.
If there’s a common theme to all this, it’s that customer value measurement gives managers at all levels a new and powerful tool to quantify the impact of business decisions on long-term value. Let me try that again: in plain English, it helps them make more money. If that’s not a compelling benefit, I don’t know what is.
Thursday, March 08, 2007
Building the One Big Button (Using LTV to Find Business Opportunities) – Part 4
So far the posts in this series have described how to identify and rank opportunities for business improvements in acquisition, renewal and cross sell orders. The obvious next step is to combine these into a single list. That’s easy enough for acquisition and cross sell orders, since both are being ranked by return on (marketing) investment. But renewal orders were being ranked with a different measure, change in Lifetime Value per customer. So there’s no direct way to mix the two.
I suppose we could translate the acquisition and cross sell opportunities into change in LTV per customer and then rank all three on that. But return on investment is important. It addresses the business reality that there are limited resources and you want to employ them in the most productive way possible. LTV, by contrast, is ultimately a net present value measure, which implicitly considers the cost of capital but doesn’t really address the fact that the total quantity of capital is limited. So far as I recollect from my long-ago finance classes, there is no truly satisfactory reconciliation between the two perspectives. You simply have to consider both when making business decisions.
Another possibility is to treat the change in margin costs for renewal orders as an “investment” and calculate a return on investment for renewals in that way. But I don’t think that makes sense in terms of financial theory—variable costs aren’t drawn from a limited pool in the same way as investments. And, as a practical matter, the disparity between return on marketing vs. return on all costs would give ratios that were not directly comparable.
We could also build a combined list of opportunities ranked by change in total LTV (as opposed to LTV per customer). This would bring the largest opportunities to the top of the list. It makes sense on the theory that management can only consider so many changes at once, so it should focus on the opportunities with the greatest potential impact. But it does bother me that this approach would not yield optimal results in terms of return on investment. Also, from a practical standpoint, each type of opportunity (acquisition, renewal, cross sell) would probably be explored by a different group (marketers for acquisitions and possibly cross sell; operations managers for renewal margin), so it might make just as much sense to keep the lists separate for that purpose.
In reality, the LTV system could easily produce several sets of rankings, so this isn’t something to agonize over.
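To make “several sets of rankings” concrete, here is a hypothetical Python sketch that ranks one opportunity list two different ways. The field names and figures are invented, not what my QlikTech system actually uses.

```python
# Hypothetical opportunity records. A renewal change carries no ROI here
# because its margin change is a variable cost rather than an investment.
opportunities = [
    {"product": "A", "type": "acquisition", "roi": 3.2, "delta_ltv_total": 400_000},
    {"product": "B", "type": "renewal", "roi": None, "delta_ltv_total": 950_000},
    {"product": "C", "type": "cross sell", "roi": 1.8, "delta_ltv_total": 150_000},
]

# Ranking 1: by return on marketing investment (acquisition and cross sell only).
by_roi = sorted((o for o in opportunities if o["roi"] is not None),
                key=lambda o: o["roi"], reverse=True)

# Ranking 2: by change in total LTV, mixing all three opportunity types.
by_delta_ltv = sorted(opportunities, key=lambda o: o["delta_ltv_total"], reverse=True)

print([o["product"] for o in by_roi])        # ['A', 'C']
print([o["product"] for o in by_delta_ltv])  # ['B', 'A', 'C']
```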
Whether the lists are combined or stay separate, the same product will appear more than once if it is flagged in more than one category. It is possible to combine the estimated opportunities for a given product, since each type of opportunity impacts different components of the LTV formula: acquisition affects number of new customers and acquisition costs; renewals affect renewal margin and years per customer; and cross sell affects cross sell revenue per year. A single estimate based on the combined changes would certainly be of interest to the product managers, although chaining together the different assumptions—each of which is very tentative—may give them more weight than they deserve.
This brings us to the issue of implementation. Acting on opportunities to reallocate marketing budgets is fairly straightforward: marketers look at the products and advertising sources in question and decide how to best increase or reduce spending. Renewal opportunities are a different story. Knowing that the margin on a given product is unusually high or low doesn’t begin to indicate where the additional dollars should be spent or removed. It could be in service, manufacturing cost, or even pricing. All this approach can do is point to areas that look like they have an above-average chance for improvement. The detailed assessment must take place outside of the LTV system.
I think this is about as far as I can take this topic for now. Certainly the One Big Button seems like a practical idea that managers would find useful. As you can imagine, I’m eager to try adding it to my sample LTV system. Things are a bit busy right now but it shouldn’t be more than one or two days’ work. I’ll let you know if and when I get it working.
Wednesday, March 07, 2007
Building the One Big Button (Using LTV to Find Business Opportunities) – Part 3
My last two posts started a discussion of using generic Lifetime Value figures to identify business opportunities. Yesterday’s post described the calculations for acquisition orders. Today’s will finish up with renewal and cross sell orders. The general approach is similar.
- for renewal orders, the key variable is renewal margin (which I am defining as renewal revenue minus renewal marketing and product costs, divided by renewal revenue). This is calculated for each product, for the company as a whole, and perhaps at relevant intermediate levels such as divisions. (If the company can offer different treatments to customers from different sources, the analysis could be done down to the product/source level too.) The assumption here is that products with above-average margins might be under-spending on experience quality, while products with below-average margins might be spending too much. (Even as I write this, I’m less than thrilled with this approach. Margin has much to do with the nature of a particular product, so comparing it across products is questionable. But this method does have the advantage of focusing attention on the high-margin opportunities. And even if the approximate margin of a product is due largely to its nature, small increases or decreases vs. current spending can probably still be correlated with improvements or reductions in experience value. That said, if anybody has better suggestions, I’m eager to hear them.)
The next step is to estimate the impact of changes from the current practice. I’m not aware of any rule, similar to the Square Root Rule for advertising, that generally estimates the impact of product and service spend on retention. The best we can do is assume (rather optimistically) that spending has been optimized. This implies taking the current margin as the base and assuming any increase or decrease has diminishing returns—say, that each dollar change in product and service costs yields a fifty cent change in revenues. This differs, I think correctly, from the advertising spend assumption that incremental increases are always less efficient. Cutting service costs may well drive away so many customers that it actually reduces margin. An alternative would be to take the company-wide average margin as the base and assume that the further any product’s margin deviates from the average, the greater the impact of a change back toward the average. But this goes back to the issue raised previously: different products have different natural margins so the company-wide average probably isn’t very relevant.
Whatever method is used, the process is the same: estimate the impact of a standard margin increase or decrease on renewal revenue; translate this into a change in years per customer, and then calculate the related change in LTV. This calculation would include all back-end values (renewal plus cross sell), since changing the length of the customer lifetime would also change how long the customer is available for cross sales. Since renewal costs are largely variable, it doesn’t make sense to calculate a “return on investment” on the margin change, as we did with acquisition costs. Instead, the opportunities would be ranked by change in back-end LTV per customer. To ensure cross sell values are considered, the LTV calculation for this ranking includes them along with renewal values.
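For concreteness, here is that process as a minimal Python sketch. It takes the fifty-cents-of-revenue-per-cost-dollar assumption literally, and every name and number is hypothetical.

```python
# A minimal sketch of the renewal pipeline described above. All inputs are
# invented; a real system would pull them from the LTV database.
def renewal_opportunity(renewal_revenue, renewal_costs, backend_ltv_per_customer,
                        cost_change_pct=0.10, revenue_per_cost_dollar=0.50):
    """Estimate the change in back-end LTV per customer for a standard spend change."""
    cost_change = renewal_costs * cost_change_pct
    revenue_change = cost_change * revenue_per_cost_dollar
    # Treat the proportional revenue change as a proportional change in years
    # per customer, then apply it to back-end LTV (renewal plus cross sell).
    retention_lift = revenue_change / renewal_revenue
    return backend_ltv_per_customer * retention_lift

# Example: a 10% increase in renewal spend on a product with $5M renewal
# revenue, $3M renewal costs, and $180 of back-end LTV per customer.
print(renewal_opportunity(5_000_000, 3_000_000, 180.0))  # ~$5.40 per customer
```

Opportunities would then be ranked on this per-customer figure, exactly as the paragraph above describes.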
- for cross sell orders, we’re back to looking at marketing expenditures. As with acquisition orders, we’d calculate an ROI ratio as net value (cross sell revenue minus marketing and product costs) divided by cross sell marketing cost. Then we compare products on this ratio against the company-wide average and use the Square Root Rule to estimate the impact of a 20% increase or decrease. Finally, calculate the estimated change in ROI and rank the opportunities accordingly.
Simple, eh?
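If it helps, here is the cross sell arithmetic as a rough Python sketch. It applies the Square Root Rule (explained in yesterday’s post) to a 20% spend change; every input is invented, and variable product costs are assumed to scale with revenue.

```python
import math

# Rough sketch of the cross sell ROI estimate for one product. Hypothetical
# inputs; a real system would run this for every product in the database.
def cross_sell_roi_change(revenue, marketing_cost, product_cost, spend_change_pct=0.20):
    base_roi = (revenue - marketing_cost - product_cost) / marketing_cost
    new_spend = marketing_cost * (1 + spend_change_pct)
    # Square Root Rule: revenue scales with the square root of spend.
    new_revenue = revenue * math.sqrt(new_spend / marketing_cost)
    new_product_cost = product_cost * (new_revenue / revenue)  # variable costs track revenue
    new_roi = (new_revenue - new_spend - new_product_cost) / new_spend
    return base_roi, new_roi

print(cross_sell_roi_change(1_000_000, 200_000, 400_000))  # (2.0, ~1.74)
```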
Tuesday, March 06, 2007
Building the One Big Button (Using LTV to Find Business Opportunities) – Part 2
Yesterday’s post described key leverage points within the three major Lifetime Value components of original, renewal and cross sell orders. It further showed how each is related to total LTV. We want to use these to build a prioritized list of business opportunities—putting something behind the One Big Button in the LTV system.
Building this list takes three steps: finding where improvement seems possible; estimating the amount of potential improvement; and prioritizing the results. We’ll discuss these for each of the three LTV components.
- for original orders, the key leverage point is acquisition cost, and in particular, acquisition cost by source. The specific measure to use is the ratio of LTV to acquisition cost, which is essentially acquisition return on investment. (People who really worry about these things, like James Lenskold in his book Marketing ROI, might argue that revenues and costs of future discretionary decisions should be excluded. But we need those values so we don’t throw away an expensive source that brings in customers with high back-end value, or add cheap sources that attract low-value customers.)
To identify potential improvements, we’ll calculate Acquisition ROI for each source/product combination, for each product, and for the business as a whole. For source/product combinations that are performing below average, we’ll estimate the impact of reducing the marketing investment, thereby presumably improving results (because we can drop the least effective promotions within that source) and freeing marketing funds to spend more productively elsewhere. For combinations that are performing above average, we’ll estimate the impact of spending more.
In an ideal world, we’d have expert managers carefully estimate the individual results of changing the investment level for each source. Here on planet Earth, the resources to do that are probably not available. One approach is to use a shortcut such as the “Square Root Rule”, which assumes that revenue increases as the square root of advertising expenditure. (See excellent blog posts by Kevin Hillstrom and Alan Rimm-Kaufman explaining this in detail). To further simplify matters and keep results somewhat realistic, I would arbitrarily calculate the impact of a 20% increase or decrease in acquisition expense—even though the optimal change, based on the Square Root Rule, might be quite different.
We still need to translate the expected change in acquisition volume to total LTV. If we assume the acquisition revenue per customer stays the same, we can use the estimated acquisition revenue to calculate the change in the number of new customers. We can multiply that by the back-end LTV (renewal plus cross sell) per person to find the change in back-end LTV added. Combine this with the new figures for acquisition value and you have the total change in LTV. Finally, divide the change in total LTV by the change in acquisition spend to give the estimated Acquisition ROI for the change. The analysis would drop any opportunities where the change in ROI was below average.
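For readers who want the chain spelled out, here is a minimal Python sketch of the whole calculation for one source/product combination. The function and its inputs are invented for illustration; the real analysis would run across every combination.

```python
import math

def acquisition_change_roi(acq_spend, acq_revenue, revenue_per_customer,
                           backend_ltv_per_customer, spend_change_pct=0.20):
    """Estimated ROI of a spend change for one source/product combination.
    Pass spend_change_pct=-0.20 to model a 20% spending cut instead."""
    new_spend = acq_spend * (1 + spend_change_pct)
    # Square Root Rule: acquisition revenue scales with the square root of spend.
    new_revenue = acq_revenue * math.sqrt(new_spend / acq_spend)
    # Constant revenue per customer turns the revenue change into a customer count.
    delta_customers = (new_revenue - acq_revenue) / revenue_per_customer
    delta_backend_ltv = delta_customers * backend_ltv_per_customer
    delta_total_ltv = (new_revenue - acq_revenue) + delta_backend_ltv
    return delta_total_ltv / (new_spend - acq_spend)

# Example: a 20% spend increase on a source spending $100K to bring in $150K
# of first orders at $50 each, with $180 of back-end LTV per customer.
print(acquisition_change_roi(100_000, 150_000, 50.0, 180.0))  # ~3.3
```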
Finally, we have to rank the changes. This can also be done using the change in Acquisition ROI. Since some opportunities involve a spending increase and others a spending decrease, total spending may go up or down. We might impose a further constraint that limits the recommended changes to opportunities that yield no net increase in acquisition spending, or a maximum increase of, say, 10%. This is another good candidate for a slider on the user interface. The analysis could be conducted at the level of the company as a whole and also for individual products. In companies with intermediate structures such as a division or product group, the analysis could be done at those levels too.
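The spending constraint can be sketched just as simply: sort the changes by estimated ROI and accept them greedily while the net spend change stays under the cap. This is a rough heuristic (a real system might revisit skipped opportunities or solve a small optimization problem), and the data below are invented.

```python
def select_opportunities(opportunities, max_net_increase=0.0):
    """Greedy selection by ROI with a cap on net acquisition spend change.
    Each opportunity needs 'roi_change' and 'spend_change' (negative = cut)."""
    chosen, net_change = [], 0.0
    for opp in sorted(opportunities, key=lambda o: o["roi_change"], reverse=True):
        if net_change + opp["spend_change"] <= max_net_increase:
            chosen.append(opp)
            net_change += opp["spend_change"]
    return chosen

picks = select_opportunities([
    {"source": "catalog", "roi_change": 3.1, "spend_change": 40_000},
    {"source": "email", "roi_change": 2.4, "spend_change": -25_000},
    {"source": "display", "roi_change": 0.8, "spend_change": 30_000},
], max_net_increase=20_000)
print([p["source"] for p in picks])  # ['email', 'display']
```

The cap itself (zero, 10%, or anything else) is exactly the sort of value a user interface slider would set.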
Well, that’s enough fun for one day. Tomorrow we’ll look at treatment of renewal and cross sell opportunities.
Monday, March 05, 2007
Building the One Big Button (Using LTV to Find Business Opportunities)
As anyone who reads this blog regularly might have expected, I did go ahead and add the page of questions (see last Friday’s post) as a sort of index to the sample Lifetime Value system. I like the results even more than I had expected. It’s much easier to read the questions than to look at the report names and remember which data that report presents. And, come to think of it, there’s no reason to limit this approach to one question per report. Since many reports answer several different questions, you could have several question-buttons point to the same report.
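In code terms, the question index is just a many-to-one mapping from questions to reports. Here is a hypothetical Python rendering (the real thing is buttons in QlikTech, and these particular questions are invented):

```python
# Hypothetical question index: several questions can share one report tab.
question_index = {
    "How much do revenue, marketing costs and product costs contribute to total value?":
        "Value Components",
    "Which acquisition sources bring in the most valuable customers?": "Source Analysis",
    "Which sources look cheap but deliver low back-end value?": "Source Analysis",
}

def report_for(question):
    # Fall back to the index page when a question isn't mapped.
    return question_index.get(question, "Index")

print(report_for("Which sources look cheap but deliver low back-end value?"))
```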
Adding the question-buttons didn’t take very long, so most of my attention went to the more challenging question of whether I could build the One Big Button that identifies business opportunities. This took a bit more thought.
Of course, the proper response is that every company will have its own way of analyzing the data, based on the nature of its business and on corporate and personal preferences. It would be legitimate, and possibly wiser, to let it go at that. But I did wonder whether there could be a generic analysis based on the generic nature of the Lifetime Value components themselves.
I think there is. Start with the three major divisions in the LTV model: original orders, renewal orders, and cross sell orders. For each of these, ask: which components of total LTV does it affect, and what business decisions can we identify that would change those components?
- For original orders, the key components are acquisition cost and customer quality. (Quality is measured by back-end results; that is, renewals and cross sales). The key decision is source mix. So it’s plausible to identify situations where there is an opportunity to improve results by investing more in higher-performing sources and less in the weaker ones.
- For renewal orders, the key components are years per customer and value per year. In the real world, how long customers remain active (i.e., years per customer) is determined primarily by their experience with the actual product or service. (Companies can and do create retention programs, but these can only work at the margins and are rarely a major expense because the amount of dollars that can be spent effectively is quite limited.)
It’s hard to identify a single variable that impacts experience quality in the same way that source mix affects acquisitions. But it’s probably valid in general to assume that improving the experience will increase product and/or service costs, and thus decrease margin (revenue – costs). (Yes, I know that higher spending does not necessarily produce a better experience and that experience improvements sometimes even reduce costs. I’ll readily agree that this particular area is the one where company-specific analytics offer the most improvement over generic LTV components.) Note that increasing years per customer affects cross sell as well as renewal revenue—a model relationship which probably reflects most businesses’ economic reality.
- For cross sell orders, once we assume that years per customer is mostly the result of customer experience, the remaining component is value per year. This again can be considered primarily a function of marketing efforts. So we can look for situations where greater investment in cross sell marketing is likely to improve overall results.
In brief, we’ve identified a key variable for each LTV element: source mix for original orders; margin for renewal orders; and marketing spend for cross sell orders. Respectively, these impact acquisition cost and back-end value; back-end value and longevity; and cross sell value. As you see, each element affects itself and its successors.
So far so good, but we still haven’t explained how to identify opportunities for improvement in each element, let alone how to estimate the impact of any change or how to prioritize the changes. I’ll start to reveal those mysteries tomorrow.
Friday, March 02, 2007
Business Intelligence and the One Big Button
Literal-minded creature that I am, yesterday’s discussion of organizing analysis tools around questions led me to consider changing my sample LTV system to open with a list of questions that the system can answer. Selecting a question would take you to the tab with the related information. (Nothing unique here—many systems do this.)
But I soon realized that things like “How much do revenue, marketing costs and product costs contribute to total value?” aren’t really what managers want to know. In fact, they really want just one button that answers the question, “How can I make more money?” This must look beyond past performance, and even beyond the factors that caused past performance, to identify opportunities for improvement.
You could argue that this is where human creativity comes in, and ultimately you’d be correct. But if we limit the discussion to marginal improvements within an existing structure, the process used to uncover opportunities is pretty well defined and can indeed be automated. It involves comparing the results of on-going efforts—things like different customer acquisition programs—and shifting investments from below-average to above-average performers.
Of course, it’s not trivial to get information on the results. You also have to estimate the incremental (rather than average) return on investments. But standard systems and formulas can do those sorts of things. They can also estimate the size of the opportunity represented by each change, so the system can prioritize the list of recommendations that the One Big Button returns.
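The core of that comparison is almost trivially small in code. Here is a toy Python version; the programs and ROI figures are invented, and a real system would use incremental rather than average returns, as noted above.

```python
# Toy version of the automated comparison: flag programs above or below
# the average ROI as candidates for more or less investment.
programs = {"email": 2.4, "paid search": 3.1, "direct mail": 1.2, "display": 0.9}

average_roi = sum(programs.values()) / len(programs)  # 1.9 for these figures
recommendations = [
    (name, "shift budget toward" if roi > average_roi else "shift budget away from")
    for name, roi in sorted(programs.items(), key=lambda kv: kv[1], reverse=True)
]
for name, action in recommendations:
    print(f"{action} {name}")
```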
Now, if the only thing managers have to do is push one button, why not automate the process entirely? Indeed, you may well do that in some situations. But here’s where we get back to human judgment.
It’s not just that systems sometimes recommend things that managers know are wrong. An automated forecast based on toy sales by week would predict incredible sales each January, simply by extrapolating the December holiday spike. Any human (at least in the U.S.) knows the pattern is seasonal. However, this isn’t such a big deal. Systems can be built to incorporate such factors and can be modified over time to avoid repeating an error.
The real reason you want humans involved is that looking at the recommendations and underlying data will generate new ideas and insights. A machine can only work within the existing structure, but a smart manager or analyst can draw inferences about what else might be worth trying. This will only happen if the manager sees the data.
I’m not saying any of the ideas I’ve just presented are new or profound. But they’re worth keeping in mind. For example, they apply to the question of whether Web targeting should be based on automated behavior monitoring or structured tests. (The correct answer is both—and make sure to look at the results of the automated systems to see if they suggest new tests.)
These ideas may also help developers of business intelligence and analytics systems understand how they can continue to add value, even after specialized features are assimilated into broader platforms. (I’m thinking here of acquisitions: Google/Urchin, Omniture/Touch Clarity, Oracle/Hyperion, SAP/Pilot, and so on.) Many analytical capabilities are rapidly approaching commodity status. In this world, only vendors who help answer the really important question—vendors who put something useful behind the One Big Button—will be able to survive.
Thursday, March 01, 2007
Users Want Answers, Not Tools
I hope you appreciated that yesterday’s post about reports within the sample Lifetime Value system was organized around the questions that each report answered, and not around the report contents themselves. (You DID read yesterday’s post, didn’t you? Every last word?) This was the product of considerable thought about what it takes to make systems like that useful to actual human beings.
Personally I find those kinds of reports intrinsically fascinating, especially when they have fun charts and sliders to play with. But managers without the time for leisurely exploration—shall we call it “data tourism”?—need an easier way to get into the data and find exactly what they want. Starting with a list of questions they might ask, and telling them where they will find each answer, is one way of helping out.
Probably a more common approach is to offer prebuilt analysis scenarios, which would be packages of reports and/or recommended analysis steps to handle specific projects. It’s a more sophisticated version of the same notion: figure out what questions people are likely to have and lead them through the process of acquiring answers. There is a faint whiff of condescension to this—a serious analyst might be insulted at the implication that she needs help. But the real question is whether non-analysts would have the patience to work through even this sort of guided presentation. The vendors who offer such scenarios tell me that users appreciate them, but I’ve never heard a user praise them directly.
The ultimate fallback, of course, is to have someone else do the analysis for you. One of my favorite sayings—which nobody has ever found as witty as I do, alas—is that the best user interface ever invented is really the telephone: as in, pick up the telephone and tell somebody else to answer your question. Many of the weighty pronouncements I see about how automated systems can never replace the insight of human beings really come down to this point.
But if that’s really the case, are we just kidding ourselves by trying to make analytics accessible to non-power users? Should we stop trying and simply build power tools to make the real experts as productive as possible? And even if that’s correct, must we still pretend to care about non-power users because they often control the purchase decision?
On reflection, this is a silly line of thought. Business users need to make business decisions and they need to have the relevant information presented to them in ways they can understand. Automated systems make sense but still must run under the supervision of real human beings. There is no reason to try to turn business users into analysts. But each type of user should be given the tools it needs to do its job.
Another Unforgivably Long Post on Lifetime Value
A few weeks ago I wrote a long series of posts about the uses of Lifetime Value. I made several references to a “Lifetime Value system” that would support many of these applications. I’ve now taken the logical next step and built a sample system, just to see how well what I described hung together in practice. This took only a few days’ work, using a very slick and efficient business intelligence tool called QlikTech (for which Client X Client is a reseller, if anyone is interested). I’m happy to report that the results were more than satisfactory—the system exceeds my expectations for the amount of useful information it makes easily accessible. And it’s lots of fun to play with, in a geeky sort of way.
I’ll go through the contents shortly. But first you need a bit of context. You’ll recall from my earlier posts that my fundamental premise was that starting with Lifetime Value and then breaking it into components is a powerful way to understand what’s happening in a business. In the test system, I started with 3.3 million transactions relating to 1.1 million customers. Dates ranged from 2002 through mid-2006. Customers were assigned to groups based on the year of their first order (Start Year), the product purchased in that order, and the promotion source. Transactions were further classified by “Life Year”—that is, whether the purchase fell within the first, second, third, or fourth year after the original order. I also calculated a summary figure at the customer level for whether each customer was “active” (that is, had at least one transaction) during each Life Year. For each transaction, I had available the marketing cost; the revenue (positive or negative, since refunds were included in the data); and the product cost (cost of goods plus fulfillment). I also had the product purchased in each transaction, which I summarized into three categories: original purchase, repurchase of the original product (renewal), and purchase of a different product (cross sell).
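For the technically inclined, here is roughly what that classification logic looks like. This is a minimal Python sketch with invented field names and toy records; the actual work was done in QlikTech, and none of this code comes from the real system.

```python
from datetime import date

# Toy transaction records; the real data had 3.3 million of these.
transactions = [
    {"customer_id": 1, "order_date": date(2003, 4, 10), "product": "A",
     "source": "catalog", "revenue": 50.0, "marketing_cost": 5.0,
     "product_cost": 20.0},
    {"customer_id": 1, "order_date": date(2004, 6, 2), "product": "B",
     "source": "catalog", "revenue": 45.0, "marketing_cost": 0.0,
     "product_cost": 18.0},
]

# Each customer's first order defines the group: Start Year, original
# product, and promotion source.
first_order = {}
for t in sorted(transactions, key=lambda t: t["order_date"]):
    first_order.setdefault(t["customer_id"], t)

for t in transactions:
    first = first_order[t["customer_id"]]
    t["start_year"] = first["order_date"].year
    # Life Year 1 = within 365 days of the first order, and so on.
    t["life_year"] = (t["order_date"] - first["order_date"]).days // 365 + 1
    # Order type relative to the original purchase.
    if t is first:
        t["order_type"] = "original"
    elif t["product"] == first["product"]:
        t["order_type"] = "renewal"
    else:
        t["order_type"] = "cross_sell"

# A customer is "active" in a Life Year if they placed at least one order then.
active = {(t["customer_id"], t["life_year"]) for t in transactions}
```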
Once the actual data had been summarized, I built a simple forecasting system that used it to project future results for each customer group with less than four years of history. The projections included the same elements (costs, revenues, etc.) as the original transactions. This meant I had four “life years” of data, either actual or forecast or a combination, for each set of customers. This was all the data I needed for my analyses.
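The post doesn’t say how the projections were built, so treat the following as one plausible minimal approach rather than a description of the real thing: extend each incomplete cohort using the average year-over-year value ratios observed in the cohorts that do have the data. All figures and names are invented for illustration.

```python
# actuals[(start_year, life_year)] = value observed for that cohort-year.
# 2002 starters have all four Life Years; 2005 starters have only the first.
actuals = {
    (2002, 1): 100.0, (2002, 2): 60.0, (2002, 3): 40.0, (2002, 4): 30.0,
    (2003, 1): 110.0, (2003, 2): 70.0, (2003, 3): 45.0,
    (2004, 1): 120.0, (2004, 2): 66.0,
    (2005, 1): 130.0,
}

def avg_ratio(life_year):
    """Average ratio of a Life Year's value to the prior year's,
    across cohorts where both years were actually observed."""
    pairs = [(v, actuals[(sy, life_year - 1)])
             for (sy, ly), v in actuals.items()
             if ly == life_year and (sy, life_year - 1) in actuals]
    return sum(v / prior for v, prior in pairs) / len(pairs)

# Fill in the missing cohort-years, so every Start Year group has four
# Life Years of actual, forecast, or mixed data.
projected = dict(actuals)
for start_year in sorted({sy for sy, _ in actuals}):
    for life_year in range(2, 5):
        if (start_year, life_year) not in projected:
            projected[(start_year, life_year)] = (
                projected[(start_year, life_year - 1)] * avg_ratio(life_year))
```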
The user interface of the system is organized into tabs, each designed to answer a particular question. Because QlikTech makes it very easy to drill down along multiple dimensions, I could view reports in each tab for the business as a whole or for subsets such as a particular product, source, start year, life year, etc. This effectively meant I could examine results for different segments, sliced pretty much any way I wanted.
The questions and their associated tabs are:
- How does total performance compare across products? (Overview tab). This is the natural starting point. It shows the number of new customers, lifetime value per customer, and total lifetime value added (i.e., number of customers x value per customer). This shows the total amount of value created by each product. On this and all other tabs, I can see the data both in tables and on charts that highlight different elements.
Incidentally, the system is set up so I can choose how many Life Years to include in the LTV calculation and what discount rate to apply to future years. These values are controlled with sliders so I can change them and see the numbers and charts adjust instantly. (The sketch after the list of tabs shows the underlying arithmetic.) I told you this was geeky fun.
- Which products show the greatest change in value? (Variance tab). To my mind, this is the key to the system because it lets the user identify products that merit closer examination because their performance has changed, either for better or worse. It does this by calculating the values for 2004 and 2005 for Lifetime Value and its two major components (number of new customers and value per customer). The system then calculates the change in each of these elements from year to year. It then does a standard variance calculation showing how much of the total change in LTV added was due to quantity (change in new customers x 2004 value per customer), how much to rate (change in value per customer x 2004 new customers), and how much to the combination of changes (change in value per customer x change in new customers); see the sketch after this list. Users can sort the product list on any of these elements. This means they can quickly identify which products, say, had the greatest improvement in value per customer or the worst drop in number of new customers. The ones with the biggest changes are the ones you want to look at first.
- How has LTV changed over time? (Detail by Start Year). This shows whether a product is strengthening or weakening over time by breaking out the three main LTV measures (value added, new customers, value per customer) by Start Year. ‘Nuff said.
- How much value is earned in the first, second, third, etc. year of the customer’s lifetime? (Detail by Life Year). This breaks down the total value per customer figure to see how much is earned in each Life Year. It also multiplies this by the number of starting customers to show the net dollar amount earned in each year. This helps companies understand the cash flows associated with a new customer, and in particular how quickly they are recouping the acquisition cost.
- How have the source mix and LTV by source changed over time? (Detail by Source). This shows the number of new customers, value per customer, and LTV added, broken down by source by start year. Thus it shows both the change in mix and the change in performance by source. This helps users understand some of the dynamics driving the changes in the value for the product as a whole. The charts on this tab are particularly helpful, making it easy to see which sources are growing and shrinking, how performance within each source is changing, and which sources are providing higher or lower fractions of the total value.
- How many customers remain active in the first, second, third, etc. year after they begin? (Active Customers by Year). This shows the active customer counts by Life Year for each Start Year group. Seeing how long customers remain active helps to illustrate attrition and longevity—but, unlike summary measures, it shows the drop-off patterns from year to year. Showing how the values change for different Start Year groups shows whether results are getting better or worse over time. This might reflect either a change in the nature of the customers acquired during different years, or a change in customer satisfaction with their experience with the company. This tab (like most others, although I haven’t mentioned it) also shows these values further broken down by source, so users can see whether a change in overall results is due to a change in source mix, can compare performance for different sources (which usually does vary significantly), and can see how the sources themselves change over time. Again, charts make this much easier to understand than raw tables.
- How much value is earned from original, renewal and cross sell orders, and how is this distributed over time? (Value by Order Type). This shows the net value per starting customer for the different types of orders (original, renewal and cross sell). It further shows these broken down by Life Year. It’s particularly important in businesses which depend on “loss leaders” to bring in new customers and then make their money by selling them other products. Gaining this holistic view of the customer is one of the major reasons to look at Lifetime Value rather than simply analyzing product sales on their own.
- How much do revenue, marketing costs and product costs contribute to total value, and how does this change over time? (Value by Value Type). This is yet another way of looking at the lifetime value components, in this case considering the revenue, marketing costs and product costs. These are further broken out by Life Year, which particularly highlights the impact of acquisition costs (which by definition occur during the first Life Year). Further splitting the results by source makes even clearer which sources succeed because their acquisition costs are low, and which perform well because their customers are of high quality. This insight can help suggest which sources have the best prospects for further growth and which might need to be cut back.
- How is attrition impacting customer value and how is it changing? (LTV Components Overview). This shows some detail within the value per customer component, breaking it down by value per original order, value on later orders (renewals plus cross sell), and active years per customer. Note that the math isn’t quite right here—value on later orders is actually calculated as (active years per customer x later order value per active year). I didn’t show that final component because it’s redundant and there is enough going on already. The most concrete new piece of information shown in this tab is the active years per customer figure. This is shown by Start Year to see any changes over time. The tab also shows original value, which was already presented in the Value by Order Type tab, although not by Start Year. Later value was also shown in Value by Order Type, although there it was split between renewal and cross sell.
- What are the detailed performance measures? (LTV Components by Source). This tab shows the finest level of component analysis. It breaks the original order value into acquisition cost and gross margin (revenue – product cost), and breaks the later order value into value per year and active years per customer. These are divided by source (for all Start Years combined) and by Start Year (for all sources combined).
- What are detailed performance measures by source over time? (LTV Components by Year). This also shows the acquisition cost, gross margin, later value per year and active years per customer, but now broken down by source by Start Year. This provides the most precise view of how these measures have changed over time.
- What are active customers worth in later years? (Value per Active Customer). This shows the values earned per active customer in later Life Years. Note that all previous figures looked at value per starting customer. The value per active customer figure is of course higher in any given year, since there are always fewer active than starting customers once you get past Life Year 1. Value per active customer gives some indication of what you might spend to retain these customers, although of course a proper comparison would be to look ahead over the same time horizon as your original Lifetime Value calculations. I couldn’t do this for most groups in my sample system because I didn’t forecast beyond 2006. A more sophisticated forecasting system wouldn’t have that limit and would thus give a true LTV per active customer. Results in this tab are also broken down by Start Year so any trends become apparent.
- How does value per active customer change by source? (Value per Active Customer Detail). This adds a source breakdown to the value per active customer by Life Year by Start Year. Again the primary purpose is to help understand what currently active customers are worth, this time at a source level.
- Has across-the-board performance changed significantly from one calendar year to another? (Value by Transaction Year). This shows value per Life Year per starting customer, but it’s organized by calendar year rather than Start Year. This would highlight any changes affecting all customers at the same time, such as an across-the-board price increase, general fall-off or improvement due to economic conditions, or the result of a fulfillment problem. Values in this tab are not discounted relative to the customer Start Year, so they will differ from the figures in the Detail by Life Year tab unless the discount rate is set to 0.
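As promised above, here is a small sketch of the arithmetic behind the Overview and Variance tabs. The sliders become ordinary function parameters, the figures are invented, and the quantity/rate/joint split follows the formulas described in the Variance tab discussion; nothing here is taken from the actual QlikTech application.

```python
# Per-customer value earned in each Life Year for one product, by Start Year.
# (Invented figures, for illustration only.)
value_by_life_year = {
    2004: [40.0, 15.0, 8.0, 5.0],
    2005: [35.0, 14.0, 7.0, 4.0],
}
new_customers = {2004: 10000, 2005: 12000}

def ltv_per_customer(values, horizon=4, discount_rate=0.10):
    """Discounted value per starting customer. The horizon and the
    discount rate are the 'sliders' on the Overview tab; the first
    Life Year is not discounted."""
    return sum(v / (1 + discount_rate) ** t
               for t, v in enumerate(values[:horizon]))

ltv = {yr: ltv_per_customer(v) for yr, v in value_by_life_year.items()}
total_added = {yr: ltv[yr] * new_customers[yr] for yr in ltv}

# Variance tab: split the change in total LTV added into quantity, rate,
# and joint effects. The three pieces sum exactly to the total change.
d_qty = new_customers[2005] - new_customers[2004]
d_rate = ltv[2005] - ltv[2004]
quantity_effect = d_qty * ltv[2004]          # more (or fewer) customers
rate_effect = d_rate * new_customers[2004]   # more (or less) value each
joint_effect = d_qty * d_rate                # interaction of the two
assert abs((quantity_effect + rate_effect + joint_effect)
           - (total_added[2005] - total_added[2004])) < 1e-6
```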
Whew! You definitely deserve a prize if you’ve actually read this far. I know this is dense stuff, but it’s also fascinating (to me, at least) just how much can be extracted from a relatively simple set of information. Trust me that it’s much more exciting when it’s your own data you’re looking at, and you can suddenly see information that’s been hidden for years.
It also helps that the charts have pretty colors.