Sage Software (www.sagesoftware.com) is one of those amorphous software companies that have grown by acquiring many products and keeping them pretty much separate. The company does have a focus: it sells accounting and CRM software to small and mid-sized businesses. But under that umbrella its Web site lists thirty different products, including the well-known brands Peachtree, DacEasy, Accpac and MAS for accounting, ACT! for contact management, and SalesLogix and Sage CRM for CRM.
This broad product line poses a particular challenge in writing a white paper that does what white papers are supposed to do: give objective-sounding information that subtly pushes readers toward the sponsor’s products. With so many different products, Sage can’t simply promote the features of any one of them.
But Sage’s paper “17 Rules of the Road for CRM”, available here, rises splendidly to the task. It does offer some very sound advice, from taking a broad perspective (“1. CRM is more than a product, it’s a philosophy”; “2. Customers are everywhere: clients, vendors, employees, mentors”) to careful preparation (“5. Planning pays”, “6. Prepare for product demos”) to deployment (“14. Implementation method is as important as product choice”, “15. Training can’t be ‘on the job’”, “16. Test, or crash and burn”, “17. Focus on CRM goals: improve customer satisfaction, shorten sales cycles, and increase revenue”). Yet it also throws in some points that are tailored to supporting Sage against its competitors.
(Come to think of it, Sage sells mostly through channel partners who provide the consulting, selection and implementation services highlighted in the points listed above. So even these points are really leading readers to Sage strengths.)
Specifically, Sage CRM sells against three types of challengers: point solutions such as contact management or customer service systems; enterprise software such as Siebel/Oracle or SAP CRM; and hosted systems such as Salesforce.com and RightNow. There are white paper rules targeted at each:
- point solutions: “3. Don’t confuse CRM with contact management”, “8. CRM is not a point solution”, “10. Multi-channel access is the only way to go”, “13. CRM is not for any single department, it’s for the whole company”
- enterprise software: “4. CRM solutions are different for midsized companies”, “12. High cost does not necessarily mean high value”
- hosted systems: “7. Implement current technology” (don’t rely on promised future features; include back-office integration and customizability), “9. Speed ROI through back-office integration”, “11. Look for true platform flexibility” (ability to switch between hosted and installed versions)
These points are perfectly valid—the only one I might question is whether you really need a product that can switch between hosted and installed versions. I’m simply noting, with genuine admiration, how nicely Sage has presented them in ways that support its particular interests. It’s always fun to see a master at work.
Wednesday, November 29, 2006
David Raab in Webinar
I'll be presenting a Webinar on "Getting Started with Marketing Automation", sponsored by Unica, on Wednesday, December 6 at 11:30 Eastern / 8:30 Pacific. Click here to register.
One Final Post on Multi-Variate Testing
It’s been fun to explore multi-variate testing in some depth over these last few entries, but I fear you Dear Readers will get bored if I continue to focus on a single topic. Also, my stack of white papers to review is getting taller and I do so enjoy critiquing them. (At the top of the pile is Optimost’s “15 Critical Questions to Ask a Multivariable Testing Provider” available here. This covers many of the items I listed on Monday although of course it puts an Optimost-centric spin on them. Yes I read it before compiling my own list; let’s just call that “research”.)
Before I leave the topic altogether, let me share some final tidbits. One is that I’m told it’s not possible to force a particular combination of elements into a Taguchi test plan because the combinations are determined by the Taguchi design process itself. I’m not 100% sure this is correct: I suspect that if you specified a particular combination as a starting point, you could design a valid plan around it. But the deeper point, which certainly does make sense, is that substituting a stronger combination for a weaker one within an active test plan would almost surely invalidate the results. The more sensible method is to keep tests short, and either complete the current test unchanged or replace it with a new one if an important result becomes clear early on. Remember that this applies to multi-variate tests, where results from all test combinations are aggregated to read the final results. In a simpler A/B test, you have more flexibility.
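For readers who like to see the mechanics, here is a tiny sketch in Python of the kind of fractional (orthogonal-array) plan a Taguchi-style design produces, and why a "must test" combination may simply not be one of the cells. The elements and content are invented for illustration; this is generic experimental design, not any vendor's actual algorithm.

```python
# Illustrative sketch only: a classic L4 orthogonal array used as a half-fraction
# test plan for three two-level page elements. Generic experimental design,
# not any vendor's algorithm; all content is made up.
from itertools import product

elements = {
    "headline": ["Save 20% Today", "Free Shipping on Orders"],
    "image":    ["product_photo", "lifestyle_photo"],
    "button":   ["Buy Now", "Add to Cart"],
}
names = list(elements)

# Standard L4(2^3) orthogonal array: 4 balanced cells instead of the full 2**3 = 8.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
plan = [{n: elements[n][level] for n, level in zip(names, row)} for row in L4]

full_factorial = list(product(range(2), repeat=len(names)))
print(f"full factorial: {len(full_factorial)} cells, fractional plan: {len(plan)} cells")
for i, cell in enumerate(plan, 1):
    print(f"cell {i}: {cell}")

# A "must test" combination (last year's winner, the boss's favorite) may simply
# not be one of the fractional cells, which is the practical obstacle to forcing
# arbitrary combinations into an existing orthogonal design.
must_test = {"headline": "Save 20% Today", "image": "lifestyle_photo", "button": "Buy Now"}
print("must-test combination already in plan:", must_test in plan)
```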
The second point, which is more of a Note To Self, is that multi-variate testing still won't let me use historical data to measure the impact of different experiences on future customer behavior. I want to do this for value formulas in the Customer Experience Matrix. The issue is simple: multi-variate tests require valid test data. This means the test system must determine which contents are presented, and everything else about the test customers must be pretty much the same. Historical data won’t meet these conditions: either everyone saw the same content, or the content was selected by some existing business rule that introduces its own bias. The same problem really applies to any analytical technique, even things like regression that don’t require data generated from structured tests. When a formal test is possible, multi-variate testing can definitely help to measure experience impacts. But, as one of the vendors pointed out to me yesterday, it’s difficult for companies to run tests that last long enough to measure long-term outcomes like lifetime value.
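To make that bias concrete, here is a toy simulation with invented numbers: when a business rule decides who sees which content, the historical comparison mostly measures the rule, not the content, while a randomized test recovers the true lift.

```python
# Toy simulation (all numbers invented) of the bias described above: when an
# existing business rule decides who sees which content, the historical
# comparison of content A vs. B mostly measures the rule, not the content.
import random

random.seed(42)

def responds(is_high_value, content):
    # "True" effect built into the simulation: content B lifts response by
    # five points for everyone, regardless of customer value.
    base = 0.30 if is_high_value else 0.10
    return random.random() < base + (0.05 if content == "B" else 0.0)

def run(assign, visits=100_000):
    results = {"A": [], "B": []}
    for _ in range(visits):
        high_value = random.random() < 0.3
        content = assign(high_value)
        results[content].append(responds(high_value, content))
    return {c: round(sum(r) / len(r), 3) for c, r in results.items()}

# Historical data: a rule that mostly sends content B to high-value customers.
rule_based = lambda high_value: "B" if (high_value or random.random() < 0.1) else "A"
# Proper test: random assignment, everything else held equal.
randomized = lambda high_value: random.choice("AB")

print("rule-based history:", run(rule_based))   # wildly exaggerates B's advantage
print("randomized test:   ", run(randomized))   # recovers roughly the 5-point lift
```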
Tuesday, November 28, 2006
Distinguishing among Multi-Variate Testing Products (I'm going to regret this)
My last two posts listed many factors to consider when evaluating multi-variate Web testing systems. Each factor is important, which in practical terms means it can get you fired (if it prevents you from using a system you’ve purchased). So there’s really no way to avoid researching each factor in detail before making a choice.
And yet…there are so many factors. If you’re not actively engaged in a selection process, isn’t there a smaller number of items you might keep in mind when mentally classifying the different products?
One way to answer this is to look at the features that truly appear to distinguish the different vendors—things that either are truly unique, or that the vendor emphasizes in its own promotions. Warning: “unique” is a very dangerous term to use about software. Some things that are unique do not matter; some things that vendors believe are unique are not; some things that are unique in a technical sense can be accomplished using other, perfectly satisfactory approaches.
A list of distinguishing features only makes sense if you know what is commonly available. In general (and with some exceptions), and as sketched in the short code example after this list, you can expect a multi-variate testing system to:
- have an interface that lets marketers set up tests with minimal support from Web site technicians
- support Taguchi method multi-variate testing and simpler designs such as A/B splits
- use segmentation to deliver different tests to different visitors
- use Javascript snippets on each Web page to call a test engine which returns test content
- use persistent cookies, and sometimes stored profiles, to recognize repeat visitors
- provide real time reporting of test results
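Here is the minimal sketch promised above of those shared mechanics: segment the visitor, hash a persistent cookie ID so repeat visits land in the same test cell, and return that cell's content to the page-level snippet. All names and rules are hypothetical; real products wrap this in hosted services, user interfaces and reporting.

```python
# Hypothetical sketch of the generic assignment mechanics described above.
import hashlib

RECIPES = {  # one list of test cells per visitor segment
    "paid_search": [{"headline": "Save 20%"}, {"headline": "Free Shipping"}],
    "default":     [{"headline": "Welcome Back"}, {"headline": "New Arrivals"}],
}

def segment(visitor):
    referrer = visitor.get("referrer", "")
    return "paid_search" if referrer.startswith("search:") else "default"

def assign_cell(cookie_id, cells):
    # Deterministic hash of the persistent cookie ID: same visitor, same cell.
    bucket = int(hashlib.sha256(cookie_id.encode()).hexdigest(), 16) % len(cells)
    return cells[bucket]

def serve_test_content(visitor):
    cells = RECIPES[segment(visitor)]
    return assign_cell(visitor["cookie_id"], cells)

visitor = {"cookie_id": "abc123", "referrer": "search:crm software"}
print(serve_test_content(visitor))
print(serve_test_content(visitor))  # repeat visit: same content again
```

In a live product, the page's Javascript snippet would make this call to the hosted engine and drop the returned content into a designated page slot, with a separate beacon recording the conversion event.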
That said, here is what strikes me as the single most distinguishing feature of each of the multi-variate testing vendors (listed alphabetically). No doubt each vendor has other items it would like to add—I’ve listed just one feature per vendor to make as clear as possible that this isn’t a comprehensive description.
- Offermatica: can run multi-page and multi-session tests. This isn’t fully unique, but some products only test components within a single page.
- Optimost: offers “optimal design” in addition to the more common Taguchi method for multi-variate testing. According to Optimost, "optimal design" does a better job than Taguchi of dealing with relationships among variables.
- SiteSpect: delivers test content by intercepting and replacing Web traffic rather than inserting Javascript snippets. This can be done by an on-site appliance or a hosted service. (Click here to see a more detailed explanation from SiteSpect in a comment on yesterday’s post.)
- Vertster: uses AJAX/DHTML to generate test content within the visitor’s browser rather than inserting it before the page is sent. All test content remains on the client’s Web server.
There are (at least!) two more vendors who offer multi-variate testing but are not exactly focused on this area:
- Kefta: tightly integrates multi-variate testing results with business rules and system-generated visitor profiles used to select content. Kefta considers itself a “dynamic targeting” system.
- Memetrics: supports “marketing strategies optimization” with installed software to build “choice models” of customer preferences across multiple channels. Also has a conventional, hosted page optimization product using A/B and multi-variate methods.
Monday, November 27, 2006
Still More on Multi-Variate Testing (Really Pushing It for a Monday)
My last entry described in detail the issues relating to automated deployment of multi-variate Web test results. But automated deployment is just one consideration in evaluating such systems. Here is an overview of some others.
- segmentation: as pointed out in a comment by Kefta’s Mark Ogne, “testing can only provide long term, fruitful answers within a relatively homogeneous group of people.” Of course, finding the best ways to segment a Web site’s visitors is a challenge in itself. But assuming this has been accomplished, the testing system should be able to identify the most productive combination of components for each segment. Ideally the segments would be defined using all types of information, including visitor source (e.g. search words), on-site behavior (previous clicks), and profiles (based on earlier visits or other transactions with the company). For maximum effectiveness, the system should be able to set up different test plans for each segment.
- force or exclude specified combinations: there may be particular combinations you must test, such as a previous winner or the boss’s favorite. You may wish to exclude other combinations, perhaps because you’ve tested them before. The system should make this easy to do.
- allow linkages among test components: certain components may only make sense in combination; for example, service plans may only be offered for some products, or some headlines may be related to specific photos. The testing system must allow the user to define such connections and ensure only the appropriate combinations are displayed. This should accommodate more than simple one-to-one relationships: for example, three different photos might be compatible with the same headline, while three different headlines might be compatible with just one of those photos. Such linkages, and tests in general, should extend across more than a single Web page so each visitor sees consistent treatments throughout the site. (A small sketch of such compatibility rules appears after this list.)
- allow linkages across visits: treatments for the same visitor should also be consistent across site visits. Although this is basically an extension of the need for page-to-page consistency, the technical solutions are different. Session-to-session consistency implies a persistent cookie or user profile or both, and is harder to achieve because of visitor behavior such as deleting cookies, logging in from different machines, or using different online identities.
- measure results across multiple pages and multiple visits: even when the components being tested reside on a single page, it’s often important to look at behaviors elsewhere on the site. For example, different versions of the landing page may attract customers with different buying patterns. The system must be able to capture such results and use them to evaluate test performance. It should also be able to integrate behaviors from outside of the Web site, such as phone orders or store visits. As with linkages among test components, different technologies may be involved when measuring results within a single page, across multiple pages, across visits and across channels. This means a system’s capabilities for each type of measurement must be evaluated separately.
- allow multiple success measures. Different tests may target different behaviors, such as capturing a name, generating an inquiry or placing an order. The test system must be able to handle this. In addition, users may want to measure multiple behaviors as part of a single test: say, average order size, number of orders, and profit margin. The system should be able to capture and report on these as well. As discussed in last Wednesday’s post, it can be difficult to combine several measures into one value for the test to maximize. But the system should at least be able to show the expected results of the tested combinations in terms of each measure.
- account for interactions among variables: this is a technical issue and one where vendors who use different test designs make claims that only an expert can assess. The fundamental concern is that specific combinations of components may yield results that are different from what would be predicted by viewing them independently. To take a trivial example, a headline and body text that gave conflicting information would probably depress results. Be sure to explore how any vendor you consider handles this issue and make sure you are comfortable with their approach.
- reporting: the basic output of a multi-variate test is a report showing how different elements performed, by themselves and in combination with others. Beyond that, you want help in understanding what this means: ranking of elements by importance; ranking of alternatives within each element; confidence statistics indicating how reliable the results are; any apparent interaction effects; estimated results for the best combination if it was not actually tested. A multi-variate test generates a great deal of data, so efficient, understandable presentation is critical. In addition to their actual reporting features, some vendors provide human analysts to review and explain results.
- integration difficulty and performance: the multi-variate testing systems all take over some aspect of Web page presentation by controlling certain portions of your pages. The work involved to set this up and the speed and reliability with which test pages are rendered are critical factors in successful deployment. Specific issues include the amount of new code that must be embedded in each page, how much this code changes from test to test, how much volume the system can handle (in number of pages rendered and complexity of the content), how result measurement is incorporated, how any cookies and visitor profiles are managed, and mechanisms to handle failures such as unavailable servers or missing content.
- impact on Web search engines: this is another technical issue, but a fairly straightforward one. Content managed by the testing system is generally not part of the static Web pages read by the “spiders” that search engines use to index the Web. The standard solution seems to be to put the important search terms in a portion of the static page that visitors will not see but the spiders will still read. Again, you need to understand the details of each vendor’s approach, and in particular how much work is involved in keeping the invisible search tags consistent with the actual, visible site.
- hosted vs. installed deployment: all of the multi-variate testing products are offered as hosted solutions. Memetrics and SiteSpect also offer installed options; the others don’t seem to but I can’t say for sure. Yet even hosted solutions can vary in details such as where test content is stored and whether software for the user interface is installed locally. If this is a major concern in your company, check with the vendors for the options available.
- test setup: last but certainly not least, what’s involved in actually setting up a test on the system? How much does the user need to know about Web technology, the details of the site, test design principles, and the mechanics of the test system itself? How hard is it to set up a new test and how hard to make changes? Does the system help to prevent users from setting up tests that conflict with each other? What kind of security functions are available—in a large organization, there may be separate managers for different site sections, for content management, and for approvals after the test is defined. How are privacy concerns addressed? What training does the company provide and what human assistance is available for technical, test design and marketing issues? The questions could go on, but the basic point is you need to walk through the process from start to finish with each vendor and imagine what it would be like to do this on a regular basis. If the system is too hard to use, then it really doesn’t matter what else it’s good at.
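As a small illustration of the "linkages among test components" item above, here is a sketch of compatibility rules pruning the candidate combinations before any test plan is built. The headlines, photos and rules are invented.

```python
# Sketch of linkage rules: compatibility constraints prune combinations before
# a test plan is designed. Content and rules are invented for illustration.
from itertools import product

headlines = ["Luxury Escapes", "Budget Getaways", "Family Fun"]
photos    = ["resort_pool", "hostel_bunks", "theme_park"]

# Many-to-many compatibility, not just one-to-one pairs.
compatible = {
    "Luxury Escapes":  {"resort_pool"},
    "Budget Getaways": {"hostel_bunks", "theme_park"},
    "Family Fun":      {"resort_pool", "theme_park"},
}

all_combos   = list(product(headlines, photos))
valid_combos = [(h, p) for h, p in all_combos if p in compatible[h]]

print(f"{len(valid_combos)} of {len(all_combos)} combinations allowed:")
for combo in valid_combos:
    print(" ", combo)
```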
Wednesday, November 22, 2006
More on Web Optimization: Automated Deployment
I’ve learned a bit more about Web optimization systems since yesterday’s post. Both Memetrics and Offermatica have clarified that they do in fact support some version of automated deployment of winning test components. It’s quite possible that other multi-variate testing systems do this as well: as I hope I made clear yesterday, I haven’t researched each product in depth.
While we’re on the topic, let’s take a closer look at automated deployment. It’s one of the key issues related to optimization systems.
The first point to consider is that automated anything is a double-edged sword: it saves work for users, often letting them react more quickly and manage in greater detail than they could if manual action were required. But automation also means a system can make mistakes which may or may not be noticed and corrected by human supervisors. This is not an insurmountable problem: there are plenty of techniques to monitor automated systems and to prevent them from making big mistakes. But those techniques don’t appear by themselves, so it’s up to users to recognize they are needed and demand they be deployed.
With multi-variate Web testing in particular, automated deployment forces you to face a fundamental issue in how you define a winner. Automated systems aim to maximize a single metric, such as conversion rate or revenue per visit. Some products may be able to target several metrics simultaneously, although I haven’t seen any details. (The simplest approach is to combine several different metrics into one composite. But this may not capture the types of constraints that are important to you, such as capacity limits or volume targets. Incorporating these more sophisticated relationships is the essence of true optimization.) Still, even vendors whose algorithms target just one metric can usually track and report on several metrics. If you want to consider multiple metrics when picking a test winner, automated deployment will work only if your system can automatically include those multiple metrics in its winner selection process.
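As a sketch of the composite-metric idea, here is one way (with invented weights and figures) to fold several measures plus a capacity constraint into a single score for automated winner selection to maximize.

```python
# Sketch with invented weights: combine several measures and a capacity
# constraint into one composite score for the automated winner pick.
def composite_score(cell, daily_capacity=500):
    score = (
        cell["revenue_per_visit"]
        + 20.0 * cell["conversion_rate"]
        - 0.05 * cell["support_calls_per_100_orders"]
    )
    # Constraint, not a preference: penalize cells that overload fulfillment.
    if cell["orders_per_day"] > daily_capacity:
        score -= 10.0 * (cell["orders_per_day"] - daily_capacity) / daily_capacity
    return score

cells = [
    {"name": "A", "revenue_per_visit": 2.40, "conversion_rate": 0.031,
     "support_calls_per_100_orders": 4.0, "orders_per_day": 430},
    {"name": "B", "revenue_per_visit": 2.90, "conversion_rate": 0.027,
     "support_calls_per_100_orders": 9.5, "orders_per_day": 620},
]
scores = {c["name"]: round(composite_score(c), 3) for c in cells}
print(scores)  # B wins on revenue alone, but the capacity penalty hands the win to A
print("winner:", max(cells, key=composite_score)["name"])
```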
A second consideration is automatic cancellation of poorly performing options within an on-going test. Making a bad offer is a wasted opportunity: it drags down total results and precludes testing something else which could be more useful. Of course, some below-average performance is inevitable. Finding what does and doesn’t work is why we test in the first place. But once an option has proven itself ineffective, we’d like to stop testing it as soon as possible.
Ideally the system would automatically drop the worst combinations from its test plan and replace them with the most promising alternatives. The whole point of multi-variate testing is that it tests only some combinations and estimates the results of the rest. This means it can identify untested combinations that work better than anything that's actually been tried. But you never know if the system’s estimates are correct: there may be random errors or relationships among variables (“interactions”) that have gone undetected. It’s just common sense—and one of the ways to avoid automated mistakes—to test such combinations before declaring them the winner. If a system cannot add those combinations to the test plan automatically, benefits are delayed as the user waits for the end of the initial test, reads the results, and sets up another test with the new combination included.
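Here is a small illustration, with invented response rates, of how an additive main-effects model built from a half-fraction nominates an untested combination. If two elements interact in a way the fraction cannot see, the additive estimate for that cell can be badly wrong, which is exactly why the predicted winner should be confirmed before it is deployed.

```python
# Invented response rates: an additive main-effects model predicts the best
# untested cell of a half-fraction. An undetected interaction would make the
# prediction wrong, hence the need for a confirmation test.
from itertools import product

# Four tested cells of a half-fraction for three two-level elements, with
# their observed conversion rates.
tested = {
    (0, 0, 0): 0.040,
    (0, 1, 1): 0.052,
    (1, 0, 1): 0.050,
    (1, 1, 0): 0.046,
}
grand_mean = sum(tested.values()) / len(tested)

def main_effect(factor, level):
    rates = [rate for cell, rate in tested.items() if cell[factor] == level]
    return sum(rates) / len(rates) - grand_mean

def predict(cell):
    # Additive model: grand mean plus one main effect per element.
    return grand_mean + sum(main_effect(f, level) for f, level in enumerate(cell))

untested = [c for c in product((0, 1), repeat=3) if c not in tested]
best = max(untested, key=predict)
print("predicted best untested cell:", best, "at", round(predict(best), 4))
# If, say, the second and third elements clash when both are at level 1, the
# true rate of (1, 1, 1) would sit below this estimate, and only a follow-up
# test of the predicted winner would reveal it.
```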
So far we’ve been discussing automation within the testing process itself. Automated deployment is something else: applying the test winner to the production system—that is, to treatment of all site visitors. This is technically not so hard for Web testing systems, since they already control portions of the production Web site seen by all visitors. So deployment simply means replacing the current default contents with the test winner. The only things to look for are (a) whether the system actually lets you specify default contents that go to non-test visitors and (b) whether it can automatically change those default contents based on test results.
Of course, there will be details about what triggers such a replacement: a specified time period, number of tests, confidence level, expected improvement, etc. Plus, you will want some controls to ensure the new content is acceptable: marketers often test offers they are not ready to roll out. At a minimum, you’ll probably want notification when a test has been converted to the new default. You may even choose to forego fully automated deployment and have the system request your approval before it makes a change.
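A sketch of such controls might look like the following; the thresholds and field names are hypothetical.

```python
# Hypothetical deployment guard: promote a winner to the default content only
# when confidence and lift clear thresholds, and optionally hold for approval.
def deployment_decision(result, min_confidence=0.95, min_lift=0.02, require_approval=True):
    if result["confidence"] < min_confidence:
        return "keep testing: not yet statistically confident"
    if result["lift_vs_default"] < min_lift:
        return "keep default: improvement too small to matter"
    if require_approval and not result.get("approved_by"):
        return "hold: notify the marketing manager and wait for approval"
    return f"deploy: replace default content with cell {result['winning_cell']}"

result = {"winning_cell": "C2", "confidence": 0.97, "lift_vs_default": 0.06}
print(deployment_decision(result))                              # held for approval
print(deployment_decision({**result, "approved_by": "jsmith"})) # now it deploys
```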
One final consideration. In some environments, tests are running continuously. This adds its own challenges. For example, how do you prevent one test from interfering with another? (Changes from a test on the landing page might impact another test on the checkout page.) Automated deployment increases the chances of unintentional interference along these lines. Continuous tests also raise the issue of how heavily to weight older vs. newer results. Customer tastes do change over time, so you want to react to trends. But you don’t want to overreact to random variations or temporary situations. Of course, one solution is to avoid continuous tests altogether, and periodically start fresh with new tests instead. But if you’re trying to automate as much as possible, this defeats your purpose. The alternative is to look into what options the system provides to deal with these situations and assess whether they meet your needs.
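One common way to weight newer results more heavily is simple exponential decay by age, so the measured rate tracks trends without whipsawing on one good day. Here is a sketch with an arbitrary 14-day half-life and invented daily results.

```python
# Sketch of exponential decay weighting; the half-life and data are invented.
HALF_LIFE_DAYS = 14.0

def weighted_rate(daily_results):
    """daily_results: list of (age_in_days, visitors, conversions)."""
    weighted_conversions = weighted_visitors = 0.0
    for age, visitors, conversions in daily_results:
        weight = 0.5 ** (age / HALF_LIFE_DAYS)   # weight halves every 14 days
        weighted_conversions += weight * conversions
        weighted_visitors += weight * visitors
    return weighted_conversions / weighted_visitors

history = [(60, 1000, 20), (30, 1000, 25), (7, 1000, 40), (1, 1000, 42)]
unweighted = sum(c for _, _, c in history) / sum(v for _, v, _ in history)
print("unweighted rate:    ", round(unweighted, 4))
print("decay-weighted rate:", round(weighted_rate(history), 4))  # leans toward recent days
```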
This is a much longer post than usual, but it’s kind of a relaxed day at the office and this topic has (obviously) been on my mind recently. Happy Thanksgiving.
Tuesday, November 21, 2006
Sorting Out the Web Optimization Options
Everybody wants to get the best results from their Web site, and plenty of vendors are willing to help. I’ve been trying to make sense of the different Web site optimization vendors, and have tentatively decided they fall into four groups:
- Web analytics. These do log file or page beacon analysis to track page views by visitors. Examples are Coremetrics, Omniture, WebSideStory, and WebTrends. They basically can tell you how visitors are moving through your site, but then it’s up to you to figure out what to do about it. So far as I know, they lack formal testing capabilities other than reporting on tests you might set up separately.
- multi-variate testing. These systems let users define a set of elements to test, build an efficient test matrix that tries them in different combinations, execute the tests, and report on the results. Examples are Google Website Optimizer, Offermatica, Optimost, SiteSpect and Vertster. These systems serve the test content into user-designated page slots, which lets them control what each visitor sees. Their reports estimate the independent and combined impact of different test elements, and may go so far as to recommend an optimal combination of components. But it’s up to the user to apply the test results to production systems. [Since writing this I've learned that at least some vendors can automatically deploy the winning combination. You'll need to check with the individual vendors for details.]
- discrete choice models. These resemble multi-variate testing but use a different mathematical approach. They present different combinations of test elements to users, observe their behavior, and create predictive models with weights for the different categories of variables. This provides a level of abstraction that is unavailable in the multi-variate testing results, although I haven’t quite decided whether this really matters. So far as I can tell, only one vendor, Memetrics, has built choice models into a Web site testing system. (Others including Fair Isaac and MarketingNPV offer discrete choice testing via Web surveys.) Like the multi-variate systems, Memetrics controls the actual Web site content served in the tests. It apparently does have the capability to move winning rules into production.
- behavioral targeting. These systems monitor visitor behavior and serve each person the content most likely to meet business objectives, such as sales or conversions. Vendors include Certona, Kefta, Touch Clarity, and [x+1]; vendors with similar technology for ad serving include Accipiter, MediaPlex, and RevenueScience. These systems automatically build predictive models that select the most productive content for each visitor, refine the models as new results accumulate, and serve the recommended content. However, they test each component independently and can only test offers. This means they cannot answer questions about combinations of, say, headline and pricing, or which color or page layout is best.
Clearly these are very disparate tools. I’m listing them together because they all aim to help companies improve results from their Web sites, and all thus compete for the attention and budgets of marketers who must decide which projects to tackle first. I don’t know whether there’s a logical sequence in which they should be employed or some way to make them all work together. But clarifying the differences among them is a first step to making those judgments.
Monday, November 20, 2006
'Big Ideas' Must Be Rigorously Measured
Last Friday, I clipped a BusinessWeek (www.businessweek.com) article that listed a “set of integrated business disciplines” that create “exemplary customer experiences”. The disciplines include customer-facing “moments of truth”, well-articulated brand values, close integration of technology and people, “co-creation” of experiences with customers, and an “ecosystem approach” to encompass all related products and services. (See “The Importance of Great Customer Experiences and the Best Ways to Deliver Them”, available here.) It’s a bit jargon-heavy for my taste, but does make the point that there’s more to Customer Experience Management than direct customer / employee interactions. Of course, that’s a key premise of our work at Client X Client. It was particularly on my mind because I had just written about a survey that seemed to equate customer experience with customer service (see my entry for November 16).
Later in the weekend, I spent some time researching Web optimization systems. As I dug deeper into the details of rigorous testing methods and precision delivery systems, the “integrated business disciplines” mentioned in the BusinessWeek piece began to look increasingly insubstantial. How could the concrete measurements of Web optimization vendors ever be applied to notions such as “moment of truth”? But, if the value of those notions can’t be measured, how can we expect managers to care about them?
The obvious answer is that “big ideas” really can’t be measured because they’re just too, well, big. (The implication, of course, is that anybody who even tries to measure them is impossibly small-minded). But that won't do. We know that the ideas’ value will in fact ultimately be measured in the only metric that really matters, which is long-term profit. And, since business profit is ultimately determined by customer values, we find ourselves facing yet again the core mission of the Customer Experience Matrix: bridging the gap between the soft and squishy notions of customer experience and the cold, hard measures of customer value.
My point is not that the Customer Experience Matrix, Client X Client and Yours Truly are all brilliant. It’s that working on “big ideas” of customer experience doesn’t exempt anyone from figuring out how those ideas will translate into actual business value. If anything, the bigger the idea, the more important it is to work through the business model that shows what makes it worth pursuing. Making, or even just testing, customer experience changes without such a model is simply irresponsible.
Friday, November 17, 2006
Ion Group Survey Stresses Importance of Service
“People consider personalized, intelligent and convenient contact the most important elements of added value a company can offer.”
Insight or cliché? It really depends on who said it and why.
In this case, the quote is from UK-based marketing services provider Ion Group (http://www.iongroup.co.uk/). Ion sent a three-question email survey to 1,090 representative UK consumers and analyzed the results. We don't know how many responded. (Click here for the study.)
Just knowing this tells us something about the quote: it isn't derived from a very big or detailed project, so the results are at best directional. In fact, they are barely better than anecdotal.
But the real question is, “Most important elements of added value” compared to what? Ion, to its credit, published the actual survey questions. It asked “What aspects of companies that you buy from do you consider offer the most value to you?” and gave a list that can be paraphrased (with their relative ranking) as:
- friendly, knowledgeable staff (127)
- open/contactable 12-24 hours a day (124)
- company can access my information (116)
- well known brand (104)
- nationwide network of outlets (104)
- environmentally friendly policies (102)
- loyalty scheme (98)
- send offers I’m interested in (93)
- periodically check whether I’m happy with my purchase (85)
- endorsed by celebrities (42)
It’s an interesting list and interesting rankings. Celebrities don’t matter – cool! (But bear in mind that these are UK consumers, not Americans.) Well targeted offers aren’t very important either – hmm, maybe not to consumers but how about to the companies that sell them things? Still, there is a clear message here: the top three items all boil down to service.
But wait - did you notice something odd?
Ion’s list is limited mostly to service considerations. Yet value is typically considered a combination of quality, service and price. So Ion is really just ranking alternatives within the service domain.
Why would Ion Group limit its survey in this fashion? A look at their Web site gives a hint. They offer event marketing, mystery shopping, contact center, fulfillment, affinity partnerships, lists, loyalty programs, and similar services. In other words, product quality and price are rarely within their control. Their survey focuses on what they know.
Fair enough. Still, it’s easy to misinterpret Ion’s findings unless you dig into the survey details. I wish they had been more explicit about the scope.
Thursday, November 16, 2006
Foxes in the Henhouse: Entellium and SpringCM Advise on Hosted Service Agreements
Back on September 26, I criticized a paper from Entellium (www.entellium.com) that I felt ignored the need to identify business requirements before looking at system functionality. It's uncomfortable to write negative things, but I did feel better when I saw a paper from Entellium itself make a similar point: “Unfortunately, most hosted CRM buyers spend 95 percent of their time focusing on the features and functions that a solution contains, and nearly no time on what happens after the sales contract is signed.” Exactly.
This quote is from “Buyer Beware: Tips for Getting the Best Agreement from Hosted Application Vendors”, available here. The paper is concerned with contract terms, not service contracts. Still, it reinforces my point that too much attention is paid to features and functions to the exclusion of everything else.
In any case, I’m pleased to say I liked this Entellium paper much better. For one thing it’s short—just three pages—and gets right to the point. It proposes negotiations in four areas:
- technical support and training (“live training should be free to all your employees regardless of when you hired them.”)
- service level standards (make sure you have a written Service Level Agreement that guarantees 99.5% uptime and that you know your rights concerning data back-up and access)
- long term contracts (vendors should be willing to work month-to-month; any longer term contract should guarantee a service level)
- access to data (demand immediate and full data export at any time in a usable format).
A few of the details are apparently tailored to Entellium’s particular offering—for example, it seems a bit odd to focus specifically on “live, web-based training”. But these are the right issues to address.
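One small aside on the 99.5% uptime figure in the service-level point above, since the arithmetic is easy to gloss over: 0.5% of a 720-hour month is roughly 3.6 hours of allowable downtime, which is worth keeping in mind when you negotiate that number and ask how downtime is measured.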
If you want to look at these issues in more depth, hosted content management vendor SpringCM (http://www.springcm.com/) has a 12-page document “In Pursuit of P.R.A.I.S.E: Delivering on the Service Proposition” available here. P.R.A.I.S.E. is an acronym for:
- Performance
- Reliability
- Availability
- Information Stewardship (security and backup)
- Scalability
- Enterprise Dependability
This paper covers vendor evaluation as well as contract negotiating points, so its scope is broader than the Entellium paper. It provides specific questions to ask and the answers to listen for, which is very useful. Although SpringCM is a content management specialist, the recommendations themselves are general enough to apply to CRM and other hosted systems as well.
Wednesday, November 15, 2006
More Thoughts on Visualizing the Customer Experience
I did end up creating a version of the Matrix demonstration system I described yesterday, using a very neat tool from Business Objects called Crystal Xcelsius (www.xcelsius.com). If you’d like a look, send me an email at draab@clientxclient.com. You’ll get an interactive Matrix embedded within an Adobe pdf.
The demonstration does what I wanted, but I’m not pleased with the results. I think the problem is that it violates the central Matrix promise of displaying information on a single page. Sliding through time periods, like frames in a movie, doesn’t show relationships among different interactions at a glance. The demonstration system attempts to overcome this by showing current and future interactions in each cell. But because the future interactions could occur tomorrow or next year, this still doesn’t give a meaningful representation of the relationships among the events.
I’m toying with an alternative approach similar to the “swim lanes” frequently used to diagram business processes. We’d have to make time period an explicit dimension of the new Matrix, and let the other dimension be either channel, contact category, or both combined. (The combination could be treated by defining a column or “lane” for each contact category, and using different colored bubbles within each lane to represent different channels.) I don’t know whether I’ll have time to actually build a sample version of this and can’t quite prejudge whether it will work: it sounds like it might be too complicated to understand at a glance.
Of course, whether any solution “works” depends on the goal. Client X Client CEO Michael Hoffman was actually happy with the version I created yesterday, since he only wanted to illustrate the point that it’s possible to predict what customers at one stage in their lifecycle are likely to do next. The details of timing are not important in that context.
We’ve also been discussing whether the Matrix should illustrate contacts with a single individual (presumably an ‘average’ customer or segment member) or should show contacts for a group of customers. In case that distinction isn’t clear: following a single customer might show one store visit, one purchase and one service call, while a group of fifty customers might make fifty store visits, ten purchases and three service calls. Lifetime value results would be dramatically different in the two cases.
I’ve also toyed with a display that adjusts the contact probabilities based on the selected time period: to continue the previous example, the probability of any one customer making a service call is 3 in 50 at the time of a store visit, but 3 in 10 at the time of a purchase. Decisions made at different points in time need to reflect the information available at that time. Adjusting the probabilities in the Matrix as the time period changes would illustrate this nicely.
Note that all these different approaches could be derived from the same database of transactions classified by channel, contact type, and time period.
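To make that last point a bit more tangible, here is a minimal sketch of how both the group-level counts and the time-adjusted probabilities could be pulled from one such transaction table. This is my own illustration, not part of the demonstration system, and every field name and record in it is invented.

```python
from collections import Counter

# One row per contact, classified by the dimensions discussed above.
transactions = [
    {"customer": 1, "channel": "store", "contact": "store visit",  "period": 1},
    {"customer": 1, "channel": "store", "contact": "purchase",     "period": 2},
    {"customer": 1, "channel": "phone", "contact": "service call", "period": 3},
    # ... more rows for the other customers in the group
]

def group_view(rows):
    """Group of customers: total count of each contact type
    (e.g. fifty store visits, ten purchases, three service calls)."""
    return Counter(r["contact"] for r in rows)

def contact_probability(rows, contact_type, after_contact):
    """Share of customers who, having made `after_contact`, later make a
    `contact_type` contact -- 3 of 50 store visitors vs. 3 of 10 buyers."""
    first_reached = {}
    for r in rows:
        if r["contact"] == after_contact:
            c = r["customer"]
            first_reached[c] = min(first_reached.get(c, r["period"]), r["period"])
    if not first_reached:
        return 0.0
    made_later = {r["customer"] for r in rows
                  if r["contact"] == contact_type
                  and r["customer"] in first_reached
                  and r["period"] > first_reached[r["customer"]]}
    return len(made_later) / len(first_reached)
```

The single-customer view, the group view, the swim-lane layout and the probability adjustments are all just different queries and displays against the same rows.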
Obviously we can’t pursue all these paths, but it’s worth listing a few just as a reminder that there are many options and we need to consciously choose the ones that make sense for a particular situation.
Tuesday, November 14, 2006
Waltzing Through Time with the Customer Experience Matrix
One of my favorite business truisms is “you manage what you measure.” The 21st century corollary may well be “you manage what you visualize,” since nearly every modern reporting system relies on graphs and diagrams to present information that makes sense and highlights priorities as efficiently as possible.
The Customer Experience Matrix is part of this trend, since its core promise (though not the only one!) is to build a single picture showing all customer contacts across all channels and life stages. The simple act of visualizing those contacts gives managers a first step towards controlling, coordinating and ultimately optimizing them.
But how exactly do you present those contacts? Loyal readers of this blog are presumably familiar with the basic Matrix layout: channels down the side and contact categories across the top. The definition of channels is fairly intuitive, but contact categories can be a bit slippery. We usually call them “life stages”, suggesting they follow a linear sequence. But they are really events that can happen in different orders and even be repeated. For example, customers may ask questions about a product before, during, or after the actual purchase.
This means that any attempt to plot a customer’s course through the Matrix ends up with a scribble of leaps, loops, cycles and branches as customers jump from one contact to the next. There is nothing wrong with this: the world is truly that complex. But it’s hard to draw neatly.
We’ve been working lately on a demonstration system to deal with this. The practical business purpose is to help managers identify the linkages between current and future events, so they can see where improved customer treatments would have the greatest benefit. This doesn’t tell the managers what those treatments should be, but it does let them know where the opportunities lie. Since managers’ time is limited, it’s important that they focus their energies on the most productive possibilities.
The demonstration uses a standard Customer Experience Matrix that is linked to a database of transactions classified in the two standard Matrix dimensions (channel and contact category) plus a dimension of time. Since we’re working with individual customers, time is measured relative t
Monday, November 13, 2006
A Short Tale about The Long Tail
I picked up a copy of Chris Anderson’s The Long Tail this weekend for my son, only because the local bookstore didn’t have any more substantive marketing books available. (Yes, there is irony here: I purchased a book about unlimited choice because my choices were limited.) There was only time to scan a few chapters before I gave it to him, so I can’t tell whether the book gives marketers any advice on how to deal with the phenomenon it describes. (More irony: a good search mechanism would have answered my question more efficiently, but all the book could offer was an old-fashioned table of contents and perhaps an index.)
Of course, the question I really care about is how the Long Tail relates to Customer Experience Management. As it happens, the book does mention “slots” in the physical sense of shelf space filled with products. This is slightly different from but definitely related to the way we at Client X Client use “slots” to describe opportunities to present a message to a customer. Both uses of the term agree that the number of slots is virtually infinite, but the focus of the Long Tail seems to be on helping customers find the products they want, while the crux of Customer Experience Management is choosing messages and other experience components without the customer necessarily making any effort. That is, while Long Tail marketing is presumably about better search tools, niche information sources and specialized communities, Customer Experience Management is about tailoring all marketing and operational activities to customer needs. (Or, more precisely, about treating customers so they act in ways that meet business needs.)
Long Tail economics do expand the options available for customer experiences, and of course they may require changes in the business models of particular companies. But it doesn’t seem they change anything fundamental about how we analyze and optimize the customer experience. I’ll let you know if reading the rest of the book changes my mind.
Thursday, November 09, 2006
WebSideStory Sponsors Sound Search Advice
I wasn’t really planning to write about Web analytics for the third day in a row, but the most interesting thing to come across my desk yesterday, apart from the cat, was a white paper “The Other Search: Making the Most of Site Search to Optimize the Total Customer Experience” written by the Patricia Seybold Group for Web analytics vendor WebSideStory (www.websidestory.com). The paper is available here on the WebSideStory site.
The paper starts with six pages arguing that in-site search is an important way of serving visitors and gathering insights into their interests. It's a bit of overkill but the details are helpful.
Seven more pages then describe a “5-Step Plan to Boost your Results”. (I don’t know why “your” is not capitalized but it's printed that way twice.) The steps are:
1. “Baseline Current Performance and Offers”: assess site performance and staffing.
2. “Cross-Pollinate Internet and Site Search”: identify common in-site search terms and use them in your paid search advertising (see the small sketch after this list).
3. “Monitor Site Search Traffic Daily—and React”: adjust offers, site contents and the search results themselves based on visitor searches.
4. “Test Impact of Changes”
5. “Track Site KPIs”
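To make step 2 a little more concrete, here is a minimal sketch of the kind of query it implies. This is my own illustration rather than anything from the paper, and the sample data is invented.

```python
from collections import Counter

def top_site_search_terms(queries, n=20):
    """Rank the phrases visitors type into the site's own search box;
    the most frequent ones are candidates for paid-search keywords."""
    counts = Counter(q.strip().lower() for q in queries if q.strip())
    return counts.most_common(n)

# `queries` would come from the site-search log; these values are made up.
sample = ["running shoes", "Running Shoes", "gift card", "returns policy"]
print(top_site_search_terms(sample, n=3))
```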
Next comes a brief discussion of KPIs (Key Performance Indicators) including a "starter set" of two dozen specific metrics. The main point is that KPIs measuring site performance should be tied to larger corporate objectives. An interesting secondary point is that some KPIs measure site goals while others measure visitor goals, and both are important.
The final section provides a pretty good list of functions that site search engines should provide to meet these objectives.
This is a very good paper: well written, lots of specific suggestions, and no blatant distortions due to vendor sponsorship. Enjoy.
More Thoughts on Web Analytics
Yesterday I wrote about the strategic choices facing Web analytics vendors as their core product matures. They have three basic choices:
- keep focused on Web analytics, improving their products and appealing to the most demanding users as a “best of breed” solution.
- expand into related Web functions, such as offer targeting, in-site search, content management, search engine marketing, and campaign analysis.
- expand into non-Web areas, in particular multi-channel customer analytics.
Specialized vendors in many other fields have faced similar choices in the past. They have generally found that competitive products from non-specialist vendors continually improve, reducing the number of companies willing to pay extra for an advanced "best of breed" solution.
Web analytics vendors will face the same dynamic. They may avoid immediate problems simply because so many companies have yet to purchase their first Web analysis system. This means sales can increase even if the vendor loses market share. There may also be a sizeable segment of customers who want only a bare-bones, stand-alone solution. But while serving this group could be a viable business, these customers are likely to be highly price sensitive and hence not terribly profitable.
This all suggests that some sort of expansion is inevitable for companies that wish to remain independent. A quick look at the leading Web analytics vendors (Coremetrics, Omniture, WebTrends, WebSideStory) shows that they are expanding into some or all of the Web-related areas listed above. It makes perfect sense: the customer behavior data that they extract for Web analysis is the foundation on which the other applications are built. (Content management is something of an exception, but is a logical extension because the other applications need to be aware of what content is available.)
Expansion into non-Web analytics is a less popular choice, although Coremetrics (in combination with IBM Websphere) and WebTrends are making some efforts in that direction. This is definitely a harder path to follow, since it means selling to customers outside the vendors’ existing user base. But, for exactly this reason, it also allows the vendors to expand the scope of their involvement with clients and potentially to place themselves at the center of enterprise customer management efforts. And, as yesterday’s post noted, the large volume and complexity of Web data has already forced the Web analysis vendors to build tools so powerful that extending them to other channels should be relatively easy (at least on a technical level).
Of course, from my own perspective as someone concerned with Customer Experience Management, I have a strong vested interest in seeing cross-channel customer analytics become more widely available. So I do hope the Web analytics vendors continue to pursue this option.
Wednesday, November 08, 2006
What's Next for Web Analytics?
I recently heard a senior manager from one of the major Web analytics firms brag in public that specialists in Web analytics had “won” in their competition with general-purpose business intelligence vendors. This struck me as premature, to say the least. He might as well have raised a “Mission Accomplished” banner and snarled “bring ‘em on.”
The substance of the vendor’s argument was that the general-purpose vendors’ technologies cannot handle the high volume of data generated by Web logs and interactions, leading users to buy systems that are specifically designed for Web data.
I’m not so sure that conventional technologies are really incapable of supporting Web volumes, particularly since business intelligence vendors have several technologies to choose from. The growth in memory-based databases in particular seems to have removed several constraints that previously hindered business intelligence system performance.
But even if the technology premise is correct, the competitive battle is far from over. The major business intelligence vendors have the money to buy Web analytics vendors should they need to, and a history of making such acquisitions when necessary to fill out their product lines. In fact, Web analytics vendors including Sane Solutions, Urchin and ClickTracks have already been purchased, although none by business intelligence companies.
Realistically, the Web analytics vendors will face the same choice as all other specialized software developers: expand into other areas or be acquired by someone else. (Eat or be eaten.) Whatever paths they take, the race is far from over.
Tuesday, November 07, 2006
TeaLeaf Captures Customer Experience, but Doesn't Tame It
Could any white paper title grab my attention more quickly than “The Five Essentials of Customer Experience Management”? Probably not. Even knowing the paper is from TeaLeaf Technology (www.tealeaf.com), which captures a browser-eye view of Web sessions and is therefore limited to online experiences, doesn’t really dim my curiosity. After all, Web experiences are important in themselves, and learning how to manage them might provide insights that carry over to other media.
Alas, the paper takes a narrow view of online experience management, focused entirely on the problem that TeaLeaf solves. The five “essentials” in the paper’s title are just elaborations of the TeaLeaf theme:
- “visibility”: capture and record individual experiences
- “detection”: identify problems by inspecting online experiences
- “analysis”: diagnose issues by reproducing the experience
- “reproduce”: reduce support costs by reproducing the experience
- “positive experience”: detect obstacles by reviewing online experiences
I may be overstating the similarity of these five items but not by much.
The gap between capturing Web sessions and fully managing the Web experience should be self-evident. The obvious question for TeaLeaf is, how will managers make sense of so much detailed data? Some higher organization is essential to identify patterns and common issues. TeaLeaf does address this with a facility to define expected event flows within a Web session and identify deviations from those flows as potential problems. I believe it can also flag sessions that generate particular types of messages, including error messages. Managers would then examine these sessions to find the conditions that led to the problem. The white paper touches on these functions but does not describe them in detail.
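To give a rough sense of what that kind of checking involves, here is a minimal sketch of my own; TeaLeaf’s actual facility is surely far more sophisticated, and the flow and event names below are invented.

```python
# Expected flow for one kind of visit, in order; other events may occur between steps.
EXPECTED_FLOW = ["view cart", "enter shipping", "enter payment", "confirm order"]

def session_problems(events):
    """Flag a session that deviates from the expected flow or shows error messages.
    `events` is the ordered list of (event_name, message) pairs for one session."""
    problems = []
    step = 0
    for name, message in events:
        if "error" in message.lower():
            problems.append(f"error during '{name}': {message}")
        if step < len(EXPECTED_FLOW) and name == EXPECTED_FLOW[step]:
            step += 1
    if 0 < step < len(EXPECTED_FLOW):
        problems.append(f"session abandoned after '{EXPECTED_FLOW[step - 1]}'")
    return problems

# Example: a visitor who reached the payment page, hit an error, and left.
print(session_problems([
    ("view cart", ""),
    ("enter shipping", ""),
    ("enter payment", "Error: card declined"),
]))
```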
Of course, truly managing the customer experience requires measuring the impact of each interaction on future customer behavior. TeaLeaf isn’t built to do this, so it makes no sense to complain that it doesn’t. TeaLeaf is a tactical tool that helps to address a limited set of problems. In that sense, it can indeed help to improve the customer experience, so long as it’s placed within a larger strategic framework.
Monday, November 06, 2006
Innovative Systems Pushes Prototyping
Let’s say you need to integrate some customer data. What’s it going to cost you?
You might think a white paper titled “Insider’s Guide to the True Cost of Data Quality Software” from Innovative Systems, Inc. (www.innovativesystems.com) would provide some useful insights. And I suppose it does, by listing five cost categories that make sense:
- vendor evaluation and selection
- contract negotiations
- installation, training and software tuning
- ongoing use and maintenance
- licensing and subsequent year fees
But the real thrust of this paper is that you can save a lot of time and money by skipping a structured selection process and building a prototype.
It just about goes without saying that prototyping is a particular strength of Innovative Systems. According to the white paper itself, its products achieve the “fastest implementation in the industry” through “unmatched knowledgebases” (no pun intended, I suspect) that minimize custom tuning.
I’m not so sure I accept this particular claim, since several other vendors also offer specialized knowledgebases. But my real quarrel is with the notion of prototyping as a “faster, more cost-effective alternative to a formal vendor review.”
Simply put, a prototype is not a thorough test of a product’s abilities. In most cases, it is limited to a sample of data from a subset of sources with a cursory review of the results. So prototypers learn quite a bit about the system set-up, user interface and reporting, but much less about true quality and scalability. Although I strongly encourage prototyping as part of a structured selection process, it does not replace careful requirements definition and evaluation of alternative products against those requirements. Excluding products that cannot perform a quick prototype may make sense for Innovative Systems, but not for companies that want to ensure they find the best system for their needs. Similarly, minimizing the cost of a solution without taking into account the quality of its results is short-sighted. After all, the cheapest solution is to buy nothing at all—but that obviously does not give an adequate result.
Friday, November 03, 2006
More on Mobile Phones
I saw a fascinating presentation yesterday by Thomas Fellger, CEO of iconmobile GmbH (www.iconmobile.com), a Berlin-based developer of applications and services for mobile communications. The notion I found most striking was the cell phone as a consumer’s “remote control” to access other media. A simple example was sending an SMS message to register for a loyalty program, which would in turn trigger messages by email, access to a Web site, and so on. The cell phone itself could also act as a replacement for a membership card, either by sending additional messages when a customer made a related purchase or by displaying a bar code or number to be scanned. Fellger also described vastly more elaborate and creative approaches to using the mobile phone to encourage interaction among consumers.
This reinforces the position I took yesterday that the cell phone is a device that interacts with several channels, rather than a channel by itself. It also reinforces yesterday's comments on the importance of location and context. But, to be honest, yesterday’s post missed the notion of the mobile phone as a new medium in its own right. Mobile communications present unique capabilities and opportunities that marketers will learn over time to exploit. I also didn’t give enough emphasis to how mobile phones shift control to their owners, letting them initiate communications with marketers (as opposed to just receiving them) and collaborate with other consumers.
Thursday, November 02, 2006
Are Smart Phones a Channel?
Today’s “smart phones” can receive text messages and emails, view Web pages and videos, run software applications—and, oh yes, make phone calls. This raises two questions: (1) which phone is coolest and what excuse can I find to get it? and (2) how do we incorporate smart phones into Customer Experience Management? The first question may be more interesting but I’ll focus on the second anyway.
The natural tendency is to see smart phones as a channel. After all, a campaign management system sees text messages as another output format (SMS), the same as it sees email and direct mail. Since those other two are clearly channels, SMS must be one as well.
The analogy was already a bit weak because mobile phones have always been able to receive messages in a second format: voice. It breaks down totally in the case of smart phones, where the device can receive many different kinds of messages. Considering the phone as a channel makes no more sense than considering your office desk as a channel, since you can receive messages there too.
In fact, as the desk analogy suggests, the real key to smart phones is location. What matters about a message is not the device that delivers it, but the context in which it is received. The right message for someone at home may be different for that same person at the office, and different again when they’re traveling or in a retail store. Whether the message is delivered in email or SMS or Web page or voice format, or whether it comes through a smart phone, PC or kiosk, makes less difference and is more than anything a matter of customer preference.
From a Customer Experience Management viewpoint, the real trick is to capture the context. It may be provided by the underlying technology (cell phone location), provided by the customer (if you ask them), inferred from their behavior, appended from outside sources (local weather reports) or inherent in the delivery mechanism (an ATM machine or shelf-talker). If context is known during the interaction, you can use it to adjust treatments as they occur. Even if you don’t find it out until later, context provides data for analysis and better predictions of future customer behavior.
Context and location have always been standard metadata within the Customer Experience Matrix, so the Matrix can accommodate smart phones without missing a beat. This illustrates one of the things I like about the Matrix: a Matrix-based system would have been capturing context all along, so the necessary history would be available immediately when you needed it to start analyzing context-related behavior to build context-based treatment rules. Otherwise, you would have had a considerable delay while you set up the data capture mechanisms and then waited to gather enough information to be useful.
The multi-format nature of the smart phone also illustrates why treatment rules should exist outside of channel systems. Since the smart phone can deliver text, email, Web and other messages, which likely originate in separate channel systems, it’s critical that the messages provided by those systems be consistent. A customer should never get a different price depending on which channel she is using, and it’s even more embarrassing if the different prices are delivered on the same physical device. Only a centralized decision-making system can ensure that all the channel systems give the same result.
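As a minimal sketch of what I mean by centralized decision-making (my own illustration; the product, prices and rules here are invented), every channel system would call the same function and pass along whatever context it knows, so the price cannot differ by channel or device:

```python
PRICES = {"sku-123": 49.00}  # invented price table

def decide_offer(customer_id, product_id, context):
    """One decision point shared by the email, SMS, Web and kiosk systems.
    `context` carries whatever is known: location, time of day, weather, etc."""
    price = PRICES[product_id]
    message = "Standard offer"
    if context.get("location") == "in_store":
        message = "Show this screen at the register"
    elif context.get("traveling"):
        message = "Free shipping to your home address"
    return {"price": price, "message": message}

# Every channel system calls the same function, so the price cannot vary by device.
email_offer = decide_offer("cust-1", "sku-123", {"location": "home"})
sms_offer = decide_offer("cust-1", "sku-123", {"location": "in_store"})
assert email_offer["price"] == sms_offer["price"]
```

The context argument is also where the location, weather and other signals discussed above would flow in.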