Wednesday, December 27, 2006

List of Marketing and Branding Blogs

Here is a list of marketing and branding blogs, courtesy of Chris Brown at Branding and Marketing. This is part of a project to increase traffic for less-well-known blogs in the area. See The Viral Garden for more information. I haven't looked at every blog on this list, but I did remove quite a few from the original that didn't seem relevant.

Being Peter Kim
Pow! Right Between The Eyes! Andy Nulman’s Blog About Surprise
Billions With Zero Knowledge
Kinetic Ideas
Unconventional Thinking
The Copywriting Maven
MapleLeaf 2.0
Shotgun Marketing Blog
Customers Rock!
Two Hat Marketing
The Emerging Brand
The Branding Blog
Drew's Marketing Minute
Golden Practices
Tell Ten Friends
Flooring the Consumer
Hee-Haw Marketing
Scott Burkett's Pothole on the Infobahn
On Influence & Automation
Servant of Chaos
Small Surfaces
Presentation Zen
Dmitry Linkov
John Wagner
Nick Rice
CKs Blog
¡Hola! Oi! Hi!
Shut Up and Drink the Kool-Aid!
Social Media on the fly
Marketing Nirvana
Multi-Cult Classics
Logic + Emotion
Branding & Marketing
Bob Sutton
SMogger Social Media Blog
Freaking Marketing
Really Small Fish
The Orange Yeti
What's your brand mantra?
John Windsor
Experience Curve
Josh Hallet - Hyku
Henry Jenkins
Occam's Razor
Juice Analytics
Rimm-Kaufman Group
Sports Marketing 2.0
Business Enterprise Management
Digital Solid
Acxiom Direct
Marketing Measurement Today
Marketing Geek
Customer Experience Matrix

Friday, December 22, 2006

Why Smartphones Matter

It may seem that I’m obsessed with smartphones, but I’m truly not. I was about to drop the topic when my wife came home yesterday from meeting a new client who turned out to be—you guessed it—developing smartphone content.

So now I’m thinking about it again. Just what makes me feel smartphones are so important? In part, it’s the “third screen” notion I wrote about yesterday: the idea that the smartphone really has the potential to be as important a device as a TV or personal computer. This means there are still opportunities to develop applications that will make some people a lot of money and enrich the lives of many others. Naturally, this gets my attention.

But there’s more to it. When I really think about it, what I find intriguing about smartphones is their potential to break down the barriers that have traditionally separated marketers from their prey…er, I mean, their customers. In conventional consumer marketing, companies knew basically nothing about individual customers. Even in traditional database marketing, the only information readily available about individuals was what you sent them and what they had purchased from you. This gives a very narrow, episodic view of the customer’s life. (Yes you could enhance a file with various demographic and some financial information, but it was all still pretty general and often unreliable.) On the Web, you might know something about browsing behavior but typically just within your own Web site. A search engine or ISP might have a frighteningly complete view of personal behavior, but privacy rules and business interests have so far prevented that information from being shared with marketers.

The smartphone has three characteristics that mark a radical change in the data available to marketers: real-time location, interaction capabilities, and always-on.

- Always-on is critical because it means the customer is always accessible (unless they choose not to be) so it’s possible to initiate interactions without waiting for that initial customer contact. Obviously this has to be done in a non-intrusive manner; I’ll get to that in a minute.

- Location provides what we at Client X Client refer to as “context”: an understanding of where the customer is physically, which tells a great deal about her likely situation at that moment: shopping or working or driving or home; in-town or on a business trip or on vacation; weather and traffic conditions; advertisements she’s likely to have seen recently; out partying or working late. As that last example suggests, time of day and day of week in combination with location provide even more insight.

- Interaction capabilities add more than the obvious advantage of giving you a way to reach the customer or for her to reach out to you. They enable other types of data capture, such as reading a bar code by capturing it with the phone’s camera, displaying information on the phone’s screen for someone else to capture, or exchanging codes via SMS messages. Plus, of course, there are all kinds of creative activities, such as coordinating group events and viewing downloaded content, that smartphone communication makes possible.
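To make the location-plus-time idea concrete, here is a toy sketch of rule-based context inference. The location categories, rules, and function name are my own invention for illustration, not any real carrier or platform API:

```python
from datetime import datetime

# Hypothetical rule-based context inference: combine a coarse location
# category with time of day and day of week to guess the customer's
# likely situation at that moment.
def infer_context(location_type: str, when: datetime) -> str:
    hour = when.hour
    is_weekday = when.weekday() < 5  # Monday-Friday
    if location_type == "retail":
        return "shopping"
    if location_type == "office":
        return "working late" if hour >= 20 else "working"
    if location_type == "highway":
        return "commuting" if is_weekday and hour in (7, 8, 17, 18) else "driving"
    if location_type == "nightlife":
        return "out partying"
    return "at home" if location_type == "residential" else "unknown"

print(infer_context("office", datetime(2006, 12, 22, 21, 30)))  # working late
```

The point of the sketch is only that location plus time yields a richer label than either alone; a real system would obviously need far more categories and probabilistic rules.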

I hope right now you’re saying to yourself, “Hang on there, cowboy—you’re not planning to track every movement of every customer at every moment, are you? Even if the phone companies would share that with you, it would be one heck of a privacy invasion.” Indeed it would, and no I’m not.

But there are plenty of situations where the customer would welcome your involvement, or even be happy to have you lurking in the background ready to help. The obvious example is automotive telematics services like General Motors OnStar, which call your car’s built-in cell phone when an airbag deploys to ask if you need help. There are many other, less dire, situations where consumers might also like to hear from businesses or simply have the business be aware of their location.

Yes, these could be Starbucks coupons as you're walking down the street. I'm as sick of that example as you are, but am required by law to mention it. More creatively, imagine a restaurant location service that automatically lists the restaurants nearest to your current location when you call. That won’t always be the question you want answered, but it will often reduce typing on those tiny little keys. Location awareness would also simplify finding radio stations, gas stations and public restrooms. Maybe a cab-calling service could automatically connect you with whichever company can reach your current location the soonest, or let you balance speed against price. How about simplifying airline check-in by doing it via phone—the boarding pass could be displayed on the screen instead of printed, which might actually be more secure than the current system. Could you check into a hotel by smartphone and have the phone act as your room key?

I’m not so sure about that last one. Hotels might want human contact during check-in to cement relationships with their customers. Such considerations are important when deciding which services to deploy. But I’m already seeing self-service check-in at hotels, so apparently some have already decided the human contact is expendable.
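The restaurant-lookup idea from a few paragraphs back is, at bottom, just a sort by great-circle distance from the phone's reported position. A minimal sketch, with invented restaurant names and coordinates:

```python
import math

# Great-circle distance between two lat/lon points via the haversine formula.
def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))  # Earth radius ~6371 km

# Hypothetical location service: rank restaurants by distance from the phone.
def nearest(restaurants, phone_lat, phone_lon, n=3):
    return sorted(restaurants,
                  key=lambda r: haversine_km(phone_lat, phone_lon, r[1], r[2]))[:n]

places = [("Midtown Grill", 40.754, -73.984),
          ("Harbor Cafe", 40.702, -74.012),
          ("Uptown Bistro", 40.803, -73.965)]
print([name for name, *_ in nearest(places, 40.758, -73.985)])
# ['Midtown Grill', 'Uptown Bistro', 'Harbor Cafe']
```

That's the whole trick: the phone supplies the coordinates, so the customer never types a location on those tiny little keys.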

The applications I've suggested are technically possible today and may already exist. So much the better. The point is, there are many situations where a smartphone could provide real value to customers, either by speeding access to useful information or by streamlining an operational business process. These are not intrusions that people will find annoying, but benefits that they might actually pay for—even though in many cases businesses would gladly bear the cost. The key to all this, of course, is informed consent.

What it all comes down to is the customer experience. (You saw that coming, didn’t you?) Smartphones are intriguing because they open up new types of customer experiences, including some significant improvements over existing models. In that sense, they represent a huge opportunity.

OK, maybe I am a little bit obsessed.

Thursday, December 21, 2006

Smartphones Are Yet Another Reason for Customer Experience Management

Yesterday I wrote about the opportunities presented by smartphones for radically new business intelligence applications. The December 11 issue of eWeek had a special advertising section sponsored by VeriSign Inc. that frames the topic more elegantly with a notion of “three screens”: television, computer and smartphone. (“New Opportunities for Three Screens & Beyond”, eWeek, December 11, 2006. In a presumably unintended irony, I could not find a copy online.)

VeriSign argues that consumers will use all three media as part of “one interactive communication and media system.” This implies both that they “expect to access any content at any time on any device” and that different devices will be used in different, complementary ways based on their nature. It points to several trends including:

- the distinction between the screens is blurring as TV becomes more interactive and PCs act like telephones
- social networking sites are creating new media networks
- more content is becoming available on more devices
- synergies across devices create a more interactive, engaging environment (e.g., text message polling on “American Idol”)
- physical and digital worlds will interact (e.g., location-based coupons on mobile phones)

From my point of view, these are all gravy. I was happy with the notion of “three screens” itself, since it nicely gives the smartphone parity with the TV and PC as a device worth managing in its own right.

Even closer to my heart, the paper uses the language of “experience”—as in, “three-screen experience” and “users want the experience to be simple, fast, reliable, and secure”. I didn’t see the phrase “customer experience” anywhere, but the connection is still there. In fact, coordinating activities across three screens really requires a customer experience management framework (for example, Client X Client’s Customer Experience Matrix) to be effective. This means the three-screen view can be part of the case for customer experience management: it’s another important challenge (or opportunity) that companies can only meet by adopting customer experience management techniques. Which is what they need to hear. A lot.

Wednesday, December 20, 2006

Business Intelligence on Smart Phones: Not Just Humbug

I’m a bit behind on my reading so I just spotted a piece in the December 11, 2006 issue of InformationWeek about accessing business intelligence software on a mobile phone. (See “Power Of A Data Warehouse In The Palm Of Your Hand” available here.) The author is highly skeptical of the notion: “It remains to be seen how many mobile professionals actually need to slice and dice data from handheld devices. Data analysis has long been the realm of data warehouses and the fattest of client computers, not small screens and keypads.” On the other hand, she notes that Information Builders has already released a mobile-enabled product, Cognos is planning one for early next year, and Business Objects is working on a prototype.

In a world where people view TV shows on their iPods and make Web purchases on their cell phones, it’s dangerous to predict what people won’t do on small screens. Yes, it’s unlikely that heads-down statisticians will stream from their cubicles to data mine on a park bench: they need those big displays, powerful workstations and fast network connections. But there are plenty of prebuilt analyses that can be called up with a couple of keystrokes. Most would relate to limited situations, such as activities with a particular customer or product. As smart phones become more powerful, weary-shouldered road warriors will be increasingly eager to move to the phone the applications that still force them to carry a laptop.

The more intriguing question is what new business intelligence functions a smart phone platform would make possible. Smart phones have at least two capabilities that regular computers do not: location awareness and visual input. It’s easy to imagine how an insurance adjuster or real estate agent might combine these with business intelligence to do on-the-spot analysis of damage estimates, fraud probability, or market value. A geo-spatial application might help field workers rearrange their routes to accommodate changes in schedule or traffic conditions. Even within a confined space such as a shopping mall, office building or airport, analytical software on mobile devices might send cashiers, gate agents and other workers to where they are needed most, improving both employee productivity and customer experience.

There are certainly other, more creative applications. The point is that a smart phone is more than a computer with a really tiny keyboard. Business intelligence vendors who want to take full advantage of the smart phone platform will not just convert their existing applications, but add new and unique ones that were never before possible.

Tuesday, December 19, 2006

Onyx Reinvents Itself as a Process Manager

Onyx Software is a veteran player in mid-market CRM. It’s been years since I looked at their software. When they were purchased earlier this year by M2M Holdings, which already owned mid-market ERP vendor Made2Manage Systems, it seemed like still further evidence that traditional on-premise CRM software is a dying breed.

But my never-ending quest for white papers did turn up a piece from Onyx entitled “Customer Process Management: The Real-time Enterprise depends on the merging of CRM and BPM”, available here. This title includes many of my favorite buzz words, and in particular the conjunction of CRM and BPM (business process management), a particular interest around Client X Client. (In fact, we own the domain, which you can visit to see one of the ugliest Web sites ever created. No, I don’t know who the guy in the picture is.)

The Onyx paper itself turns out to be quite good. It argues, under nice big headings so you don’t have to work very hard, that (a) processes involving a customer are demanding, (b) customer-facing process management requires a new approach to BPM, and (c) customer process management optimizes business results. Specifically, Onyx says, customer-facing processes are different because they are constantly changing and integrate with data and other processes throughout the company.

I’m not sure those characteristics are really unique to customer-facing processes, but they’re certainly important. The paper also shoehorns in the need for well-documented processes to comply with government regulations such as Sarbanes-Oxley—a valid if somewhat peripheral argument.

This is a classic white paper strategy: define a need readers may not have realized they had, create a bit of urgency, and offer a solution. What was intriguing given my outdated knowledge of Onyx was that they would consider themselves the answer to this problem. But it turns out the product has been totally rebuilt while I wasn't looking, using XML for both data storage and business rules. Per their Web site:

“Onyx software uses a unique Dispatch Object Model. The Onyx Transaction Manager is the dispatch engine which queries The Onyx Enterprise Dictionary (the metadata repository) to determine the series of standard and custom steps to execute for any given business transaction. This model provides the abstraction layer necessary for true flexibility and eliminates the need to modify or replace source code to change workflow, extend the logical object model or change the physical schema.”
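As a rough illustration of the pattern that quote describes, a dispatch engine that looks up transaction steps in a metadata repository instead of hard-coding them might be sketched like this. The names and steps are invented for illustration, not Onyx's actual objects:

```python
# Toy metadata-driven dispatch: the steps for each transaction type live
# in a dictionary (standing in for a metadata repository), so changing a
# workflow means editing data, not modifying source code.
def validate(ctx):   ctx["validated"] = True
def save(ctx):       ctx["saved"] = True
def notify_rep(ctx): ctx["notified"] = True

STEP_REGISTRY = {"validate": validate, "save": save, "notify_rep": notify_rep}

# The "enterprise dictionary": transaction type -> ordered list of step names.
DICTIONARY = {
    "create_lead":   ["validate", "save"],
    "escalate_case": ["validate", "save", "notify_rep"],
}

def dispatch(transaction_type, ctx):
    for step_name in DICTIONARY[transaction_type]:
        STEP_REGISTRY[step_name](ctx)
    return ctx

print(dispatch("escalate_case", {}))
# {'validated': True, 'saved': True, 'notified': True}
```

The abstraction layer is the dictionary lookup: adding or reordering steps touches only the metadata, which is exactly the flexibility the white paper is selling.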


What the white paper does, of course, is play to the strength of on-premise solutions, as systems that are more easily customized and integrated than their hosted competitors. (See my posts of November 30 through December 4, 2006 for more on this subject.) The positioning also means Onyx continues to be an option for companies that are not using the M2M ERP system.

By coincidence, Onyx parent M2M Holdings announced yesterday that it is acquiring KNOVA, a leader in knowledge management systems for customer service. That’s a reasonable extension of their product line, and another sign that there’s still some life in the old Onyx business.

Monday, December 18, 2006

Tools to Measure Buzz and Distinguishing Customer Centric Marketing from Customer Experience Management

Sunday’s The New York Times had a long article on buzz measurement vendor Nielsen BuzzMetrics (“Brands For the Chattering Masses”, Sunday Business, December 17, 2006, page 1). Naturally this caught my eye, since I had been pondering how to measure buzz for Customer Experience Management (see entry for December 8). I haven’t decided whether to do a detailed examination of this particular topic, but did take a quick pass through the Web sites of the major vendors. In addition to Nielsen BuzzMetrics, these include:

- Umbria,
- Cymfony,
- Brandimensions BrandIntel,
- Biz360,
- MotiveQuest, and
- Dow Jones’ Factiva.

If you're interested, the Cymfony Web site offers a free copy of a Forrester Research report on “brand monitoring systems”. Like most analyst reports, this provides some useful perspective but not much detail.

These products do more than simply count mentions in the manner of Google Trends. They also characterize the mentions in terms of tone, hostility, etc., and give some sense of how brands relate to other concepts within the public consciousness. This calls for some serious text analysis technology, often supplemented by data visualization and/or substantial human effort. The technical and intellectual challenges are quite interesting, although tracking brands is less inherently exciting than, say, tracking terrorists (which uses many of the same techniques).
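To give a sense of the very simplest version of tone characterization (the commercial tools are far more sophisticated, and my word lists here are invented), a toy keyword scorer looks like this:

```python
# Toy buzz-tone scorer: count positive vs. negative words per mention.
# Real products use full text analytics; this only illustrates the idea.
POSITIVE = {"love", "great", "recommend", "reliable"}
NEGATIVE = {"hate", "broken", "awful", "overpriced"}

def tone(mention: str) -> str:
    words = set(mention.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

mentions = ["I love this brand and recommend it",
            "awful service and overpriced coffee",
            "bought one yesterday"]
print([tone(m) for m in mentions])  # ['positive', 'negative', 'neutral']
```

What separates the serious vendors from a toy like this is handling negation, sarcasm, and context, which is precisely where the substantial human effort comes in.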

Speaking of relationships among concepts, I did want to clarify one point from last week’s posts on the attitude of marketers towards customer centricity. When database marketers talk about being customer centric, they have in mind something very specific: planning contact streams that are optimized around the customer, as opposed to planning campaigns designed to sell particular products. Customer Experience Management is a vastly broader concept, involving all aspects of a company's operations.

Both ideas do share the notion of using customer value as the chief way to evaluate business decisions. Marketing could adopt customer-centric contact management without the entire company adopting customer experience management. To do this, marketing would have to overcome the political obstacles of a product-centric culture. That’s a tall order, but not insurmountable assuming the financial benefits can be proven.

Friday, December 15, 2006

Customer Centricity Isn't Marketers' Major Concern--Yet

Reality can be so annoying. Contrary to my initial impression that “customer centricity” was the major topic at the National Center for Database Marketing conference this week, a close look at the program shows just four of the 45 sessions had this as their focus. The most common topic by far was data analysis and modeling—based on descriptions, I classify 15 sessions in that group. This includes three related to multi-variate testing (e.g. conjoint, discrete choice or factorial designs) and four on Web analytics and targeting.

The next most popular topic was system development and technologies, with ten sessions. This conflicts with another of my informal impressions, that the show had shifted from “how to build a system” to “what to do with the system you’ve built”. Of course, I could argue there used to be many more “how to build” sessions, and that’s probably true. But the larger point remains: even though people have been building database marketing systems for many years by now, it’s still hard and there are always new technologies to incorporate. So the topic will continue to be popular.

The balance of the conference was more fragmented. I counted six sessions on customer list acquisition and enhancement; five related to marketing programs such as loyalty and multi-channel coordination; four on being customer-centric; three on business-to-business marketing; and two concerned with marketing results measurement.

Of course, these are somewhat arbitrary categorizations. They reflect only what the event planners thought was important and not which sessions actually attracted attendees. And it is after all a database marketing conference, so the audience has specialized interests to begin with.

But despite all these caveats, it would be denying reality to claim that moving to customer-centricity is a common priority. Our experience at the Client X Client booth reinforces this: people were very interested in the dashboard technology we displayed, but less intrigued with the customer experience concepts we find so exciting. That doesn’t mean customer-centricity and customer experience management are unimportant or won’t become even more important in the future. But it does mean we have to balance long-term evangelism against marketers’ immediate needs and be sure to address both.

As an aside, the conference program does confirm my impression that the exhibit hall contained many more marketing service vendors than software companies. I count 37 service vendors and 17 software companies. I don't have figures from previous years but am quite certain the proportion of software companies used to be much higher. This clearly reflects the shake-out in the software business in the past few years, and probably also growth in marketing services. I suspect that service agencies are growing faster than the industry as a whole: it seems part of the general outsourcing trend, as well as recognition that the marketing technologies are getting increasingly sophisticated, and therefore harder to run in-house. We need to factor that into our plans as well.

Thursday, December 14, 2006

Making the Change to Customer Centricity

When I raised the question yesterday of how to apply customer-centric principles in a product-centric company, you may have thought I was being cute and already had an answer. Sorry. Even after thinking about it on the long trip home, I haven’t come up with a solution. The best I can give is some thoughts that seem promising.

I suppose there’s a preliminary question of whether customer-centricity is a goal worth pursuing. It’s surely not a goal in itself. Businesses exist to create value for their stakeholders, which you can define narrowly as the owners or more broadly as owners, workers, customers, neighbors, and larger community. Either way, the only reason to be customer-centric is because it creates value—or, more precisely, because it creates more value than other organizations. I’m willing to argue that is indeed true, at least if the alternative is a traditional product-based structure. But I’ll also argue that if some other approach creates just as much value, or you can get the benefits of customer centricity without actually being customer-centric, that’s just as good.

This is worth keeping in mind because it leads to the notion of “simulating” a customer centric organization. By this, I mean creating an organization that acts as if it’s customer centric, even if it truly isn’t. I know we’re headed into religious territory if we have to worry about what is “truly” customer-centric, and don’t want to go in that direction. But I bring up the notion of simulation because one potential transition strategy is to develop metrics that would apply to a customer-centric organization, even if the organization itself hasn’t changed. This should be technically possible and would certainly give managers a way to view things from the customer-centric perspective.

With luck, the metrics would highlight opportunities for improvement and management would want to pursue them. But it’s important to recognize that the metrics won’t have much motivational value if they are not tied to compensation plans, and that it’s bad practice to tie compensation to things people can’t affect. So simply compensating people on customer-centric metrics without changing the organization so they are able to affect those metrics would be a bad idea. On the other hand, just introducing the metrics could be a useful educational measure and part of a larger transition strategy. Nor is it unheard of to base some compensation on such metrics. Some companies already do this by tying compensation to customer satisfaction scores.

Another notion is to work within the product-based structure but incent some individuals to work in customer-centric ways. Perhaps you create a “customer advocate” within each product team who is compensated on cross-sales of other products. (These would be sales of other products made during the sales or service process of the original or parent product.) The customer advocate would be motivated to seek out cross sale opportunities, which would naturally lead to a broader examination of customer needs than is required to sell the parent product itself. The customer advocate position might eventually break free of specific products and exist as an independent function dedicated to promoting cross-sales in general. Since the customer advocate has an inherently customer-centric outlook, this could be a transition step towards a true customer-based organization.

A third notion, still without fundamental changes to the existing organization, might be a “customer experience manager” (or, less glamorously, “segment manager”) who is assigned to increase business from a particular customer segment. Best Buy did something along these lines with well-publicized success. (Click here for a press release.) The experience manager would develop programs that reconfigure or supplement existing activities to appeal to her target customers. Giving each experience manager a budget for testing, and then financing roll-outs based on test results, would provide the necessary mechanisms for customer-centric programs to grow naturally within the usual business processes. A variation of this approach is to start with a single segment—presumably high-value customers—and see what can be done for them in particular.

Of these three notions, the metrics approach is probably the easiest for a marketer to deploy: it’s only numbers and doesn’t require organization or incentive changes. But it’s also the easiest approach to ignore. Customer advocates and experience managers are more likely to yield hard proof of the value of customer-centricity, but they require senior management commitment even for a trial. Still, an initial trial could be quite small—maybe just a project rather than a position—so marketers could possibly sell it on that basis. The point is that real organizational change will happen only if it is shown to create real financial value, so you have to generate evidence one way or another.

I know this entry is already too long, but I wanted to quickly jot down a couple of other notions that occurred to me as I was preparing it. (That’s what happens when you’re stuck on a plane.) One is that the three major stages in the purchase cycle—marketing (acquisition), sales (purchase) and service (use)—each have different natural organizations. Marketing is naturally oriented to customer segments and media; sales is naturally oriented to products and channels; service is naturally oriented to individual customers and products. Since most companies are driven by sales, it makes sense that the product-based organization is dominant. This also explains why the push for customer-centricity comes primarily from marketing and to a lesser extent from customer service. I suspect this insight has additional implications to help manage the transition to customer-centricity, but I haven’t worked them out. (It was only a two-hour flight.)

The other notion is that needs are the connection point between products and customers. That is, customers have needs that products fill. I suppose we already knew that, but it seems to imply that the transition from product- to customer-based organizations could have an intermediate stage of being a needs-based organization. Quite honestly, I don’t know what that would look like, but I’m mentioning it here in case it sparks a flash of insight for someone else. If it helps any, I think a more correct statement of the model is that most products meet a single primary need; several different products might meet the same need; customers have many needs; and different customers have different needs. When I diagram that, it reminds me of a neural network. Again, how or even whether this is of use, I really don’t know. But it strikes me as kind of interesting.
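For what it's worth, that model (each product meets one primary need, each customer has several) can be jotted down as a tiny bipartite structure. All of the products, needs, and customers below are invented examples:

```python
# Toy needs-based model: products each meet one primary need; customers
# each have several needs; product-to-customer matching goes through needs.
PRODUCT_NEED = {"checking account": "payments",
                "credit card": "payments",
                "mortgage": "housing",
                "mutual fund": "savings"}

CUSTOMER_NEEDS = {"Alice": {"payments", "savings"},
                  "Bob": {"housing"}}

def products_for(customer):
    needs = CUSTOMER_NEEDS[customer]
    return sorted(p for p, need in PRODUCT_NEED.items() if need in needs)

print(products_for("Alice"))  # ['checking account', 'credit card', 'mutual fund']
```

Even this crude version shows the intermediate layer doing real work: two products can compete for the same need, and a customer's product list is derived rather than assumed.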

Wednesday, December 13, 2006

Shocking Thought: Maybe Customer Centricity Isn't for Everyone

I’ve been at the National Center for Database Marketing conference these past few days. As usual, I’ve spent most of my time in the exhibit hall gossiping with friends…I mean, doing research. This doesn’t leave much time for attending the sessions. But from what I see on the conference schedule, the buzzword of the moment is “customer-centric”.

Given what we do at Client X Client, I thought this was good news. So I was quite surprised to hear several attendees questioning the concept—as in “we’re not sure customer centricity is for us.”

This reminds me of the boy who goes to a girl’s home to pick her up for a date. The girl’s father pulls him aside and says, “Young man, I want to know whether your intentions are honorable.” The boy looks puzzled for a moment, and then his face lights up. “You mean I have a choice?”

It had not occurred to me that anyone had, or wanted, a choice about whether to be customer-centric. The people asking the question were experienced database marketers, so it’s not that they don’t know any better. All raised the same issue: they work in product-oriented companies, and it simply wasn’t clear they could or should restrict sales of one product to promote another product or to limit total customer contacts, even if this might increase total value per customer.

This is not a new issue, and it’s easy for me as a consultant to simply say, "Yes you should." But these people are not in a position to make the issue go away by reorganizing their company around customers. Realistically, many successful companies will remain product oriented for the foreseeable future.

I can suggest optimization methods that would illustrate the financial benefit of placing limits on customer contact. They can even maintain minimum sales levels per product. But that is a technical solution to a political problem. I doubt it will succeed very often.

What’s more needed is a transition strategy that allows companies to get some of the benefits of customer centricity without abandoning a product orientation. I have no immediate idea what this strategy would look like but will give it some thought. Any suggestions are welcome.

Tuesday, December 12, 2006

Random Notes on Building Business Intelligence Systems

I sat through a three-hour presentation on business intelligence systems and took exactly three lines of notes. These were:

- brand metrics vs. database marketing metrics. This is an interesting point: brand marketers traditionally look at things like awareness levels, while database marketers look at things like retention rates. Per my comments last week on the role of brands in customer experience management, we need to incorporate brand-style measures into our customer value models.

- deliver something useful every 90-120 days: this was in the context of how to keep up momentum on a long-term project. Good thing to keep in mind.

- success stories: also in the context of promoting new systems. The point is you need a few anecdotes to illustrate early success; these will then be repeated endlessly as justification for the project. The classic example is the “beer and diapers” analysis used to justify a data warehouse—a possibly apocryphal tale about how market basket analysis showed that beer and diapers were often purchased together on weekends, and that sales increased when they were put next to each other. The notion of success stories—call them legends—is really important in building and maintaining system support.
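The core computation behind the beer-and-diapers legend is a pairwise co-occurrence count across market baskets. A toy sketch, with invented transaction data:

```python
from itertools import combinations
from collections import Counter

# Toy market basket analysis: count how often each pair of items appears
# in the same transaction, the computation at the heart of the legend.
baskets = [{"beer", "diapers", "chips"},
           {"beer", "diapers"},
           {"milk", "bread"},
           {"beer", "diapers", "milk"}]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

print(pair_counts.most_common(1))  # [(('beer', 'diapers'), 3)]
```

Production association-rule mining adds support and confidence thresholds on top of these raw counts, but the legend itself only needs the pair that keeps showing up.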

Not the most productive three hours I’ve ever spent, but so it goes.

Sunday, December 10, 2006

Making the Case for Customer Experience Management

I’m still pondering the question of what it would take for Customer Experience Management to be the Next Big Thing. Clearly it comes down to becoming a “do this or die” proposition: as in, “your company must adopt Customer Experience Management or it will be overtaken by others who do”. That’s what worked for CRM, Six Sigma, Business Process Reengineering, and probably whatever came before; it’s working today for concepts like Innovation and technologies like RFID and Service Oriented Architectures.

It’s no mystery why “do or die” works: managers can only focus on a handful of strategic initiatives at a time, so anything that isn’t a life or death matter will be trumped by something that is. (Do people still know what “trumped” means? Do they realize it comes from the card game of bridge, and not Donald Trump or trumpeting, either of which could be plausible sources for a concept of “making more noise than anything else”?)

So far I’ve come up with two arguments for Customer Experience Management that strike me as sufficiently frightening. One, which I mentioned last week, is that product and channel proliferation make it impossible to continue with conventional business organizations. The key phrase here is “make it impossible”. Given a choice between “change” and “no change”, most people will choose the status quo. It’s only after they are convinced that some change is inevitable that they’ll consider which change they find most appealing (or, least distasteful).

The proliferation argument is good because it’s clear to everyone that product and channel proliferation are real. That proliferation makes existing organizational structures obsolete is a bit harder to prove, although any business struggling to work out a sensible plan for multi-channel activities will be receptive. (The formal argument is that traditional organizations treat products and channels as independent profit centers, but now there are so many products and channels that they must be coordinated to be effective.) The biggest stretch is the claim that the new structure must be organized around customers and governed by Customer Experience Management principles. I happen to believe this is true and am perfectly willing to argue it. Some of the iconic business successes of the recent past—Google search, Apple iPod, Westin Starwood Heavenly Bed—can be explained in terms of customer experience.

The other argument I find promising just happens to match our positioning here at Client X Client: that the way to maximize business value is to increase the lifetime value of each customer, and the way to do that is to maximize the impact on lifetime value of each customer experience. As that statement illustrates, this is a somewhat complicated case to make, which weighs heavily against it. On the other hand, the argument does have pretty much irrefutable intellectual rigor. At least, I can’t find any logical flaws.

But you still have to show people why, if this is such a good idea, businesses haven’t been doing it for years. My answer is that the technology to do the measurements is only now becoming available. That’s also my answer when people ask why I can’t point to companies that have already adopted the strategy and succeeded with it.

The good news is, there are many cases where less comprehensive value measurements have proven fruitful. There are also industries where lifetime value is a well-established concept. So it’s relatively easy to portray this approach as the natural next step in an on-going evolution. Unfortunately, that argument appeals more to pioneers than followers. It’s also harder to link that argument to the all-important fear of being left behind. (Harder, but not impossible: you only have to convince people that adopters will gain real competitive advantage. It’s also early enough to promote the benefits of being first.)

I think both of these arguments have a chance at transforming Customer Experience Management from interesting to essential. If there are other, more compelling approaches I'd love to hear about them. What's certain is that unless Customer Experience Management becomes more than an exhortation to “be nice to your customers”, it will never have the impact it deserves.

Friday, December 08, 2006

Reading the Hype Meter for Customer Experience Management

Dale Wolf’s comment on my Wednesday post suggests hopefully “it could be that CEM recognition is about to get its due” based on the increasing frequency of “customer experience manager” as a job title. This got me to wondering how you would really measure the progress of a concept towards a buzzword tipping point, or whatever you want to call it.

My initial benchmark, which I still think is a good one, is a cover on BusinessWeek. The thought here is that it represents mainstream business reporting. Remember that my real criterion for buzzword success is that companies feel they need something but don’t really understand it. I consider that success because that’s when they hire consultants. So the first BusinessWeek cover is a pretty good indication that an idea is considered both new and important. So far as I recollect, Customer Experience Management hasn't had its cover yet. And I would have noticed.

Another indicator would be trade conferences. There is apparently just one series of conferences explicitly titled “Customer Experience Management”, run by the Conference Board. Better than nothing but not too impressive—think how many CRM conferences there are.

Then I happened to notice that Hedge Funds for Dummies has just been published (I’m not making that up). Another excellent measure of mainstream cultural presence. A further look (still not kidding) turns up RFID for Dummies, Six Sigma for Dummies, Service Oriented Architecture for Dummies, even Sarbanes-Oxley for Dummies (do you really want to see your CFO reading that one?). Of course, there are several titles related to CRM software (Microsoft CRM, Goldmine, ACT!), as well as generic topics like Branding for Dummies, Customer Service for Dummies and Marketing for Dummies. But no Customer Experience Management for Dummies.

How about the big consultancies—McKinsey, Accenture, and such? I’m sure they do customer experience management work, but the few I checked don’t seem to have practices with that title. No buzz-power there.

I considered in-flight magazines, another key pathway into the executive brain case. But I couldn’t find any index of articles to check. Still, I can’t recall seeing any articles on the topic in my personal travels.

Wikipedia was interesting: it has a brief article on “customer experience management” based mostly on Bernd Schmitt’s work. It quoted Schmitt’s broad concept of CEM as "the process of strategically managing a customer's entire experience with a product or a company", but then later referred to CEM as only an “approach” to relationship marketing.

Wikipedia has an even shorter entry under “customer experience”, but the definition is so good (i.e., close to my own), that I’ll quote it in full:

“Customer experience is the quality of the experience as apprehended by a customer resulting from direct or indirect contact with any touch point of a company, including marketing, branding, customer service, support, in-store experience, product design, service or Web site, etc. Customer experience in this broader sense also includes "User Experience", which as the name suggests, is concerned with, and limited to, direct usage of a product.

“The quality of the customer experience at any touch point individually can affect the overall relationship a customer has with a company. For example, a customer with a very high opinion of a company and its products may have a complete turn-around after a negative post-sales service customer experience. Or a company with an otherwise fine track record at many customer touch-points may create a negative experience through a poorly executed marketing communication piece or practice.”

All these are interesting but still pretty subjective. Maybe the clearest measure of buzz is Google Trends, which counts how often a term is searched for and how often it appears in Google News articles. “Customer experience management” didn’t have enough volume to return any results—which is probably all we really need to know. But I tried “customer experience” and got some data. I compared it to “customer relationship”, and to my surprise found the two were about the same in news reference volume. “Customer experience” even seems to pull ahead a bit during 2006. “Customer relationship” has two to three times as many searches as “customer experience”, but even that ratio isn’t as high as I had imagined. Click here for the results.

Promising – except then I thought to try “CRM”. Now the ratio in favor of CRM is so high you can barely gauge it—maybe 5:1 for news and 20:1 for search. I tried “CEM”, but that can stand for many different things so the results weren’t useful. For the sake of comparison, I threw in SOA, RFID and Six Sigma, which all run at about half as many searches as CRM and the same or slightly more news articles. Click here for a look.

What about the blogosphere? Somewhat similar results: Technorati finds 55,275 mentions of “customer experience management” and a similar 51,545 of “customer relationship management”, but 153,303 for “CRM”.

Should we even bother with YouTube? Why not? “Customer relationship management” has 17 entries and “CRM” has 123. “Customer experience management” gets all of seven. (“Customer service”, where you probably don’t want to appear, has 578.)

This research isn’t simply a goose chase. Measuring buzz is something marketers need to learn to do as community-based channels become more important. But so far as the hype level for customer experience management goes: I’m still not impressed.

Thursday, December 07, 2006

Interactions are for Companies, Experiences are for Brands

Yesterday’s Webinar seemed to go well, based on the little feedback available. In case you missed it, my slides are available here on the Client X Client Web site. The gist was that there are too many products and channels today for the traditional product manager / channel manager organization to function. Instead, companies have to organize around the customer.

One of the concepts I didn’t have time to develop at length in the presentation was the difference between interactions and experiences. Basically, interactions are direct contacts between a company and a customer. This means the company is aware of the interaction as it’s happening, and has at least the potential to change the treatments it delivers. Experiences, on the other hand, are any contact that the customer associates with the company, whether or not the company is actively involved.

A call center interchange is an interaction, because somebody from the company (the call center agent) can choose which messages to present. (Whether the company actively manages those choices is another question, and an important one.) A television advertisement is not an interaction, because the company doesn’t know who is seeing it and can’t alter the message based on who they are or how they react. But the television ad is still an experience, because the customer knows the company is involved and therefore adjusts her opinion of the company based on what she sees.

Other examples of experiences that are not interactions: installing and using a product (assuming there is no direct contact with the company, such as an activation process); having a product repaired by a third party; discussing a product with another consumer. Of course, the company still has some impact on these experiences through its original actions in creating the TV ad, designing the product, or training its dealers. But the experience of the individual customer is beyond the company’s control.


Another way to look at this is that an interaction is with the company, while an experience is with the brand. (Feel free to ignore that sentence if you don’t think brands are important. If this were an interaction, I’d only show that sentence to people who care about brands. But I can’t customize the blog based on reader attitudes. The best I can do is try to create an experience with a positive impact on all readers. I’m doing that right now by acknowledging that not everyone thinks in terms of brands.) Brands are relevant here because brands are based on consumer attitudes, and attitude is affected by all events a consumer associates with an entity.

Conversely, brand is not affected by an event when the customer doesn’t know an entity was involved. If my notebook computer catches fire because of a battery problem, I’ll blame Sony if I know they made the battery, but Dell if I only know who made the computer. If I don’t even know that, I’ll probably blame the store that sold it to me. If I know everybody’s identity, I might blame them all. But chances are the brand whose image will be most heavily impacted will be the (known) brand closest to the problem.

I’m not sure I’ve made a convincing case here for the importance of brand as part of the distinction between experiences and interactions. I suppose I feel the association makes sense because interactions and companies are both concrete entities, while experiences and brands are both broader concepts. So each just seems to go with the other. Maybe that’s not the most rational justification you’ve ever seen, but it’s the best I can offer.

I hope you enjoyed the experience.

Wednesday, December 06, 2006

Customer Experience Management Needs More Hype (I'm Serious)

The phrase “customer experience” pops up more and more often in vendor promotions and other business discussions. But somehow “customer experience management” (CEM) doesn’t seem to have reached the status of a truly hot buzzword. By that, I mean there is little hype suggesting that CEM is the solution to all your problems, or that your company must adopt CEM or fall hopelessly behind its competitors.

This may or may not be a good thing. Hype in general is rather unattractive because it substitutes mindless conformity for serious thought. But hype also supports the large amount of continuous effort required for new ideas to penetrate the collective consciousness of the business world. They need the repetition provided by years of articles, conference presentations, and vendor promotion. From a purely practical standpoint, you need enough interest for analysts, journalists, academics and consultants to make a living talking about a topic before much has really happened with it. Without the weight of hype behind them, good ideas vanish before they are adopted.

One obstacle to the hyping of customer experience may be that the term is used in different ways. Web marketers think of it as what site visitors see, and talk about optimizing the experience in terms of managing page flows, improving response times and delivering interactive graphics. Customer service vendors use the term in connection with problem resolution and customer satisfaction. Others, including Client X Client, view customer experience as encompassing all contacts between a customer and a brand. Obviously we’re right and they’re wrong, but that’s not the point. I don’t think the conflicting definitions are really the problem. After all, any successful buzzword is adapted by different people to mean different things.

Rather, my personal theory is that CEM hasn’t taken off because technology vendors haven’t wrapped it into a package. I can definitely buy a CRM system or a Business Intelligence system or Services Oriented Architecture technology, but who—apart from the specialized Web and customer service vendors—offers “CEM Software”? It’s not merely that people selling these systems have marketing budgets to promote the notion, although that’s part of it. It’s also that businesspeople prefer problems they can solve by buying something. Otherwise, it’s like going to the doctor and being told you have an incurable disease—or, more precisely, a disease he has no medicine to cure but you can keep under control with diet and exercise. That’s just such hard work.

It’s also why support groups like Weight Watchers or Alcoholics Anonymous are so important. People want some sort of help with their problems, even if it’s just talking about them with fellow sufferers. Of course, customer experience management is not a disease. It’s an opportunity to do things better. But the point remains that people will shy away from addressing it unless they are offered help. More formally, they need a structured approach that can help them manage the process with a good chance of success. Whether this is consulting or education or technology, I suspect customer experience management will never really enter the hype cycle until someone provides it.

Tuesday, December 05, 2006

Surado White Paper: Small Print, Big Heart

I suppose it’s petty to complain about the size of the type used in a white paper, but you would think something called “The CEO’s Guide to CRM Success” would recognize that senior managers might struggle with a six point font. Yet somehow that tiny type—presumably chosen to keep the paper small enough for busy readers—fits with the general feel of this document, available here from Surado Solutions. There is a slightly excessive but sincere and ultimately charming earnestness about the paper, which crams a case study, ten best practices, fifteen key benefits, an exhortation for senior management leadership and advice on choosing a system into its three squint-inducing pages. These people may lack a bit of polish, but they are certainly trying as hard as they can. (Did I just call a CRM white paper “charmingly earnest”? I need a vacation, or at least to start reviewing wine.)

There are two specific reasons I’m fond of this paper. The first is that it opens with a proper perspective on CRM: “CRM is first and foremost a strategy and corporate philosophy that puts the customer at the center of business operations so as to increase profits by improving customer acquisition and retention.” The second is that the mini-case study describes a selection process that started with a detailed set of business objectives. That’s exactly the right thing to do, but so many projects start elsewhere and so many vendor white papers don’t try to improve the situation.

Other portions of the paper are equally sound, but somewhat scattershot. The “10 Steps to a Successful CRM Initiative” run from “1. Business executives must ‘own’ CRM projects” through “5. Get expert advice from technologists” to “10. Identify tangible and measurable links to business performance.” All true, but not exactly a comprehensive, step-by-step process. The list of key benefits is similarly random, with something for everybody from Six Sigma fanatics to customer service managers. The section on Choosing a System is the least balanced, focusing mostly on the need for integration. Needless to say, integration is a major selling point of Surado’s product, an on-premise CRM package for small to mid-size businesses. But a little self-promotion is more than forgivable.

In short, you won’t find any brilliant new insights here, but it’s a relatively painless reminder of some important items for your project checklist. If you don’t mind the eye strain.

Monday, December 04, 2006

Channel Partners for Hosted Software: Help or Hurt?

Last Friday I listed the competitive strengths of different types of CRM vendors. Turns out that eWeek had something similar on its mind. “SaaS: More, not less, channel” (eWeek, November 27, 2006), available here, argues that hosted systems will move beyond their current advantages of low cost and rapid implementation. The new positioning will draw on expert channel partners such as Value Added Resellers to help customers not simply install the software, but also train their staff and modify their business processes. Channel partners will also use the hosted systems as platforms for multiple applications and industry-specific “vertical” solutions. The eWeek article quotes, and draws heavily from, a report, “SaaS 2.0: Software-as-a-Service as Next-Gen Business Platform”, from Saugatuck Technology.

Saugatuck seems to have done its homework, interviewing 40+ executives, surveying another 155, and speaking with more than a dozen vendors and investors. Perhaps as important, what they say makes perfect sense: all those experts selling and supporting on-premise software won’t go away just because hosted systems become more popular. Nor will end-users’ need for support suddenly vanish. So it’s reasonable to expect that the experts will shift to servicing the hosted platforms as demand moves in that direction.

But I don’t find this a pleasant prospect. Even in the on-premise world, services easily account for more than half the total cost of deploying packaged software: a typical multiplier is 3 to 4 times the license fee, and 10 times is not unheard of. Very little of that work is the software installation itself—most is customization, integration and training. If the same channel partners provide the same services for hosted solutions as for on-premise ones, the costs can’t change by much.

Of course, companies that really need a lot of customization and integration won’t have much of a choice. But one promise of hosted systems has been that users would be able to do more for themselves. Less customization would be needed because the systems could be configured without code changes. Integration would be simplified by standardized connections using technology like Service Oriented Architectures.

You can argue that even these sorts of configuration and SOA connections are beyond the capabilities of many organizations, and thus expert assistance will still be needed. That’s probably true to some degree. But looking at what amateurs can do with things like Web mash-ups, it’s reasonable to hope that the tasks can be simplified to the point where users can do much more for themselves, and experts will accomplish the remaining work in a fraction of the time they now take with traditional technologies.

Here’s the problem: if the hosted vendors depend on channel partners to sell their systems, they may not invest in features that cut into those channel partners’ income by letting end-users do more for themselves. I recognize this sounds a bit paranoid, but channel partners do play a large role in developers’ decisions. The good news, I think, is that there will always be new hosted competitors without a large channel partner base. These firms will have every reason to keep their products simple. So hopefully the market will provide a range of solutions that let some users do things for themselves while making help available to other users who need it.

As to the business process redesign component of channel partner services: to some degree, that can be built into software. To a greater degree, though, I suspect that will always be something that requires significant human expertise. But process redesign and system deployment are quite different skills, so there’s a hope that simple-to-deploy systems will allow companies to hire process redesign specialists who are not also systems integrators. Allowing more firms to offer redesign services should increase competition and ultimately serve end-users better. But this also depends on the hosted software developers keeping their products simple enough that non-technicians can deploy them.

Friday, December 01, 2006

Building a Complete List of CRM Selection Criteria (or, Learning to Love Vendor White Papers)

Yesterday’s entry about Sage Software’s paper “17 Rules of the Road for CRM” may have given an overly positive impression. In stating that the points in the paper were all valid, I didn’t mean that the paper itself is a fair treatment of the subject. There are other, equally valid points that the paper leaves out.

What are these points? Let's invert Sage’s own approach, which was to highlight the weaknesses of its competitors. Start with a list of the competitors’ strengths.

- point solutions: low cost, fitness to task (either doing one thing simply or providing deep, sophisticated functionality), vertical specialization by industry or business function, and vendor expertise.

- enterprise software: inherent integration (shared data, technology and process flows), sophisticated functionality, vertical versions, customizability, and vendor size (which translates into greater resources, financial stability, and a large base of people familiar with the product).

- hosted systems: quick deployment, low initial investment, easy remote access via the Web, and little burden on in-house technical staff.

It’s easy enough to formulate these advantages as “rules” similar to those in the Sage white paper. Indeed, if you look at white papers from vendors in any of those categories, you’ll see that’s exactly what they do.

There’s nothing wrong with that. Cost, deployment time, integration, functionality, customizability, vertical adaptations, specialized expertise, remote access, technical burden and vendor stability are all important considerations in making a selection. My only point here is that you can’t expect to find them all in any one vendor’s white paper, because each vendor will pick only the points that play to its advantage.

On the other hand, if you collate papers from a variety of vendors, you can assemble a pretty complete set. It's just like Pokemon cards: collect them all!

Thursday, November 30, 2006

Sage Software Offers Sound Rules for CRM

Sage Software is one of those amorphous software companies that have grown by acquiring many products and keeping them pretty much separate. The company does have a focus: it sells accounting and CRM software to small and mid-sized businesses. But under that umbrella its Web site lists thirty different products, including the well-known brands Peachtree, DacEasy, Accpac and MAS for accounting, ACT! for contact management, and SalesLogix and Sage CRM for CRM.

This broad product line poses a particular challenge in writing a white paper that does what white papers are supposed to do: give objective-sounding information that subtly pushes readers toward the sponsor’s products. With so many different products, Sage can’t simply promote the features of any one of them.

But Sage’s paper “17 Rules of the Road for CRM”, available here, rises splendidly to the task. It does offer some very sound advice, from taking a broad perspective (“1. CRM is more than a product, it’s a philosophy”; “2. Customers are everywhere: clients, vendors, employees, mentors”) to careful preparation (“5. Planning pays”, “6. Prepare for product demos”) to deployment (“14. Implementation method is as important as product choice”, “15. Training can’t be ‘on the job’”, “16. Test, or crash and burn”, “17. Focus on CRM goals: improve customer satisfaction, shorten sales cycles, and increase revenue”). Yet it also throws in some points that are tailored to supporting Sage against its competitors.

(Come to think of it, Sage sells mostly through channel partners who provide the consulting, selection and implementation services highlighted in the points listed above. So even these points are really leading readers to Sage strengths.)

Specifically, Sage CRM sells against three types of challengers: point solutions such as contact management or customer service systems; enterprise software such as Siebel/Oracle or SAP CRM; and hosted systems such as Salesforce.com and RightNow. There are white paper rules targeted to each:

- point solutions: “3. Don’t confuse CRM with contact management”, “8. CRM is not a point solution”, “10. Multi-channel access is the only way to go”, “13. CRM is not for any single department, it’s for the whole company”

- enterprise software: “4. CRM solutions are different for midsized companies”, “12. High cost does not necessarily mean high value”

- hosted systems: “7. Implement current technology” (don’t rely on promised future features; include back-office integration and customizability), “9. Speed ROI through back-office integration”, “11. Look for true platform flexibility” (ability to switch between hosted and installed versions)

These points are perfectly valid—the only one I might question is whether you really need a product that can switch between hosted and installed versions. I’m simply noting, with genuine admiration, how nicely Sage has presented them in ways that support its particular interests. It’s always fun to see a master at work.

Wednesday, November 29, 2006

David Raab in Webinar

I'll be presenting a Webinar on "Getting Started with Marketing Automation", sponsored by Unica, on Wednesday, December 6 at 11:30 Eastern / 8:30 Pacific. Click here to register.

One Final Post on Multi-Variate Testing

It’s been fun to explore multi-variate testing in some depth over these last few entries, but I fear you Dear Readers will get bored if I continue to focus on a single topic. Also, my stack of white papers to review is getting taller and I do so enjoy critiquing them. (At the top of the pile is Optimost’s “15 Critical Questions to Ask a Multivariable Testing Provider” available here. This covers many of the items I listed on Monday although of course it puts an Optimost-centric spin on them. Yes I read it before compiling my own list; let’s just call that “research”.)

Before I leave the topic altogether, let me share some final tidbits. One is that I’m told it’s not possible to force a particular combination of elements into a Taguchi test plan because the combinations are determined by the Taguchi design process itself. I’m not 100% sure this is correct: I suspect that if you specified a particular combination as a starting point, you could design a valid plan around it. But the deeper point, which certainly does make sense, is that substituting a stronger for a weaker combination within an active test plan would almost surely invalidate the results. The more sensible method is to keep tests short, and either complete the current test unchanged or replace it with a new one if an important result becomes clear early on. Remember that this applies to multi-variate tests, where results from all test combinations are aggregated to read the final results. In a simpler A/B test, you have more flexibility.
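For the curious, the balance that forcing a combination would destroy is easy to see in a toy sketch (my own illustration, not any vendor's algorithm). A standard L4 orthogonal array tests three two-level factors in just four runs, with every pair of factors seeing every combination of levels equally often. Swap in an arbitrary combination and that balance, and with it the clean read on each factor, is gone.

```python
from itertools import combinations

# L4 orthogonal array: four runs covering three two-level factors (0/1).
# Any pair of columns shows each of the four level pairs exactly once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def is_balanced(runs):
    """True if every pair of factors sees each level combination equally often."""
    n_factors = len(runs[0])
    for i, j in combinations(range(n_factors), 2):
        counts = {}
        for run in runs:
            pair = (run[i], run[j])
            counts[pair] = counts.get(pair, 0) + 1
        # Balance requires all four level pairs present, each the same number of times.
        if len(counts) != 4 or len(set(counts.values())) != 1:
            return False
    return True

# Force a "promising" combination into the plan, replacing the last run.
forced = L4[:3] + [(1, 1, 1)]
```

The original array passes the balance check; the forced version fails it, which is why reading aggregated results from the altered plan would mislead.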

The second point, which is more of a Note To Self, is that multi-variate testing still won't let me use historical data to measure the impact of different experiences on future customer behavior. I want to do this for value formulas in the Customer Experience Matrix. The issue is simple: multi-variate tests require valid test data. This means the test system must determine which contents are presented, and everything else about the test customers must be pretty much the same. Historical data won’t meet these conditions: either everyone saw the same content, or the content was selected by some existing business rule that introduces its own bias. The same problem really applies to any analytical technique, even things like regression that don’t require data generated from structured tests. When a formal test is possible, multi-variate testing can definitely help to measure experience impacts. But, as one of the vendors pointed out to me yesterday, it’s difficult for companies to run tests that last long enough to measure long-term outcomes like lifetime value.

Tuesday, November 28, 2006

Distinguishing among Multi-Variate Testing Products (I'm going to regret this)

My last two posts listed many factors to consider when evaluating multi-variate Web testing systems. Each factor is important, which in practical terms means it can get you fired (if it prevents you from using a system you’ve purchased). So there’s really no way to avoid researching each factor in detail before making a choice.

And yet…there are so many factors. If you’re not actively engaged in a selection process, isn’t there a smaller number of items you might keep in mind when mentally classifying the different products?

One way to answer this is to look at the features which truly appear to distinguish the different vendors—things that either are truly unique, or that the vendor emphasizes in their own promotions. Warning: “unique” is a very dangerous term to use about software. Some things that are unique do not matter; some things that vendors believe are unique are not; some things that are unique in a technical sense can be accomplished using other, perfectly satisfactory approaches.

A list of distinguishing features only makes sense if you know what is commonly available. In general (and with some exceptions), you can expect a multi-variate testing system to:

- have an interface that lets marketers set up tests with minimal support from Web site technicians
- support Taguchi method multi-variate testing and simpler designs such as A/B splits
- use segmentation to deliver different tests to different visitors
- use Javascript snippets on each Web page to call a test engine which returns test content
- use persistent cookies, and sometimes stored profiles, to recognize repeat visitors
- provide real time reporting of test results
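As a rough sketch of the last two items (not any particular vendor’s implementation), a test engine can hash a persistent cookie ID so that a repeat visitor always receives the same test content without the engine storing any state:

```python
import hashlib

def assign_variant(visitor_id, test_name, variants):
    """Deterministically map a visitor to a variant. Hashing a persistent
    cookie ID means a repeat visitor always sees the same content, with no
    server-side state required."""
    digest = hashlib.md5(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same visitor gets the same variant on every page view and visit.
v1 = assign_variant("cookie-123", "homepage-headline", ["control", "B", "C"])
v2 = assign_variant("cookie-123", "homepage-headline", ["control", "B", "C"])
print(v1 == v2)  # True
```

In practice the Javascript snippet on the page would pass the cookie value to the engine, which runs logic along these lines and returns the chosen content.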

That said, here is what strikes me as the single most distinguishing feature each of the multi-variate testing vendors (listed alphabetically). No doubt each vendor has other items it would like to add—I’ve listed just one feature per vendor to make as clear as possible that this isn’t a comprehensive description.

- Offermatica: can run multi-page and multi-session tests. This isn’t fully unique, but some products only test components within a single page.

- Optimost: offers “optimal design” in addition to the more common Taguchi method for multi-variate testing. According to Optimost, "optimal design" does a better job than Taguchi of dealing with relationships among variables.

- SiteSpect: delivers test content by intercepting and replacing Web traffic rather than inserting Javascript snippets. This can be done by an on-site appliance or a hosted service. (Click here to see a more detailed explanation from SiteSpect in a comment on yesterday’s post.)

- Vertster: uses AJAX/DHTML to generate test contents within the visitor’s browser rather than inserting them before the page is sent. All test content remains on the client’s Web server.

There are (at least!) two more vendors who offer multi-variate testing but are not exactly focused in this area:

- Kefta: tightly integrates multi-variate testing results with business rules and system-generated visitor profiles used to select content. Kefta considers itself a “dynamic targeting” system.

- Memetrics: supports “marketing strategies optimization” with installed software to build “choice models” of customer preferences across multiple channels. Also has a conventional, hosted page optimization product using A/B and multi-variate methods.

Monday, November 27, 2006

Still More on Multi-Variate Testing (Really Pushing It for a Monday)

My last entry described in detail the issues relating to automated deployment of multi-variate Web test results. But automated deployment is just one consideration in evaluating such systems. Here is an overview of some others.

- segmentation: as pointed out in a comment by Kefta’s Mark Ogne, “testing can only provide long term, fruitful answers within a relatively homogeneous group of people.” Of course, finding the best ways to segment a Web site’s visitors is a challenge in itself. But assuming this has been accomplished, the testing system should be able to identify the most productive combination of components for each segment. Ideally the segments would be defined using all types of information, including visitor source (e.g. search words), on-site behavior (previous clicks), and profiles (based on earlier visits or other transactions with the company). For maximum effectiveness, the system should be able to set up different test plans for each segment.

- force or exclude specified combinations: there may be particular combinations you must test, such as a previous winner or the boss’s favorite. You may wish to exclude other combinations, perhaps because you’ve tested them before. The system should make this easy to do.

- allow linkages among test components: certain components may only make sense in combination; for example, service plans may only be offered for some products, or some headlines may be related to specific photos. The testing system must allow the user to define such connections and ensure only the appropriate combinations are displayed. This should accommodate more than simple one-to-one relationships: for example, three different photos might be compatible with the same headline, while three different headlines might be compatible with just one of those photos. Such linkages, and tests in general, should extend across more than a single Web page so each visitor sees consistent treatments throughout the site.

- allow linkages across visits: treatments for the same visitor should also be consistent across site visits. Although this is basically an extension of the need for page-to-page consistency, the technical solutions are different. Session-to-session consistency implies a persistent cookie or user profile or both, and is harder to achieve because of visitor behavior such as deleting cookies, logging in from different machines, or using different online identities.

- measure results across multiple pages and multiple visits: even when the components being tested reside on a single page, it’s often important to look at behaviors elsewhere on the site. For example, different versions of the landing page may attract customers with different buying patterns. The system must be able to capture such results and use them to evaluate test performance. It should also be able to integrate behaviors from outside of the Web site, such as phone orders or store visits. As with linkages among test components, different technologies may be involved when measuring results within a single page, across multiple pages, across visits and across channels. This means a system’s capabilities for each type of measurement must be evaluated separately.

- allow multiple success measures. Different tests may target different behaviors, such as capturing a name, generating an inquiry or placing an order. The test system must be able to handle this. In addition, users may want to measure multiple behaviors as part of a single test: say, average order size, number of orders, and profit margin. The system should be able to capture and report on these as well. As discussed in last Wednesday’s post, it can be difficult to combine several measures into one value for the test to maximize. But the system should at least be able to show the expected results of the tested combinations in terms of each measure.

- account for interactions among variables: this is a technical issue and one where vendors who use different test designs make claims that only an expert can assess. The fundamental concern is that specific combinations of components may yield results that are different from what would be predicted by viewing them independently. To take a trivial example, a headline and body text that gave conflicting information would probably depress results. Be sure to explore how any vendor you consider handles this issue and make sure you are comfortable with their approach.

- reporting: the basic output of a multi-variate test is a report showing how different elements performed, by themselves and in combination with others. Beyond that, you want help in understanding what this means: ranking of elements by importance; ranking of alternatives within each element; confidence statistics indicating how reliable the results are; any apparent interaction effects; estimated results for the best combination if it was not actually tested. A multi-variate test generates a great deal of data, so efficient, understandable presentation is critical. In addition to their actual reporting features, some vendors provide human analysts to review and explain results.

- integration difficulty and performance: the multi-variate testing systems all take over some aspect of Web page presentation by controlling certain portions of your pages. The work involved to set this up and the speed and reliability with which test pages are rendered are critical factors in successful deployment. Specific issues include the amount of new code that must be embedded in each page, how much this code changes from test to test, how much volume the system can handle (in number of pages rendered and complexity of the content), how result measurement is incorporated, how any cookies and visitor profiles are managed, and mechanisms to handle failures such as unavailable servers or missing content.

- impact on Web search engines: this is another technical issue, but a fairly straightforward one. Content managed by the testing system is generally not part of the static Web pages read by the “spiders” that search engines use to index the Web. The standard solution seems to be to put the important search terms in a portion of the static page that visitors will not see but the spiders will still read. Again, you need to understand the details of each vendor’s approach, and in particular how much work is involved in keeping the invisible search tags consistent with the actual, visible site.

- hosted vs. installed deployment: all of the multi-variate testing products are offered as hosted solutions. Memetrics and SiteSpect also offer installed options; the others don’t seem to but I can’t say for sure. Yet even hosted solutions can vary in details such as where test content is stored and whether software for the user interface is installed locally. If this is a major concern in your company, check with the vendors for the options available.

- test setup: last but certainly not least, what’s involved in actually setting up a test on the system? How much does the user need to know about Web technology, the details of the site, test design principles, and the mechanics of the test system itself? How hard is it to set up a new test and how hard to make changes? Does the system help to prevent users from setting up tests that conflict with each other? What kind of security functions are available—in a large organization, there may be separate managers for different site sections, for content management, and for approvals after the test is defined. How are privacy concerns addressed? What training does the company provide and what human assistance is available for technical, test design and marketing issues? The questions could go on, but the basic point is you need to walk through the process from start to finish with each vendor and imagine what it would be like to do this on a regular basis. If the system is too hard to use, then it really doesn’t matter what else it’s good at.
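Several of the items above, component linkages in particular, come down to generating only the valid combinations before the test plan is built. A minimal Python sketch with hypothetical linkage rules:

```python
from itertools import product

headlines = ["H1", "H2", "H3"]
photos = ["P1", "P2", "P3"]

# Hypothetical linkage rules: H1 is compatible with any photo, while
# H2 and H3 only make sense alongside P1.
def compatible(headline, photo):
    return headline == "H1" or photo == "P1"

combos = [(h, p) for h, p in product(headlines, photos) if compatible(h, p)]
print(len(combos))  # 5 of the 9 raw combinations survive
```

This is the many-to-many case described above: three photos work with one headline, but only one photo works with the other two headlines.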

Wednesday, November 22, 2006

More on Web Optimization: Automated Deployment

I’ve learned a bit more about Web optimization systems since yesterday’s post. Both Memetrics and Offermatica have clarified that they do in fact support some version of automated deployment of winning test components. It’s quite possible that other multi-variate testing systems do this as well: as I hope I made clear yesterday, I haven’t researched each product in depth.

While we’re on the topic, let’s take a closer look at automated deployment. It’s one of the key issues related to optimization systems.

The first point to consider is that automated anything is a double-edged sword: it saves work for users, often letting them react more quickly and manage in greater detail than they could if manual action were required. But automation also means a system can make mistakes which may or may not be noticed and corrected by human supervisors. This is not an insurmountable problem: there are plenty of techniques to monitor automated systems and to prevent them from making big mistakes. But those techniques don’t appear by themselves, so it’s up to users to recognize they are needed and demand they be deployed.

With multi-variate Web testing in particular, automated deployment forces you to face a fundamental issue in how you define a winner. Automated systems aim to maximize a single metric, such as conversion rate or revenue per visit. Some products may be able to target several metrics simultaneously, although I haven’t seen any details. (The simplest approach is to combine several different metrics into one composite. But this may not capture the types of constraints that are important to you, such as capacity limits or volume targets. Incorporating these more sophisticated relationships is the essence of true optimization.) Still, even vendors whose algorithms target just one metric can usually track and report on several metrics. If you want to consider multiple metrics when picking a test winner, automated deployment will work only if your system can automatically include those multiple metrics in its winner selection process.
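A sketch of the simple composite approach, where the weights are a business judgment I’ve invented for illustration:

```python
def composite_score(results, weights):
    """Collapse several success measures into one value to maximize.
    The weights are a business judgment, not something the test derives."""
    return sum(results[metric] * w for metric, w in weights.items())

variant_a = {"orders": 120, "avg_order_size": 85.0, "margin_pct": 0.32}
variant_b = {"orders": 140, "avg_order_size": 70.0, "margin_pct": 0.30}
weights = {"orders": 1.0, "avg_order_size": 0.5, "margin_pct": 100.0}

# Variant B wins on the composite despite a smaller average order.
print(round(composite_score(variant_a, weights), 1))  # 194.5
print(round(composite_score(variant_b, weights), 1))  # 205.0
```

Note what the weighted sum cannot express: constraints like capacity limits or volume targets, which is where true optimization goes beyond a composite metric.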

A second consideration is automatic cancellation of poorly performing options within an on-going test. Making a bad offer is a wasted opportunity: it drags down total results and precludes testing something else which could be more useful. Of course, some below-average performance is inevitable. Finding what does and doesn’t work is why we test in the first place. But once an option has proven itself ineffective, we’d like to stop testing it as soon as possible.

Ideally the system would automatically drop the worst combinations from its test plan and replace them with the most promising alternatives. The whole point of multi-variate testing is that it tests only some combinations and estimates the results of the rest. This means it can identify untested combinations that work better than anything that's actually been tried. But you never know if the system’s estimates are correct: there may be random errors or relationships among variables (“interactions”) that have gone undetected. It’s just common sense—and one of the ways to avoid automated mistakes—to test such combinations before declaring them the winner. If a system cannot add those combinations to the test plan automatically, benefits are delayed as the user waits for the end of the initial test, reads the results, and sets up another test with the new combination included.
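A toy example (made-up rates) of how such an estimate for an untested combination is built, and why it deserves confirmation before being declared the winner:

```python
# Observed conversion rates for the combinations actually tested
# (made-up numbers; a real plan would cover more cells).
tested = {
    ("H1", "P1"): 0.030,
    ("H1", "P2"): 0.040,
    ("H2", "P1"): 0.035,
}

# Additive (main-effects) estimate for the untested (H2, P2) combination.
base = tested[("H1", "P1")]
h2_effect = tested[("H2", "P1")] - base   # +0.005
p2_effect = tested[("H1", "P2")] - base   # +0.010
estimate = base + h2_effect + p2_effect   # best predicted, never observed

# The estimate assumes no interaction between the two elements, which is
# exactly what may be wrong, so confirm it before declaring a winner.
needs_confirmation = ("H2", "P2") not in tested
print(round(estimate, 3), needs_confirmation)  # 0.045 True
```

If headline H2 and photo P2 clash, the real rate could be far below 0.045, which is why adding the predicted winner back into the live test plan matters.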

So far we’ve been discussing automation within the testing process itself. Automated deployment is something else: applying the test winner to the production system—that is, to treatment of all site visitors. This is technically not so hard for Web testing systems, since they already control portions of the production Web site seen by all visitors. So deployment simply means replacing the current default contents with the test winner. The only things to look for are (a) whether the system actually lets you specify default contents that go to non-test visitors and (b) whether it can automatically change those default contents based on test results.

Of course, there will be details about what triggers such a replacement: a specified time period, number of tests, confidence level, expected improvement, etc. Plus, you will want some controls to ensure the new content is acceptable: marketers often test offers they are not ready to roll out. At a minimum, you’ll probably want notification when a test has been converted to the new default. You may even choose to forego fully automated deployment and have the system request your approval before it makes a change.

One final consideration. In some environments, tests are running continuously. This adds its own challenges. For example, how do you prevent one test from interfering with another? (Changes from a test on the landing page might impact another test on the checkout page.) Automated deployment increases the chances of unintentional interference along these lines. Continuous tests also raise the issue of how heavily to weight older vs. newer results. Customer tastes do change over time, so you want to react to trends. But you don’t want to overreact to random variations or temporary situations. Of course, one solution is to avoid continuous tests altogether, and periodically start fresh with new tests instead. But if you’re trying to automate as much as possible, this defeats your purpose. The alternative is to look into what options the system provides to deal with these situations and assess whether they meet your needs.
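One common way to weight older vs. newer results is exponential decay; here is a sketch with an assumed 30-day half-life:

```python
def decayed_rate(observations, half_life_days):
    """Conversion rate with older results exponentially down-weighted,
    so the estimate tracks trends without discarding history entirely."""
    num = den = 0.0
    for age_days, conversions, visitors in observations:
        weight = 0.5 ** (age_days / half_life_days)
        num += weight * conversions
        den += weight * visitors
    return num / den

# (age in days, conversions, visitors): recent periods count more.
obs = [(0, 50, 1000), (30, 30, 1000), (60, 20, 1000)]
print(round(decayed_rate(obs, half_life_days=30), 4))  # 0.04
```

An unweighted average of the same data would be about 0.033; the decayed estimate leans toward the stronger recent performance. The half-life is the knob that trades responsiveness against overreaction to random variation.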

This is a much longer post than usual but it’s kind of a relaxed day at the office and this topic has (obviously) been on my mind recently. Happy Thanksgiving.

Tuesday, November 21, 2006

Sorting Out the Web Optimization Options

Everybody wants to get the best results from their Web site, and plenty of vendors are willing to help. I’ve been trying to make sense of the different Web site optimization vendors, and have tentatively decided they fall into four groups:

- Web analytics. These do log file or page beacon analysis to track page views by visitors. Examples are Coremetrics, Omniture, WebSideStory, and WebTrends. They basically can tell you how visitors are moving through your site, but then it’s up to you to figure out what to do about it. So far as I know, they lack formal testing capabilities other than reporting on tests you might set up separately.

- multi-variate testing. These systems let users define a set of elements to test, build an efficient test matrix that tries them in different combinations, execute the tests, and report on the results. Examples are Google Website Optimizer, Offermatica, Optimost, SiteSpect and Vertster. These systems serve the test content into user-designated page slots, which lets them control what each visitor sees. Their reports estimate the independent and combined impact of different test elements, and may go so far as to recommend an optimal combination of components. But it’s up to the user to apply the test results to production systems. [Since writing this I've learned that at least some vendors can automatically deploy the winning combination. You'll need to check with the individual vendors for details.]

- discrete choice models. These resemble multi-variate testing but use a different mathematical approach. They present different combinations of test elements to users, observe their behavior, and create predictive models with weights for the different categories of variables. This provides a level of abstraction that is unavailable in the multi-variate testing results, although I haven’t quite decided whether this really matters. So far as I can tell, only one vendor, Memetrics, has built choice models into a Web site testing system. (Others including Fair Isaac and MarketingNPV offer discrete choice testing via Web surveys.) Like the multi-variate systems, Memetrics controls the actual Web site content served in the tests. It apparently does have the capability to move winning rules into production.

- behavioral targeting. These systems monitor visitor behavior and serve each person the content most likely to meet business objectives, such as sales or conversions. Vendors include Certona, Kefta, Touch Clarity, and [x+1]; vendors with similar technology for ad serving include Accipiter, MediaPlex, and RevenueScience. These systems automatically build predictive models that select the most productive content for each visitor, refine the models as new results accumulate, and serve the recommended content. However, they test each component independently and can only test offers. This means they cannot answer questions about combinations of, say, headline and pricing, or which color or page layout is best.

Clearly these are very disparate tools. I’m listing them together because they all aim to help companies improve results from their Web sites, and all thus compete for the attention and budgets of marketers who must decide which projects to tackle first. I don’t know whether there’s a logical sequence in which they should be employed or some way to make them all work together. But clarifying the differences among them is a first step to making those judgments.

Monday, November 20, 2006

'Big Ideas' Must Be Rigorously Measured

Last Friday, I clipped a BusinessWeek article that listed a “set of integrated business disciplines” that create “exemplary customer experiences”. The disciplines include customer-facing “moments of truth”, well articulated brand values, close integration of technology and people, “co-creation” of experiences with customers, and an “ecosystem approach” to encompass all related products and services. (See “The Importance of Great Customer Experiences and the Best Ways to Deliver Them”, available here.) It’s a bit jargon-heavy for my taste, but does make the point that there’s more to Customer Experience Management than direct customer / employee interactions. Of course, that’s a key premise of our work at Client X Client. It was particularly on my mind because I had just written about a survey that seemed to equate customer experience with customer service (see my entry for November 16).

Later in the weekend, I spent some time researching Web optimization systems. As I dug deeper into the details of rigorous testing methods and precision delivery systems, the “integrated business disciplines” mentioned in the BusinessWeek piece began to look increasingly insubstantial. How could the concrete measurements of Web optimization vendors ever be applied to notions such as “moment of truth”? But, if the value of those notions can’t be measured, how can we expect managers to care about them?

The obvious answer is that “big ideas” really can’t be measured because they’re just too, well, big. (The implication, of course, is that anybody who even tries to measure them is impossibly small-minded). But that won't do. We know that the ideas’ value will in fact ultimately be measured in the only metric that really matters, which is long-term profit. And, since business profit is ultimately determined by customer values, we find ourselves facing yet again the core mission of the Customer Experience Matrix: bridging the gap between the soft and squishy notions of customer experience and the cold, hard measures of customer value.

My point is not that the Customer Experience Matrix, Client X Client and Yours Truly are all brilliant. It’s that working on “big ideas” of customer experience doesn’t exempt anyone from figuring out how those ideas will translate into actual business value. If anything, the bigger the idea, the more important it is to work through the business model that shows what makes it worth pursuing. Making, or even just testing, customer experience changes without such a model is simply irresponsible.

Friday, November 17, 2006

Ion Group Survey Stresses Importance of Service

“People consider personalized, intelligent and convenient contact the most important elements of added value a company can offer.”

Insight or cliché? It really depends on who said it and why.

In this case, the quote is from UK-based marketing services provider Ion Group. Ion sent a three-question email survey to 1,090 representative UK consumers and analyzed the results. We don't know how many responded. (Click here for the study.)

Just knowing this tells us something about the quote: it isn't derived from a very big or detailed project, so the results are at best directional. In fact, they are barely better than anecdotal.

But the real question is, “Most important elements of added value” compared to what? Ion, to its credit, published the actual survey questions. It asked “What aspects of companies that you buy from do you consider offer the most value to you?” and gave a list that can be paraphrased (with their relative ranking) as:

- friendly, knowledgeable staff (127)
- open/contactable 12-24 hours a day (124)
- company can access my information (116)
- well known brand (104)
- nationwide network of outlets (104)
- environmentally friendly policies (102)
- loyalty scheme (98)
- send offers I’m interested in (93)
- periodically check whether I’m happy with my purchase (85)
- endorsed by celebrities (42)

It’s an interesting list and interesting rankings. Celebrities don’t matter – cool! (But bear in mind that these are UK consumers, not Americans.) Well targeted offers aren’t very important either – hmm, maybe not to consumers but how about to the companies that sell them things? Still, there is a clear message here: the top three items all boil down to service.

But wait - did you notice something odd?

Ion’s list is limited mostly to service considerations. Yet value is typically considered a combination of quality, service and price. So Ion is really just ranking alternatives within the service domain.

Why would Ion Group limit its survey in this fashion? A look at their Web site gives a hint. They offer event marketing, mystery shopping, contact center, fulfillment, affinity partnerships, lists, loyalty programs, and similar services. In other words, product quality and price are rarely within their control. Their survey focuses on what they know.

Fair enough. Still, it’s easy to misinterpret Ion’s findings unless you dig into the survey details. I wish they had been more explicit about the scope.

Thursday, November 16, 2006

Foxes in the Henhouse: Entellium and SpringCM Advise on Hosted Service Agreements

Back on September 26, I criticized a paper from Entellium that I felt ignored the need to identify business requirements before looking at system functionality. It's uncomfortable to write negative things, but I did feel better when I saw a paper from Entellium itself make a similar point: “Unfortunately, most hosted CRM buyers spend 95 percent of their time focusing on the features and functions that a solution contains, and nearly no time on what happens after the sales contract is signed.” Exactly.

This quote is from “Buyer Beware: Tips for Getting the Best Agreement from Hosted Application Vendors” available here. The paper is concerned with contract terms, not service contracts. Still, it reinforces my point that too much attention is paid to features and functions to the exclusion of everything else.

In any case, I’m pleased to say I liked this Entellium paper much better. For one thing it’s short—just three pages—and gets right to the point. It proposes negotiations in four areas:

- technical support and training (“live training should be free to all your employees regardless of when you hired them.”)

- service level standards (make sure you have a written Service Level Agreement that guarantees 99.5% uptime and that you know your rights concerning data back-up and access)

- long term contracts (vendors should be willing to work month-to-month; any longer term contract should guarantee a service level)

- access to data (demand immediate and full data export at any time in a usable format).

A few of the details are apparently tailored to Entellium’s particular offering—for example, it seems a bit odd to focus specifically on “live, web-based training”. But these are the right issues to address.

If you want to look at these issues in more depth, hosted content management vendor SpringCM has a 12-page document “In Pursuit of P.R.A.I.S.E: Delivering on the Service Proposition” available here. P.R.A.I.S.E. is an acronym for:

- Performance
- Reliability
- Availability
- Information Stewardship (security and backup)
- Scalability
- Enterprise Dependability

This paper covers vendor evaluation as well as contract negotiating points, so its scope is broader than the Entellium paper. It provides specific questions to ask and the answers to listen for, which is very useful. Although SpringCM is a content management specialist, the recommendations themselves are general enough to apply to CRM and other hosted systems as well.

Wednesday, November 15, 2006

More Thoughts on Visualizing the Customer Experience

I did end up creating a version of the Matrix demonstration system I described yesterday, using a very neat tool from Business Objects called Crystal Xcelsius. If you’d like a look, send me an email. You’ll get an interactive Matrix embedded within an Adobe PDF.

The demonstration does what I wanted, but I’m not pleased with the results. I think the problem is that it violates the central Matrix promise of displaying information on a single page. Sliding through time periods, like frames in a movie, doesn’t show relationships among different interactions at a glance. The demonstration system attempts to overcome this by showing current and future interactions in each cell. But because the future interactions could occur tomorrow or next year, this still doesn’t give a meaningful representation of the relationships among the events.

I’m toying with an alternative approach similar to the “swim lanes” frequently used to diagram business processes. We’d have to make time period an explicit dimension of the new Matrix, and let the other dimension be either channel, contact category, or both combined. (The combination could be treated by defining a column or “lane” for each contact category, and using different colored bubbles within each lane to represent different channels.) I don’t know whether I’ll have time to actually build a sample version of this and can’t quite prejudge whether it will work: it sounds like it might be too complicated to understand at a glance.

Of course, whether any solution “works” depends on the goal. Client X Client CEO Michael Hoffman was actually happy with the version I created yesterday, since he only wanted to illustrate the point that it’s possible to predict what customers at one stage in their lifecycle are likely to do next. The details of timing are not important in that context.

We’ve also been discussing whether the Matrix should illustrate contacts with a single individual (presumably an ‘average’ customer or segment member) or should show contacts for a group of customers. In case that distinction isn’t clear: following a single customer might show one store visit, one purchase and one service call, while a group of fifty customers might make fifty store visits, ten purchases and three service calls. Lifetime value results would be dramatically different in the two cases.

I’ve also toyed with a display that adjusts the contact probabilities based on the selected time period: to continue the previous example, the probability of any one customer making a service call is 3 in 50 at the time of a store visit, but 3 in 10 at the time of a purchase. Decisions made at different points in time need to reflect the information available at that time. Adjusting the probabilities in the Matrix as the time period changes would illustrate this nicely.
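The arithmetic behind that example is simple conditional probability:

```python
# From the example above: 50 store visits, 10 purchases, 3 service calls.
visits, purchases, service_calls = 50, 10, 3

# Probability that a customer goes on to make a service call, conditioned
# on the lifecycle stage they have reached:
p_at_visit = service_calls / visits        # 3 in 50
p_at_purchase = service_calls / purchases  # 3 in 10

print(p_at_visit, p_at_purchase)  # 0.06 0.3
```

The same three service calls look five times more likely once a customer has advanced from visit to purchase, which is exactly what a time-period-aware Matrix display would show.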

Note that all these different approaches could be derived from the same database of transactions classified by channel, contact type, and time period.

Obviously we can’t pursue all these paths, but it’s worth listing a few just as a reminder that there are many options and we need to consciously choose the ones that make sense for a particular situation.