Tuesday’s post on matching technology prompted a contact from Zoomix, a data quality software vendor just entering the U.S. market.
Zoomix makes the best case I’ve seen for the use of statistical methods without predefined reference data within data quality systems. Basically, they apply machine learning techniques that let users train systems for specific data quality applications. Part of this training involves inferring general rules for matching and data cleansing. That much is fairly standard. But Zoomix also captures specific information—such as the equivalence of terms in different languages—and stores that as well. In other words, Zoomix builds its own reference data by capturing user decisions, rather than relying on external reference databases. (Zoomix isn’t fanatical about its approach: users can also import reference data if they wish.)
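The learning loop described above can be sketched as a tiny self-built reference table: each equivalence a user confirms is stored, so later matching can reuse it without an external database. Everything here (the class name, the structure, the canonicalization rule) is my own illustrative guess, not Zoomix’s actual design.

```python
class LearnedReference:
    """Toy self-built reference data: learns equivalences from user decisions."""

    def __init__(self):
        # canonical term -> set of terms learned to be equivalent to it
        self.equivalents = {}

    def record_decision(self, term_a, term_b):
        """Store a user-confirmed equivalence (e.g. 'colour' ~ 'color')."""
        canon = min(term_a, term_b)  # arbitrary but stable canonical choice
        group = self.equivalents.setdefault(canon, {canon})
        group.add(term_a)
        group.add(term_b)

    def are_equivalent(self, term_a, term_b):
        """True if any learned group contains both terms."""
        return any(term_a in g and term_b in g
                   for g in self.equivalents.values())


ref = LearnedReference()
ref.record_decision("colour", "color")   # a user confirms these match
print(ref.are_equivalent("color", "colour"))  # True
print(ref.are_equivalent("color", "red"))     # False
```

The point of the sketch is only that the “reference data” accumulates as a side effect of normal use, which is why the approach isn’t limited to domains where external reference files already exist.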
I suppose self-generated reference data is still reference data, but the larger point is that Zoomix’s approach means it can be applied to any type of data, not just the traditional name and address information for which standard reference files exist. This gives Zoomix the flexibility traditionally associated with purely statistical solutions. Similarly, Zoomix self-generates rules to identify concepts, extract attributes, and build classification schemes. All these capabilities are traditionally associated with external knowledge such as grammars and taxonomies.
These capabilities make Zoomix sound more like text analysis software—think Autonomy, ClearForest (just bought by Reuters) and Inxight (just purchased by BusinessObjects)—than traditional data quality or matching solutions. This conveniently supports the point of Tuesday’s post, that those two worlds are closer than commonly realized (by me, at least).
Zoomix’s approach combines the flexibility of statistical solutions with the power of domain-specific reference data. I might change my mind after I think about it more deeply, but my first impression is it could mark a significant improvement in data quality techniques.
Thursday, May 31, 2007
Wednesday, May 30, 2007
SAS Survey on Marketing Measurements Gives Odd Results
SAS Institute recently released a survey of senior executives’ attitudes towards marketing performance measurement. The primary result was what you’d expect: depending on the medium, just 7% to 14% of executives rate their marketing measurements “very effective”.
But the details were perplexing. Respondents found direct mail to be significantly less well measured than other media: just 33% said direct mail measurements were either “very” or “somewhat” effective, compared with 50% to 58% for advertising, collateral, public relations and events.
Since most marketers would consider direct mail to be the most rather than least measurable of channels, you have to wonder what caused this result. It may be that the people surveyed don’t use much direct mail—39% said they either don’t measure direct mail effectiveness or don’t know whether they measure it, roughly twice as many as the 17% to 22% who gave the same answers for other media. According to the study, the survey was conducted among “large and midsize companies across a range of industries,” which doesn’t tell us much about who they were but does suggest few were direct marketers.
The other striking result was that respondents ranked “operations/supply chain management” in seventh place (43%) on their list of divisions that play a vital role in achieving strategic goals. Sales (64%), customer service (61%), strategy/planning (59%), marketing (55%) and even product development (54%) and IT (45%) all ranked higher. Only finance (39%) and human resources (20%) were deemed less important.
Personally I would rank operations pretty much as number 1 when it comes to achieving goals, so it really surprised me to see such different results. Perhaps the managers in the survey really believe they are in commodity businesses, or perhaps they just have a narrow definition of “operations” that excludes most customer-facing activities. (Note that sales and customer service, which certainly are customer-facing, ranked numbers 1 and 2.) Still, any way you slice it, operational experiences make up the bulk of customer interactions and largely determine future behavior. It’s just hard to imagine managers not recognizing that.
Another way to look at this is that my reactions are classic examples of people refusing to accept information that contradicts their pre-existing beliefs. I’ll concede this is a possibility: maybe direct mail really is least measurable, and maybe operations are not all that strategic. Or at least maybe most managers really believe those things, whether or not they’re correct. But before I accept such odd results, I’d really want to know more about how the questions were asked, who answered them, and whether other research leads to similar conclusions. I certainly wouldn’t spend company money on the assumption that they’re correct.
Tuesday, May 29, 2007
Convergence of Matching and Search
I’ve been looking at name and address matching software recently. This is a field I’d stopped following closely because the action has moved to the higher planes of Customer Data Integration and Master Data Management. If there’s any trend in the matching field, it’s the development of generic string matching techniques that can be used on any data, not just names and addresses.
Like all matching engines, these return a score indicating how closely two strings resemble each other. In name/address matching, the scoring method has usually been pretty simple, at most a “string distance” calculation counting the number of differences between one string and another. All the real work went into the data parsing and standardization used to split a record into its components (first name, last name, house number, street name, etc.) and to remove variations such as alternate spellings and nicknames vs. formal names.
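The “string distance” calculation mentioned above is typically Levenshtein edit distance: the number of single-character insertions, deletions, and substitutions needed to turn one string into the other. A minimal version, for illustration:

```python
def string_distance(a, b):
    """Levenshtein edit distance between strings a and b, computed
    row by row to keep memory proportional to len(b)."""
    prev = list(range(len(b) + 1))  # distances from "" to each prefix of b
    for i, ca in enumerate(a, 1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # delete ca
                            curr[j - 1] + 1,             # insert cb
                            prev[j - 1] + (ca != cb)))   # substitute (or keep)
        prev = curr
    return prev[-1]


print(string_distance("Smith", "Smyth"))  # 1 (one substitution)
print(string_distance("Jon", "John"))     # 1 (one insertion)
```

A score this simple is exactly why the parsing and standardization steps carried so much of the load: the distance between “Peggy” and “Margaret” is large no matter how you count edits.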
The newer approaches—I’m thinking specifically of Netrics, although Choicemaker and SAS DataFlux also qualify to some degree—apply more advanced matching to the strings themselves. This means they have less need for parsing and standardization.
Let’s acknowledge that string matching can never find matches identified by standardization. “Peggy” and “Margaret” are simply not similar strings, so the only way a computer can know one is a nickname for the other is if a reference database makes the connection. But advanced string matching can look for relationships among segments within strings, such as words in different sequences, that make parsing less essential. Since parsing is both computationally intensive and itself less than perfectly accurate, this offers definite advantages.
These advantages are particularly evident once you move beyond the highly structured world of names and addresses to other types of data. Here, external information is less likely to be available to help with standardization, so the ability to uncover subtle relationships between different strings becomes more important. Actually, “subtle” isn’t quite the right word here: any human would recognize that “Mary Smith Gallagher” and “Gallagher Mary” are probably the same person, while a simple matching algorithm would see almost no similarity.
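To see why segment-level matching handles reordered names, here is a deliberately simple order-insensitive comparison (Jaccard similarity over word tokens). This is an illustration of the general idea, not any vendor’s actual algorithm.

```python
def token_similarity(a, b):
    """Order-insensitive similarity: shared word tokens divided by
    total distinct word tokens (Jaccard similarity)."""
    ta = set(a.lower().split())
    tb = set(b.lower().split())
    return len(ta & tb) / len(ta | tb)


# A character-level comparison sees little similarity here, but
# token-level matching finds the shared name parts regardless of order:
# two of the three distinct tokens match, a score of about 0.67.
print(token_similarity("Mary Smith Gallagher", "Gallagher Mary"))
```

Real engines weight tokens by position, rarity, and partial character-level similarity rather than requiring exact token matches, but even this crude version captures what a pure edit-distance score misses.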
What’s interesting is this sort of matching applies to the wider world of search at least as well as to the traditional world of data quality. Most discussions of search are warped by the gravitational field of Internet search engines, which leads them to focus on finding the most popular or authoritative content relating to a query string. But for other applications, such as text search, string similarity is a primary concern.
As with name and address matching, text search often contains a large component of parsing and standardization, which the text search people would label as “semantic analysis”. Again, this indisputably adds important value. But simple misspellings and partial information abound in search inputs, and often in the data being searched as well. An engine that cannot overcome such imperfections will be at best partially effective. This is where more sophisticated matching methods can help.
In short, I’m proposing there is some useful opportunity for cross-fertilization between matching software and search vendors. Not perhaps the most brilliant insight ever, but worth mentioning nevertheless.
Friday, May 25, 2007
On Second Thought, Maybe Web Advertisers Will Dominate Web Marketing Systems
Wednesday’s post about online marketing systems generated some interesting discussion. You’ll recall it drew a picture of several systems (Web ads, mobile messages, email, paid search, unpaid search) feeding traffic to a Web site, which in turn was served by several more systems (behavioral targeting, site optimization, Web analytics, real time interaction management) that manage customer treatments. I speculated that the most likely candidates for combining all these systems were either the Web platform vendors or enterprise marketing systems.
But as I reflect on the really big recent acquisitions (Google/DoubleClick, Microsoft/aQuantive, WPP/24/7 Real Media), I wonder whether the real heavyweights will be the Web advertising vendors. They’re not software companies in the normal sense, but their businesses are based heavily on technology, both for integrating with Web sites (relatively simple, I think) and for matching the right ads to the right viewers (very complicated). What makes them inherently powerful is they earn commissions on an external revenue stream, which most software companies do not. This gives them the financial resources and motivation to extend their capabilities into the other areas of online marketing. Offering a more complete set of ways to serve advertisers will both be a competitive advantage and make it harder for customers to go elsewhere. That’s always an appealing combination for a vendor.
This bodes ill for the many stand-alone online marketing firms. A few lucky ones will be acquired, but the rest may find themselves frozen out of a tightly integrated marketing infrastructure. I'd like to think marketers would have the vision to insist on openness to prevent that, since this is in the marketers' long-term interest. But the path-of-least-resistance appeal of built-in tools is one of the most reliable motivators in all of technology, and it's unlikely to fail this time around.
Labels:
marketing software
Thursday, May 24, 2007
Accenture Study Underlines Need to Measure Customer Service Technology Impact
Accenture released an intriguing study (registration required) earlier this week contrasting the views of high-tech executives and their customers regarding after-sales support.
Perhaps the most substantive finding was that while 74% of the executives who implemented new customer self-service systems believed they now had higher customer satisfaction, only 14% of their customers rated their experience as “much better”. Twenty-two percent actually said service had gotten worse.
This is intriguing for two reasons. First, it shows that customers just don’t find service technology all that helpful. Specifically regarding online self-service, only 11% said it was a priority. (The highest priorities were solving problems completely [69%] and quickly [65%].) Maybe that isn’t really a surprise—plenty of people don’t like self-service tools, particularly for technical issues where a simple FAQ is unlikely to be helpful. I suspect most companies really know this, but implement them anyway to save money.
Which brings us to the second point. That companies think satisfaction has increased even when it hasn’t suggests they aren’t bothering to measure it. I suppose this isn’t really a surprise either, but the optimist in me never quite wants to accept what a truly miserable job most firms do at customer management and how little they truly care.
The same issue appears in another gap uncovered by the study: 75% of executives feel they provide “above average” service while 78% of customers feel their service is “at or below average”. Yes, humans have a well-known tendency to overestimate themselves, but such delusions can only persist if they don’t bother to measure actual performance. Apparently, the great majority of executives aren’t bothering.
Taken together, these two factors (customer dislike of self-service, and company failure to measure results) hint that investment in self-service systems may actually be value-destroying. If the systems make customers feel service has gotten worse, they will be more likely to leave, and if companies don’t measure this, they’ll never know about it. In addition, self-service systems may not even save money, since people must eventually speak to a human to get their problems resolved anyway. (To the first point: the press release accompanying the study states that 81% of customers who rate their service satisfaction as “below average” plan to purchase from a different supplier in the future. To the second point, the study reports that 64% of customers had to access service channels two or more times to resolve their issue.)
All of this just reinforces the conventional wisdom that you have to measure the impact of a CRM project, and the only measure that matters is the impact on customer behavior (dare I mention…lifetime value?) But since so many people keep ignoring this most basic of principles, I guess it needs repeating.
Wednesday, May 23, 2007
Online Marketing Systems Are Still Very Fragmented
What with all the recent acquisitions in the digital marketing industry, I thought I’d draw a little diagram of all the components needed for a complete solution. The results were a surprise.
It’s not that I didn’t know what the pieces were. I’ve written about pretty much every variety of online marketing software here or elsewhere and have reviewed dozens of products in depth. But somehow I had assumed that the more integrated vendors had already assembled a fairly complete package. Now that I’m staring at the list of components, I see how far we have to go.
My diagram is divided into two main areas: traffic generation systems that lead visitors to a Web site, and visitor treatment systems that control what happens once people get there. The Web site itself sits in the middle.
The traffic generation side includes:
- email campaign systems like Responsys and Silverpop;
- search engine marketing like Efficient Frontier and Did-It;
- online advertising like Doubleclick and Tacoda;
- search engine optimization like Apex Pacific and SEO Elite; and
- mobile advertising like Knotice and Enpocket.
(There are many more vendors in each category; these are just top-of-mind examples, and not necessarily the market leaders. If you’re familiar with these systems, you’ll immediately notice that search engine optimization is a world of $150 PC software, while all the other categories are dominated by large service providers. Not sure why this is, or if I’ve just missed something.)
The visitor treatment side includes:
- behavioral targeting systems like [x + 1] and Certona (this area overlaps heavily with online ad networks, which use similar technology to target ads they place on other people’s Web sites)
- site optimization and personalization like Offermatica and Optimost;
- Web analytics like Coremetrics and Webtrends
- real time interaction management like Infor Epiphany and Chordiant (and, arguably, a slew of online customer service systems).
The Web site that sits in between has its own set of components. These include Web application servers like IBM Websphere, Web development tools like Adobe Dreamweaver, and Web content management tools like Vignette. Although their focus is much broader than just marketing, they certainly support marketing systems and many include marketing functions that compete with the specialized marketing tools.
If you compare this list of applications with the handful of seemingly integrated products, you’ll see that no product comes close to covering all the bases. Demand generation systems like Vtrenz, Eloqua and Manticore combine email with some Web page creation and analytics. Some of the general purpose campaign managers like Unica and SmartFocus combine cross-channel customer management with email, interaction management and analytics. A few of the recent acquisitions (Acxiom Impact / Kefta, Omniture / Touch Clarity, Silverpop / Vtrenz) marry particular pairs of capabilities. Probably the most complete offerings are from platform vendors like Websphere, although those products are so sprawling it’s often hard to understand their full scope.
What this all means is the wave of consolidations among online marketing vendors has just begun. Moreover, online marketing itself is just one piece of marketing in general, so even a complete online marketing system could be trumped by an enterprise marketing suite. Viewed from another angle, online marketing is also just one component of a total online platform. So the enterprise marketing vendors and the Web platform vendors will find themselves competing as well—both for acquisitions and clients.
Interesting days are ahead.
Labels:
marketing software,
web analytics
Tuesday, May 22, 2007
Business Objects Reinvents Itself
I receive a great many press releases from Business Objects. Mostly I swat them away like gnats, on the theory that Business Objects is Business Objects and the details don’t really matter. But they did catch my eye this morning with their announced purchase agreement for Inxight, one of the leading text analysis software vendors.
What made this intriguing to me was that I’ve always thought of Business Objects as providing reporting (that is, access to data) as opposed to the technically sophisticated analytics of a SAS or SPSS. Its high profile acquisitions such as Crystal Reports and the delightful Xcelsius product reflect this. Its backward integration into data preparation (Acta, acquired in 2002) and address processing (Postalsoft, acquired in 2006) still fits this mold. So do its moves into enterprise performance management, dashboards and visualization. It’s all about getting, cleaning and distributing data.
But text analysis is different. It involves complex algorithms to interpret data. It’s the province of PhDs and rocket scientists. Yes, the connection with Business Objects is obvious enough—text analysis is another form of data preparation, another way to gather input for the reporting tools. But thinking of it that way is like hooking up a jet engine to a donkey cart. The technology of the engine is much more advanced than the vehicle it’s pulling.
Anyway, all this led me to take a closer look at Business Objects itself. Turns out they’ve been launching a whole new brand identity (in case you also missed that memo, check out the May 2 press release). I could do without the tagline (“Let there be light”—unoriginal yet meaningless) but the general notion seems to be “intelligent information”, which to me means “analytics”.
Viewed from this perspective, the Inxight acquisition makes more sense. Business Objects is repositioning itself away from the commodity businesses of data preparation and reporting to the much sexier region of analytics—the territory of SAS, SPSS, Web analytics, process optimization, Fair Isaac, and others. Some of its other activities, such as an online “data store” and advanced profitability analysis, make more sense in this context. It’s a huge switch for a company like Business Objects and there’s no guarantee they’ll succeed. But it does make sense. Good luck to them.
Monday, May 21, 2007
Everest Software Balances Hosted and On-Demand
The public debate between on-premise and on-demand (hosted) software has largely been won by the on-demand side, particularly where small businesses are concerned. Faster, cheaper and easier deployment seems an overwhelming advantage, even though on-demand long-term costs may be higher, integration more difficult, and functionality not quite as rich as for on-premise systems.
And yet—plenty of on-premise software is still sold. In fact, although I haven’t seen actual figures, I suspect on-premise continues to hold a larger share of the market. Small businesses are reluctant to make a change, so they’re likely to stick with their existing systems and incremental upgrades as long as possible. In addition, many small business owners (and here I speak from personal experience) prefer to have as much control as possible over their business operations, and find the notion of such heavy reliance on a remote system to be distasteful.
Everest Software produced a good white paper summarizing the issues, “On-Demand vs. On-Premise Software Deployment”, available here (registration required). Everest provides enterprise management software (everything from CRM to ecommerce to inventory and accounting) for small businesses. It offers both on-premise and on-demand options, so its paper is free to present a balanced view of the choices.
I have just two factual quibbles with the paper. One is that it assumes on-premise systems have richer user interfaces: although this was true in early implementations, today technologies such as AJAX let even purely browser-based hosted systems provide pretty much the same interface as systems that install local software. The other is the statement, repeated twice, that “after a period of three to five years, many businesses achieve a lower total cost of ownership with On-Premise software deployments (exclusive of personnel costs).” This might be literally correct, but personnel costs can’t really be excluded. Once you count them in, on-demand systems are very likely cheaper in the long run as well as the short run for many small businesses.
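The personnel-cost point is easy to see with some arithmetic. Here is a quick Python sketch; all the dollar figures are invented for illustration, not drawn from the Everest paper:

```python
# Rough total-cost-of-ownership comparison between deployment models.
# Every figure below is hypothetical, chosen only to illustrate the point.

def tco(upfront, annual_fees, annual_personnel, years):
    """Cumulative cost of a deployment over a number of years."""
    return upfront + (annual_fees + annual_personnel) * years

# On-premise: large license up front, modest maintenance, real IT staff time.
on_premise = tco(upfront=50_000, annual_fees=10_000, annual_personnel=15_000, years=5)

# On-demand: no license, higher subscription, minimal internal IT effort.
on_demand = tco(upfront=0, annual_fees=24_000, annual_personnel=2_000, years=5)

print(on_premise, on_demand)
```

With these (made-up) numbers, on-premise only looks cheaper over five years if you set `annual_personnel` to zero, which is exactly the exclusion the white paper relies on.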
In any event, long-run total cost of ownership is unlikely to be a key issue for most small businesses. Capital tends to be scarce at such firms, so the smaller up-front investment of on-demand is a more compelling consideration. Again, I suspect the real obstacles to on-demand are the emotional one of control and the practical one of integration. Of those, integration is really the key. This won’t be solved by publishing APIs, as some hosted vendors seem to hope, because most small businesses lack the technical resources needed to take advantage of APIs. We’ll see whether simpler integration capabilities become available, similar to “mash up” features now available on some consumer Web sites. If I were a hosted software developer, that’s where I’d put my energy.
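To make the API objection concrete, here is a rough Python sketch of what even a minimal “pull new orders into my accounting file” integration involves. The endpoint, auth scheme, and field names are all hypothetical, not from any real hosted vendor:

```python
# A sketch of what "just use the API" actually demands of a small business.
# The endpoint URL, bearer-token auth, and field names are all hypothetical.
import json
import urllib.request

def map_order(remote_order):
    """Translate a remote order record into the local schema."""
    return {"id": remote_order["id"], "total": remote_order["amount"]}

def sync_orders(api_base, token):
    """Pull new orders from a hosted system for local import."""
    req = urllib.request.Request(
        f"{api_base}/orders?status=new",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        orders = json.load(resp)
    # The field mapping is the easy part; pagination, retries, error
    # handling, and schema changes are where the real work lives.
    return [map_order(o) for o in orders]
```

Nothing here is hard for a programmer, which is precisely the problem: most small businesses don’t have one, so publishing the API alone solves nothing for them.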
Friday, May 18, 2007
Manticore Offers A Lower Cost Alternative For Online Lead Generation
Somehow I received a copy of “Increasing Revenue Through Automated Demand Generation” (registration required), the kind of title that sends chills up my spine. The paper, from Manticore Technology, recommends five best practices for business marketing: (1) define a unified marketing and sales pipeline; (2) deploy an integrated marketing and sales platform; (3) measure pipeline activity; (4) automate lead nurturing; and (5) focus on the top of the funnel. There’s a bit of special pleading in items (2) and (4), and item (5) seems a bit arbitrary (doesn’t it really depend on where you get the greatest return on investment?) But, still, this is perfectly reasonable stuff.
Manticore itself is pretty interesting. As the recommendations suggest, it provides a hosted system for integrated lead generation and management. The company positions itself directly against Eloqua, and indeed is quite similar. That is, it can generate outbound emails, build customized landing pages for email response, track behavior of Web site visitors, and execute multi-step lead nurturing campaigns with logical branching depending on prospect behavior. It can also track the sources of Web site visitors and manage pay per click Web advertising campaigns. The main advantages Manticore claims over Eloqua are lower cost and easier, faster implementation. (Manticore doesn’t mention this, but Eloqua also has some capabilities that Manticore lacks, including direct mail, event management, and online chat. Whether that is worth the extra money will depend on the situation.) Both products integrate with salesforce.com (and Eloqua with several others).
Manticore also has something called “Prospect Builder” which may actually attempt to identify anonymous visitors by looking up IP addresses and connecting email addresses with company domains. I can’t quite tell from the Web site whether this is what actually happens, but if it does, it’s a nice little bonus. Otherwise, what Prospect Builder clearly does is maintain profiles of visitor behavior (emails received, Web pages viewed, etc.) and link these to profiles in salesforce.com.
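If Prospect Builder really does identify anonymous visitors this way, the mechanics might look roughly like the Python sketch below. To be clear, Manticore’s actual implementation is undocumented; this is purely illustrative, and the domain logic is simplified (it ignores shared hosting, consumer ISPs, and multi-part TLDs like .co.uk):

```python
# A guess at how a "Prospect Builder"-style lookup might work.
# Purely illustrative; the real mechanism is not documented.
import socket

def registered_domain(host):
    """Keep only the registered domain: mail5.acme.com -> acme.com."""
    return ".".join(host.split(".")[-2:])

def company_from_ip(ip):
    """Reverse-resolve a visitor's IP to a company domain, if DNS allows."""
    try:
        host, _, _ = socket.gethostbyaddr(ip)
        return registered_domain(host)
    except (socket.herror, socket.gaierror):
        return None  # many IPs resolve to ISPs or nothing useful

def matches_contact(email, company_domain):
    """Link an anonymous visitor to a known contact via the email domain."""
    return email.lower().split("@")[-1] == company_domain
```

Even this crude version shows why it would be only a “nice little bonus”: reverse lookups fail or mislead often enough that it could supplement, not replace, the profile-building Prospect Builder clearly does.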
I’m glad to know about Manticore because people do occasionally ask about this sort of system and balk at the cost of Eloqua and even-more-expensive Vtrenz (just purchased by Silverpop; see last week's post). It’s good to have a lower cost alternative.
Thursday, May 17, 2007
Aberdeen Study Confirms Value of LTV Measures
I had truly intended to give lifetime value a rest, but then an email arrived from Aberdeen Group asking me to participate in one of their surveys on “customer value management”. You can fill it out too by clicking here and earn a free copy of the results. They’re asking all the right questions, although I wonder how many people can really answer them accurately.
Aberdeen’s “research preview” for the study certainly is pro-LTV. And I quote:
“Recent Aberdeen research indicates that Best-in-Class organizations utilize “customer lifetime value” metrics in modeling and predicting which mix of customers, products, sales, marketing or media channels will help them to best achieve revenue targets and goals. In contrast, average and lagging companies are apt to take a more short-term, transactional approach to marketing strategic planning. Specifically, the Best-in-Class are more than twice as likely to achieve a greater than 15% improvement in annual customer retention rates. The Best-in-Class also outperform average and laggard companies in annual increase in revenues and in achieving a return on marketing investment (ROMI).”
I guess that makes me feel better, although it doesn't really change the fact that most people don't want to listen.
Labels:
customer metrics,
lifetime value
Wednesday, May 16, 2007
AOL Enters Mobile Advertising with Third Screen Acquisition
More news from the mobile marketing front: AOL yesterday announced it had acquired Third Screen Media, which will operate as part of its Advertising.com subsidiary. Third Screen runs a mobile advertising network and provides tools for advertisers, publishers and carriers to research, place, administer and report on mobile ads.
Basically this illustrates the continued convergence of mobile with other digital advertising. It doesn’t explicitly address the unique capabilities offered by mobile—individual (tied to a person), local (tied to current physical location), continuous (always-on). Nor does it address interactions across those channels, such as using email, Web and mobile in the same campaign. In other words, it’s about advertising, not messaging.
Ultimately, messaging is likely to be more effective than advertising. By messaging, I mean two-way interactions with customers. Messaging is more work for marketers: they have to develop useful programs and keep them running, and probably change them fairly frequently to keep customers involved. It might help to look for messaging programs that add real value (e.g., price alerts, online coupons) rather than simple marketing promotions that attract participants largely by being entertaining.
Advertising is still needed to find customers to join messaging programs. So the two are complementary, not competitive. But it would be easy for marketers to consider advertising enough by itself. That would be a huge waste of the potential of the mobile medium.
Labels:
mobile marketing
Tuesday, May 15, 2007
Are Visual Sciences and WebSideStory Really the Same Company? (As a matter of fact, yes.)
Last week, WebSideStory announced it was going to become part of the Visual Sciences brand. (The two companies merged in February 2006 but had retained separate identities.)
The general theme of the combined business is “real time analytics”. This is what Visual Sciences has always done, so far as I can recall. It’s more of a departure for WebSideStory, which has its roots in the batch-oriented world of Web log analysis.
But what’s really intriguing is the applications WebSideStory has developed. One is a search system that helps users navigate within a site. Another provides Web content management. A third provides keyword bid management.
Those applications may sound barely relevant, but all are enriched with analytics in ways that make perfect sense. The search system uses analytics to help infer user interests and also lets users control results so the users are shown items that meet business needs. Web content management also includes functions that let business objectives influence the content presented to visitors. Keyword bid management is tightly integrated with subsequent site behavior—conversions and so on—so value can be optimized beyond the cost per click.
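The bid-management idea is worth a small illustration. If bids are tied to downstream conversions, the value of a click, not just its cost, sets the bid ceiling. Here is a Python sketch with invented numbers (this is the general logic, not Visual Sciences’ actual algorithm):

```python
# Why optimizing beyond cost-per-click matters: downstream conversion
# behavior sets the bid ceiling. All figures are invented for illustration.

def max_bid(conversion_rate, value_per_conversion, target_margin=0.0):
    """Highest cost-per-click that still meets the margin target."""
    return conversion_rate * value_per_conversion * (1 - target_margin)

# Keyword A converts rarely but to large orders; keyword B the reverse.
bid_a = max_bid(conversion_rate=0.01, value_per_conversion=400)  # about $4.00
bid_b = max_bid(conversion_rate=0.05, value_per_conversion=30)   # about $1.50
```

A pure cost-per-click tool would treat these keywords alike; one tied to site behavior knows keyword A is worth more than twice the bid of keyword B.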
Maybe this is just good packaging, but it does seem to me that Visual Sciences has done something pretty clever here: rather than treating analytics and targeting as independent disciplines that are somehow applied to day-to-day Web operations, it has built analytics directly into the operational functions. Given the choice between plain content management and analytics-enhanced content management, why would anyone not choose the latter?
I haven’t really dug into these applications, so my reaction is purely superficial. All I know is they sound attractive. But even this is impressive at a time when so many online vendors are expanding their product lines through acquisitions that seem to have little strategic rationale beyond generally expanding the company footprint.
Labels:
analytics tools,
marketing software,
web analytics
Monday, May 14, 2007
If Lifetime Value Falls and Nobody Measures It, Has It Really Gone Down?
I’m starting to rethink my focus on lifetime value as the key to customer centricity. I’m still fully convinced of my position: LTV is the essential guide for customer level management. But that message just doesn’t seem to resonate, even among managers who have accepted customer centricity as their goal.
I haven’t quite figured out why this is. The specific objections—lack of data, no practical applications, need for short term results—all have responses that I find convincing. Others may not, but I sense the problem is less specific objections than a general sense that LTV is irrelevant to day-to-day needs. People get much more excited talking about a specific online marketing approach or new analytical tool. They don’t see lack of measurement systems as a problem, and therefore aren’t interested in LTV as a solution.
This makes me sad, since people who fail to address this fundamental issue can never fully succeed. But if people want to focus on immediate concerns, I can’t stop them. Perhaps the best I can do is to ensure any short-term solutions are compatible with LTV measurement, so it will be available when people decide they need it.
Friday, May 11, 2007
Silverpop Acquires Vtrenz
Last Tuesday, email marketer Silverpop announced it was acquiring Vtrenz, which provides an integrated email / Web marketing suite.
This is the third recent acquisition by an email service provider—the others being Kefta by Acxiom and Loyalty Matrix by Responsys. I think it’s the most intriguing of the three, because the other two primarily added analytical capabilities, while Vtrenz adds a strong Web marketing component. This makes Vtrenz more of an integrated online marketing solution than the other two—or, at least, it will if and when the Vtrenz technology is actually linked with Silverpop. (For the moment, from the press release, it appears that Vtrenz will remain a separate operation.)
It’s pretty clear to me that the major email providers will all need to expand in this direction. This is a natural extension for them and part of the inevitable consolidation within the online marketing arena. It’s certainly a good thing for customers, who will benefit from working with fewer systems while creating more tightly integrated Web and email campaigns.
Tuesday, May 08, 2007
Chase Bank Does It All Wrong
I could easily fill several blogs with personal tales of poor customer experiences, but don’t generally feel it adds any value. Today I’ll make an exception, only because the story illustrates how incredibly inept even a presumably sophisticated organization can be. Plus I’m really annoyed.
So, I walk into my local Chase Bank branch (yes, we’re naming names here) around 11:30 a.m. to make a quick deposit when there won’t be a line. Not a customer in sight. But I’m intercepted by an unfamiliar young woman at the service desk who offers to “help” me and then tells—not asks, but tells—me to go sit and talk to “Lou” the personal banker while she handles my transaction. She promises this won’t add any time, but of course it’s already taken longer than it would to just do the deposit.
Now I have to walk across the branch and wait while she pulls aside a rope barrier (no idea why that’s there) to introduce me to Lou. Since I’m obviously annoyed, she assures me they’re “doing this with everyone,” presumably to let me know I’m not being singled out as a problem, but also (a) ending any illusion I might retain that they recognize me as an important customer and (b) confirming they are doing this without any thought.
Lou also orders me to sit down and proceeds to verify my mailing address (which of course is correct since I’m receiving my statements) and my phone number (which is also correct since they called me two weeks ago to “schedule an appointment” with a personal banker on Tuesday or Thursday, since those were the days that worked for him). Lou then mentions some offer paying me a bounty on new accounts.
At this point I ask Lou if he realizes just how poorly this is being done. Obviously they have correct addresses: yes, he says, but they do get many corrected telephone numbers. He apologizes if it was “a poor customer experience,” the young woman reappears with my deposit receipt, and that’s that.
Ok, so what did they do so poorly? First and foremost, they interrupted me when I was intent on making a simple transaction. The one thing I know for sure about banking (courtesy of Paco Underhill’s Why We Buy) is that people go into a branch to get something done and don’t want to hear about anything else until that’s taken care of. Pardon the cliche, but getting between me and that open teller really was like getting between a mother bear and her cub.
Second, they took a short transaction and made it longer.
Third, they did this without asking whether I had the time.
Fourth, they told me to sit down. There’s a whole power / authority thing going on here. (I remained standing the whole time.)
Fifth, they asked me really stupid questions—address and phone. They already have both and know or should know they’re correct.
Sixth, it was obviously all about them. I know they’re never going to use my phone number for anything helpful because the last time there was a problem with my account, they sent me a notice by mail. In my seven years as a customer, I can’t remember a phone call from them about anything except marketing.
Seventh, they missed their chance. I actually do need some new services, which would be obvious from a casual glance at my accounts. (Lou was a middle aged guy, probably an experienced banker, who could almost certainly recognize this if he tried. Instead he verified my address.) Ironically, if Lou had simply asked "Is there anything we can do for you?", he might have gotten some business out of me, despite all the other mistakes.
On reflection, I realize the entire transaction was the exact equivalent of a telemarketing call: an interruption of my business done obviously, purely and thoughtlessly for someone else’s purposes. Except of course, it was worse: with a telemarketer you can hang up, while here I had no choice but to participate. No wonder I was unhappy.
The pity, of course, is this was part of a “customer relationship” program. Somebody went out of their way to set up this experience that did nothing but annoy me. No doubt I’m more sensitive than most, but the flaws here are still fundamental and obvious. These are the sorts of mistakes you might expect from an amateur—but from an institution the size and sophistication of JP Morgan Chase, they’re inexcusable.
Monday, May 07, 2007
Enough About LTV: Let's Talk Mobile
One final thought on last week’s string regarding LTV vs. product-based metrics. The precise relationship between LTV and conventional measures such as profit and cash flow is this: profit and cash flows are constraints, while LTV is what you optimize.
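The constraint-versus-objective distinction can be made concrete with a toy sketch. The candidate plans and all their numbers below are invented for illustration; the point is only the structure: filter by the near-term profit floor first, then maximize LTV among what survives.

```python
# A toy sketch of "profit is a constraint, LTV is what you optimize."
# All plans and figures are hypothetical.

plans = [
    {"name": "deep discounts", "year1_profit": -50, "ltv": 900},
    {"name": "balanced offer", "year1_profit": 40, "ltv": 700},
    {"name": "milk the base", "year1_profit": 120, "ltv": 400},
]

MIN_PROFIT = 0  # near-term target that cannot be violated

# Apply the constraint first, then optimize LTV among the survivors.
feasible = [p for p in plans if p["year1_profit"] >= MIN_PROFIT]
best = max(feasible, key=lambda p: p["ltv"])
print(best["name"])  # "balanced offer": highest LTV among plans meeting the floor
```

Note that the plan with the highest LTV overall ("deep discounts") loses because it misses the profit constraint, while the most profitable plan loses because profit is not the objective.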
Now that we’ve cleared that up, I’d like to point out that today’s New York Times has not one but two articles on mobile marketing. One is on the front page of the business section (“Hollywood Loves the Tiny Screen. Advertisers Don’t.”, The New York Times, May 7, 2007, Business Day, page C1) and the other is inside (“Cellphones Tailored for Any Organization”, The New York Times, May 7, 2007, Business Day, page C7). This follows a piece last month in BusinessWeek (“The Sell-Phone Revolution”, BusinessWeek, April 23, 2007).
The BusinessWeek piece was still in the “gee-whiz, they can do ads on mobile phones” stage of thinking. The two Times pieces were a little more evolved, addressing the business challenges in mobile content and the idea of private-label cell phones for affinity groups or businesses (being offered by Sonopia).
I could note here that the private-label cell phone concept is yet another example of monetizing a customer relationship: in this case, by getting a consumer to commit to carrying your own cell phone, which then gives you a channel to beam them messages—your own and other people’s. But I think I just talked about that last week, and wouldn’t want to repeat myself (unless the topic is LTV. Have I mentioned that lately?)
So let me make another observation: the idea of private-label cell phones leads to the idea of people having more than one. The Times article mentions companies giving their phones to employees; this might easily be extended to favored customers and suppliers. I’m not sure there’s much business sense in this, although of course companies do already often provide non-branded phones to employees as regular business tools. But if some sort of revenue base evolves that makes it profitable for groups to offer phones to consumers for next to nothing, I can certainly see consumers carrying multiple phones in the same way they carry multiple credit cards.
In fact, I rather like the concept because it will break the emerging notion that cell phones are identical with their owners: each person has one phone and each phone has one person. This is almost true today but will probably be less true in the future. So it’s good for marketers to think ahead about how they’ll deal with many-to-many relationships.
Friday, May 04, 2007
I Want My LTV Shirt!
I received my custom-printed “LTV RULES!” t-shirts yesterday. Naturally, you buy these over the Internet. The customer experience was painless at www.designashirt.com and I’d highly recommend them.
What’s interesting from a Client X Client point of view is that the company offered a $.50 discount on each shirt if you add their logo. Maybe I shouldn’t be too impressed at their cleverness in recognizing that the product represented an advertising opportunity, since many of their shirts are used as marketing promotions to begin with. Still, it’s a classic example of identifying a “slot” (space on a shirt you printed), converting it into a customer experience (if your logo were not on the shirt, no one would know you produced it), and attaching a value to it (paying the buyer $.50 per shirt).
How did they come up with $.50? I don’t know and rather doubt it’s based on very precise analysis—after all, it’s tough to measure response to such a promotion. Could they sell the space to someone else, perhaps for more money? Quite likely: many marketers would welcome the opportunity to reach such highly targeted audiences, and many of the shirt buyers would gladly trade a price reduction for adding a logo or two. If the match were made correctly, there could be a mutual halo effect between the organization and the advertiser.
Anyway, I’m looking forward to enjoying my shirts, and will definitely send one to the colleague I mentioned yesterday who didn’t want to run his company by LTV.
Thursday, May 03, 2007
Facing the Threat of LTV Fundamentalism
You may suspect that some of the people I mention in these posts are created by me for dramatic purposes. First of all, I’m not clever enough to do that, and secondly, you may have noticed that they’re usually smarter than I am. Rest assured I will not willingly lose an argument to a figment of my imagination. On the other hand, I do try to avoid giving away true identities, since I don’t want to violate anyone’s privacy.
So you’ll just have to take my word for it that I really was discussing marketing metrics recently when, quite unprompted, one of my companions said the one thing he would absolutely never do was run his business on lifetime value. Since we had agreed on many related issues during the conversation, I was quite taken aback.
His argument had two points. The main issue was that businesses must meet near-term profit and cash flow targets, and no manager would be allowed to ignore them. Fair enough—this almost goes without saying, but does need to be noted every so often. Even though I do still believe LTV is the One True Metric, I’m perfectly aware that it must be balanced against other objectives like profit and return on investment in the real world.
His second point was that LTV counts heavily on future behaviors, and these may never occur. This is also true, but I feel it’s accounted for by applying high discount rates to future cash flows and, more generally, by being conservative in the underlying assumptions. So I don’t consider this one to be a fundamental objection.
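The way a high discount rate tempers reliance on uncertain future behavior can be sketched in a few lines. This is a generic present-value calculation, not anyone’s actual LTV model; the cash flows and rates are illustrative assumptions.

```python
# A minimal sketch of discounting future per-customer cash flows.
# Flows and rates below are invented for illustration.

def lifetime_value(annual_cash_flows, discount_rate):
    """Present value of projected cash flows. Year 1 is discounted once,
    year 2 twice, and so on, so the distant (least certain) years
    contribute the least."""
    return sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(annual_cash_flows, start=1)
    )

flows = [100, 100, 100, 100, 100]  # five projected years at $100/year

# A conservative (high) rate shrinks the weight of later years.
print(round(lifetime_value(flows, 0.10), 2))  # 379.08 with modest discounting
print(round(lifetime_value(flows, 0.30), 2))  # 243.56 with aggressive discounting
```

Raising the rate from 10% to 30% cuts the estimate by more than a third, which is exactly the kind of built-in conservatism the argument relies on.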
In short, I know I’m oversimplifying when I focus on LTV and I’m doing it on purpose. Companies need a lot of convincing to adopt LTV, and it’s much more effective to communicate a simple message than a nuanced view with many qualifications. Of course the simplicity will be compromised during implementation; indeed, I’d be horrified if it were not. But first you have to get people to accept the big idea, and to do that, you have to keep it simple.
Wednesday, May 02, 2007
Still More Thoughts on Measurement for Product Managers
I think yesterday’s comments on lifetime value and product reporting need a bit of clarification. It’s important to distinguish measurements of customer acquisition efforts from measurements of other customer contacts. With acquisition, lifetime value as a formal financial measure is very important and widely accepted, even though the actual calculation often does not include the full scope of future cross sales and other ancillary values. Here, the sort of attitude measures I was proposing as a more-accessible proxy for future value calculations are neither necessary nor appropriate.
Once a customer is acquired, it becomes much more plausible to consider each sale as independent. This is where product- and promotion-specific metrics are often used without consideration of their future value impact. It’s also difficult to measure the true incremental impact on future value of any one promotion or purchase (or other contact, such as product use or customer service.) Since future value is less obviously needed and more difficult to calculate, there’s little wonder it is used so rarely in these situations.
I haven’t changed my position: understanding the future value impact of each contact is still the only way to truly optimize business results. But this distinction does suggest that companies might start by improving the accuracy of their acquisition LTV measurements, for example by ensuring they include results across all product lines. This will be easier for managers to understand and accept, while laying the data and analytical foundation needed for the later, more challenging task of measuring incremental value changes from post-acquisition contacts.
Tuesday, May 01, 2007
More Thoughts On Measurements for Product Managers
I’m still thinking about how to measure product performance in a customer-based world. Where I ended up yesterday was pretty much that there’s no alternative to using lifetime value, which specifically means calculating the incremental impact of each product sale on a customer’s future value. The main objection to this is that the numbers will contain a great many estimates that will probably seem arbitrary, political or downright incomprehensible to most product managers. This violates one of the fundamental rules of management metrics, which is that managers should be judged on measures they can understand.
It also violates the rule that managers should be held accountable for things they can control. This is because the future value of the customer is affected by many factors other than that one product purchase. Managers would rightly feel they were being treated unfairly if the value assigned to their work was based largely on external elements.
One shouldn’t make too much of these issues. Although revenue is pretty easy to measure directly, any profit statement includes a fair number of allocated costs that are somewhat questionable. And even revenue figures will include estimated reserves for returns, bad debt and similar future losses. I suspect that few product managers could really explain how those calculations are made. Unless they suspect a major error (and that this error undervalues their performance), they are likely to just accept the figures provided. Lifetime value would ultimately work that way as well.
The “dependence on others” objection can also be overstated. In any large organization, many major revenue and cost drivers will be outside the product manager’s control. So they are used to that as well.
But, realistically, there is a big difference between being held responsible for profits on your own product’s sales—however those profits are measured—and profits on subsequent sales of other products. Both the fact that these are sales of other products and that they occur after the customer completes her experience with your product are problematic.
The best I can do right now is to suggest estimating the customer’s future behavior at the end of the product experience. This at least captures the customer’s state when they “left your hands”, so to speak, and before their intentions were affected by other activities. Even better, you could compare their expected behavior after the purchase with their expected behavior before the purchase, since any change is presumably due to their experience in between.
Of course, this immediately raises the question of when the product experience ends. Assuming they use the product after they buy it, its performance will affect their behavior at least as much as the purchase experience itself. I don’t have a specific answer for this; maybe you measure expected behavior in several places.
The other question this raises is where you get the estimates. In a rigorously tested environment, they could be based on firm data. When you find that environment, let me know. Here in the real world, you’ll probably be stuck measuring customer intentions with something like a net promoter score. Yes, I’m fully aware of the problems with such surveys, and still stand by my earlier criticisms. But if net promoter score is the best measure available, then that’s the one you use.
The point of measuring customer intentions after the purchase is simply to get product managers to think of affecting future behavior as part of their job. That’s what has to happen if they are going to help maximize lifetime value. Our real goal is to convert the net promoter scores into an expected future value stream, and thus report the change in expected lifetime value directly. But net promoter scores might actually be easier for them to grasp.
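The conversion from survey category to expected value stream might look something like the sketch below. The mapping from promoter category to retention rate, the annual value, and the discount rate are all invented assumptions, not a recommended calibration; the structure is what matters.

```python
# A hypothetical sketch of turning net promoter categories into an
# expected future value stream. All parameters are illustrative.

RETENTION = {"promoter": 0.90, "passive": 0.70, "detractor": 0.45}
ANNUAL_VALUE = 200   # assumed average annual margin per retained customer
DISCOUNT = 0.15      # conservative discount rate
YEARS = 5

def expected_future_value(category):
    """Discounted sum of future years, each weighted by the probability
    the customer is still active (retention compounds year over year)."""
    r = RETENTION[category]
    return sum(
        (r ** year) * ANNUAL_VALUE / (1 + DISCOUNT) ** year
        for year in range(1, YEARS + 1)
    )

# A purchase that moves a customer from passive to promoter changes
# expected lifetime value by the difference between the two streams.
uplift = expected_future_value("promoter") - expected_future_value("passive")
print(round(uplift, 2))
```

Reporting the uplift rather than the raw score is the point: it states the change in expected lifetime value directly, in dollars, even if the inputs started life as survey responses.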
I’m still not thrilled with this solution, but it’s progress of a sort. At least it lets product managers manage something that is largely under their control, yet still orients them toward long term customer value. Those are the basic objectives.