I had a fascinating chat earlier this week with a client who described his vision for using DemandBase to tailor messages to Web site visitors from target accounts, using Bizo to further tailor messages to individuals by title, using all this data to synch inbound and outbound campaigns in Eloqua, and eventually driving everything with predictive model scores from a tool like Lattice Engines. That could serve as a pretty complete summary of the state of the art for B2B marketing today, especially if you consider “content marketing” as implicitly included. Equally helpful to me personally, it reinforced my intention to write about Bizo and DemandBase, both of which have recently briefed me on their latest product extensions.
Let’s start with DemandBase. Astonishingly, four years have passed since I last wrote about them. In that time, they’ve continued to build applications that exploit their core technology for identifying Web site visitors by company based on their IP address. This started by providing visitor lists and real-time alerts to sales people who were interested in specific accounts. It later extended to returning visitor attributes in real time so companies could pre-fill forms and personalize Web pages to match visitor interests. The most recent expansion went beyond a company’s own Web site to the much larger world of online advertising.
To reach that market, the company had to build its own version of the “data management platform” (DMP) systems that manage lists of known entities, recognize them when they appear on an external Web site, and deliver an appropriate advertisement. The big difference is that DemandBase entities are companies identified by IP address, while traditional DMP entities are cookies attached to browsers (and assumed to relate to individual human beings). DemandBase had to build its own engines for real-time bidding (RTB) and ad serving (Demand Side Platform, or DSP) to support its approach. These can integrate with DemandBase’s own network of Web publishers that will accept its ads and with other ad exchanges that connect to their own, larger publisher networks.
Data in the DemandBase DMP comes from both DemandBase and clients. The DemandBase data are the company-level attributes that DemandBase has long assembled: company name, industry, revenue, employees, technologies used, etc. Some of this, such as DUNS Number, is purchased from external sources and requires extra payment. The client data, which of course is available only to the client who provided it, could be anything but is usually attributes such as account type, buying stage, and sales territory. The system doesn’t store any information about individuals. Marketing automation, Web analytics, and Web content management systems can all access this data via API calls for analytics and as inputs to their own selection and treatment rules. Outside the DMP itself, DemandBase can store content and decision rules to guide bidding and select which ad is displayed to each account.
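For the mechanically inclined, the lookup at the heart of all this can be sketched in a few lines. This is purely illustrative, not DemandBase’s actual API: the networks, attribute names, and `lookup_company` function are my own invention, and a real system would hold millions of ranges in a purpose-built store rather than a Python dictionary.

```python
import ipaddress

# Hypothetical sketch: IP ranges map to company records that combine
# vendor-supplied attributes (industry, employees) with client-supplied
# attributes (account type, buying stage). No individual-level data.
COMPANY_RECORDS = {
    ipaddress.ip_network("203.0.113.0/24"): {
        "company": "Acme Corp",          # vendor-supplied attributes
        "industry": "Manufacturing",
        "employees": 5200,
        "account_type": "target",        # client-supplied attributes
        "buying_stage": "evaluation",
    },
}

def lookup_company(ip_string):
    """Return the company record whose network contains this IP, or None."""
    ip = ipaddress.ip_address(ip_string)
    for network, record in COMPANY_RECORDS.items():
        if ip in network:
            return record
    return None  # unrecognized visitor: fall back to a generic treatment
```

A Web content management or ad-serving system would call something like this in real time and branch its treatment rules on the returned attributes.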
So much for the mechanics. The business value is that DemandBase is allowing marketers to tailor messages to target accounts even before they engage directly with the company, thereby (hopefully) luring new prospects into the top of the funnel and engaging them if they don’t respond. This is a major extension beyond traditional marketing automation, which works mostly through email to known prospects. It also goes beyond Web site personalization, which requires people to at least visit your Web site and in most cases actively provide information about themselves. As you might imagine, DemandBase offers many case studies to show how much this improves performance.
Bizo comes at Web advertising from the traditional route of building a pool of cookies and assembling them into audiences based on the attributes of the individuals they represent. The pool was originally used to target display advertising and retarget site visitors by sending them ads on other sites. The company says it has pulled data from 4,200 publishers and other sources to identify about 120 million individuals worldwide, including 85 million within the U.S. The number of actual cookies is higher still.* Profiles contain titles and business demographics such as industry, but no personally identifiable information such as names or addresses.
Like DemandBase, Bizo has found many applications for its core data asset. These now extend beyond display ads to social media advertising through Facebook and LinkedIn, Web site personalization through Adobe, Web analytics through Google Analytics and Adobe, and integration with Salesforce.com CRM, BlueKai DMP, and Eloqua marketing automation. Other partners will be added over time.
I’ll assume the Eloqua integration is most interesting to readers of this blog. Basically, it lets Bizo read audience segments created by Eloqua. Bizo then matches segment members to Bizo identities and delivers Web site, advertising or social messages tailored to each segment. Because Eloqua captures such detailed information about prospect behaviors, this allows highly tailored advertising that is tightly synchronized with prospects’ progress through buying stages and marketing automation campaigns. Since it’s driven by cookies, it can send messages to anonymous as well as identified prospects – a huge expansion in marketing automation’s reach. Bizo can even allocate advertising spend across the different media to achieve reach and frequency targets as efficiently as possible. To encourage this approach, its pricing is based on the number of unique individuals that marketers manage in its system, rather than impressions or ad budget.
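To make the flow concrete, here is a minimal sketch of segment-to-audience matching. Everything here is hypothetical (the segment names, hashed-identifier matching, and creative assignments are my own illustration, not Bizo’s or Eloqua’s actual interfaces), but it shows the basic idea: segment members from the marketing automation system are matched to the ad platform’s anonymous profiles, and each match gets the creative assigned to its segment.

```python
import hashlib

def hash_id(email):
    """Hash an email so no PII reaches the ad platform (illustrative)."""
    return hashlib.sha256(email.lower().encode()).hexdigest()

# Segments as exported from the marketing automation system
segments = {
    "late-stage-evaluators": ["pat@example.com"],
    "new-subscribers": ["lee@example.com"],
}

# Ad platform's profile pool, keyed by hashed identifier
profile_pool = {hash_id("pat@example.com"): "cookie-123"}

creative_by_segment = {
    "late-stage-evaluators": "demo-offer-ad",
    "new-subscribers": "intro-content-ad",
}

def build_targeting_plan():
    """Return {cookie_id: creative} for segment members found in the pool."""
    plan = {}
    for segment, emails in segments.items():
        for email in emails:
            cookie = profile_pool.get(hash_id(email))
            if cookie:  # match found: target with the segment's creative
                plan[cookie] = creative_by_segment[segment]
    return plan
```

Note that "lee@example.com" has no matching profile and simply drops out of the plan, which mirrors the real-world gap between a marketing database and an ad platform’s cookie pool.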
The business value offered by Bizo is similar to DemandBase: reaching prospects that haven’t yet engaged with a company directly or retargeting them when they don’t respond. The different technical approaches have their own strengths and weaknesses: IP-based identification is relatively stable but works only at the company level and doesn’t identify small businesses that lack their own stable IP address; cookies identify individuals but are often deleted, miss some people, and result in multiple, fragmented identities for others. Like the client I mentioned at the start of this article, you can probably think of them as complementary rather than competing components of a complete B2B marketing solution.
________________________________________________________________________
* Given that the total employed U.S. workforce is about 145 million,
I suspect that 85 million contains quite a few duplicates, meaning any
one profile captures just a fragment of an individual’s activity. But
that’s the nature of this sort of thing; the business question is how
well the data works even in its imperfect state.
Friday, February 21, 2014
Tuesday, December 10, 2013
Woopra Grows from Web Analytics to Multi-Source Customer Data, Insights and Actions
I stumbled over Woopra in their tiny booth during last month’s Dreamforce conference, where I was intrigued enough to let them scan my badge and promptly forgot why. Fortunately, a diligent sales rep followed up by email and I remembered it was worth a closer look. If you’ve been reading my recent posts, you won’t be surprised that I’ve decided they are yet another Customer Data Platform.
In fact, the people most surprised by this news will probably be the folks at Woopra itself, which positions itself as “an insight company” and has deep roots in traditional Web analytics. On the other hand, Woopra does distinguish itself from conventional Web analytics vendors by stressing the fact that it tracks individuals, not Web pages. In fact, one of its tag lines is “easily track, analyze, and take action on live customer data”, which is a pretty decent statement of the CDP value proposition.
Tag lines notwithstanding, Woopra wouldn’t qualify as a CDP if it only tracked Web behavior. But Woopra offers the core CDP function of building a multi-source database. It does this by directly capturing behaviors from Web site visits and mobile app interactions (via Javascript tags and API calls from iOS or Android) and by loading operational data such as purchases and customer service interactions. As its Dreamforce presence suggests, Woopra can integrate with Salesforce.com to both import CRM data and to display the information it has consolidated from multiple sources. The system can combine data with different identifiers, such as a cookie ID, mobile device ID, and Web session ID, although – like many CDPs – it relies on the client to figure out which identifiers belong to the same person.
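The identity-stitching step is worth a quick illustration. This is my own sketch, not Woopra’s data model: the point is that the client supplies the mapping from identifiers to people, and the system then consolidates events captured under any of those identifiers.

```python
# Client-supplied identity map (hypothetical structure):
# each identifier points at the canonical person it belongs to.
identity_map = {
    "cookie-abc": "person-1",
    "device-xyz": "person-1",
    "session-42": "person-1",
}

events = [
    {"id": "cookie-abc", "action": "page_view"},
    {"id": "device-xyz", "action": "app_open"},
    {"id": "session-42", "action": "purchase"},
]

def consolidate(events, identity_map):
    """Group events under the canonical person each identifier maps to."""
    profiles = {}
    for event in events:
        # Unmapped identifiers stay as their own provisional "person"
        person = identity_map.get(event["id"], event["id"])
        profiles.setdefault(person, []).append(event["action"])
    return profiles
```

The weak spot is visible in the code: if the client never maps "device-xyz" to "person-1", that person’s activity stays fragmented across two profiles, which is exactly the limitation noted above.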
Woopra stores its data as a combination of customer attributes and time-stamped individual events. The event data lets it report on individual movement through funnel stages, performance of start-date cohorts, and paths through a Web site or app, in addition to the usual profile reports. It stores the information using proprietary technology that allows continuous real-time updates and ad hoc segmentations, so users can run any report against a subset of the customer universe. Users can also design their own custom reports.
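A start-date cohort report of the kind described falls out of time-stamped events fairly naturally. This sketch is mine, with an invented event shape, not Woopra’s internal format:

```python
from datetime import date

# Illustrative time-stamped events: a signup defines a person's cohort,
# and later purchases are credited back to that cohort.
events = [
    {"person": "p1", "action": "signup",   "date": date(2013, 11, 1)},
    {"person": "p1", "action": "purchase", "date": date(2013, 11, 20)},
    {"person": "p2", "action": "signup",   "date": date(2013, 12, 1)},
]

def cohort_conversion(events):
    """Return {(year, month): (signups, purchasers)} per start-date cohort."""
    cohorts = {}
    signup_month = {}
    for e in events:
        if e["action"] == "signup":
            month = (e["date"].year, e["date"].month)
            signup_month[e["person"]] = month
            s, p = cohorts.get(month, (0, 0))
            cohorts[month] = (s + 1, p)
    for e in events:
        if e["action"] == "purchase" and e["person"] in signup_month:
            month = signup_month[e["person"]]
            s, p = cohorts[month]
            cohorts[month] = (s, p + 1)
    return cohorts
```

The same event stream supports funnel and path reports with similar grouping logic, which is why storing raw time-stamped events rather than pre-aggregated counts matters.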
Woopra provides an API that lets external systems access its database, although it places some limits on volume to avoid performance issues. This access meets the minimum requirement for a CDP. The system can also continuously scan for user-specified events or conditions and execute user-specified actions when these occur. The actions can run Javascript on the client Web site, add tags to a customer record, send an email or push notification, or call an external system via a Webhook. This allows marketers to manage some basic customer treatments within Woopra itself. But the system doesn’t have any predictive modeling or recommendation engines, so more advanced approaches would require external assistance.
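The event-condition-action pattern just described is simple enough to sketch. Again, the trigger structure and action names here are my own invention, assumed for illustration rather than taken from Woopra’s configuration:

```python
# Hypothetical trigger definitions: each pairs a condition on incoming
# events with an action to execute when the condition matches.
triggers = [
    {
        "condition": lambda e: e["action"] == "purchase" and e["value"] > 100,
        "action": ("add_tag", "high-value"),
    },
]

def process_event(event, customer_record):
    """Apply each matching trigger's action; return the actions fired."""
    fired = []
    for trigger in triggers:
        if trigger["condition"](event):
            kind, arg = trigger["action"]
            if kind == "add_tag":
                customer_record.setdefault("tags", []).append(arg)
            # a real system would also support email, push, and a
            # webhook POST to an external URL at this point
            fired.append(trigger["action"])
    return fired
```

This covers the "basic customer treatments" tier; anything requiring a predictive score to drive the condition would, as noted, need an external engine feeding the record first.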
Woopra was founded six years ago and relaunched in 2012. It has more than 3,000 paying customers in a wide range of industries, including many small businesses and several large ones. The system is offered in three editions with different sets of features, including a free version for simple visitor tracking. Pricing is based on activity volume and starts at $80 per month for the mid-level edition.
Thursday, March 24, 2011
OMMA Metrics Conference: Online Ads Must Prove Real Value To Succeed
I took a break from my usual obsessions yesterday to attend the New York edition of MediaPost’s OMMA Metrics and Measurement conference. It was a good chance to dive into this particular sector of the marketing analytics universe. (Another version of the program will be presented in San Francisco in July; the company also live streams a free Webcast. You can also download selected presentations from yesterday.)
If there was an overriding theme to the event, it was frustration that online advertising isn’t attracting as much money as it should. There was more than a little “TV-envy”: the feeling that TV gets more advertising because buying is based on simple, widely accepted audience measures. Some speakers argued for duplicating this situation by removing some middlemen and creating standard online audience measures.
Others pointed to a deeper issue: that online marketers can’t measure the value of their efforts in terms of revenue or brand metrics like awareness and preference. In this view, TV buyers accept simple measures like Gross Rating Points because these measures have proven over time to correlate with real business results. Media mix modeling has more recently confirmed this. But except for direct response, online media can’t show the same relationship. This forces online marketers to report endless (but never complete) data about who saw what and how they acted, in the hopes that piling on enough details will somehow make advertisers happy. It never does.
This is the online version of the old joke about the drunk who loses his keys in the alley but looks for them under the streetlamp “because the light is better”. Moral of story: no volume of irrelevant data can substitute for the information you really need.
In the case of online advertising, the dark alley is the connections between ad placements and final business results. Several speakers touched on parts of this. IBM’s Yuchun Lee gave an opening keynote that highlighted the pervasive influence of online information over all customer activities, not just online purchases. Adometry’s Steve O’Brien explicitly stated that attribution must measure the incremental impact of each marketing effort on final results (although I think he limited this to online results). ForeSee Results’ Larry Freed stressed the need to trace all online and offline behaviors to understand their true role in final outcomes. Others cited studies where careful measurement found that indirect results showed online to be much more powerful than direct attribution alone.
Yesterday’s speakers also raised the problem of scalability: that is, being able to duplicate and expand on success. This is one area where TV envy makes sense, because it’s easy to add more Gross Rating Points and be reasonably sure of getting the expected results. Online ad buying is more like buying print ads or mailing lists: you may have some sense of the audience demographics, but the only way to really know how it will perform is to run a test. But this isn't a measurement problem: simple, standard measures that hide true audience differences are only going to be unreliable predictors of actual results. What’s really needed are better testing methods to predict as quickly and cheaply as possible how each new audience will perform. The trick is you’re not just looking at immediate response, but all of those indirect effects that are so tricky to capture in the first place. Now you have to predict them in advance as well as measure them after the fact.
Nobody said it would be easy.
Wednesday, August 18, 2010
LeadForce1 Adds Mind Reading to Marketing Automation
Summary: LeadForce1 infers Web visitors' intent and sales stage from the contents they read. It combines this with standard B2B marketing automation features to provide better-qualified leads to sales people.
The B2B marketing automation industry has reached the stage where product features are similar and companies compete primarily on business and marketing savvy. This intrigues me in its own way although it's not as much fun as looking at cool new technologies. Of course, if you’re a vendor offering a cool new technology, the stakes are higher.
Such is my take on LeadForce1. The company’s product touches the standard marketing automation bases: outbound email, landing pages and forms, lead nurturing, scoring and integration with Salesforce.com. It adds some less typical features for telephone lead qualification, which makes sense for reasons we’ll get to later. But its most intriguing claim is that it supplements the usual Web behavior tracking with reports on visitors’ intent and sales stages.
LeadForce1 does this by capturing the text that visitors hover over, click on, highlight or copy, and comparing it with keywords that indicate intent and sales stage. The system starts with a standard list of keywords which clients can modify to match their business. “Intent” is usually related to customer interests, such as a particular problem or product line. “Sales stage” uses a standard progression of research, consideration, trial and purchase, which clients can change if they wish. Because B2B purchases are often made by a team of specialists, the system assigns interests separately to each individual but assigns a single sales stage to everyone from the same company.
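The keyword-matching step can be sketched simply. To be clear, this is my own toy version with invented keyword lists, not LeadForce1’s actual classifier; it just shows the shape of scoring interaction text against keyword sets for intent and sales stage:

```python
# Hypothetical keyword lists a client might configure (illustrative only)
INTENT_KEYWORDS = {
    "pricing": ["price", "cost", "quote"],
    "integration": ["api", "connector", "sync"],
}
STAGE_KEYWORDS = {
    "research": ["overview", "what is"],
    "trial": ["free trial", "demo"],
    "purchase": ["buy", "quote", "contract"],
}

def infer(text, keyword_map):
    """Return the category whose keywords appear most often, or None."""
    text = text.lower()
    scores = {
        category: sum(text.count(kw) for kw in keywords)
        for category, keywords in keyword_map.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

In the real product, intent would be assigned per individual while the stage inference would be rolled up to the company level, per the buying-team logic described above.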
Intent and sales stage reporting are not merely random cool features. They help rank leads and send alerts as part of a larger focus on delivering qualified names to sales people. Related capabilities include reverse IP lookup of the company of anonymous visitors, connections to Jigsaw to provide contact names of those companies, and the aforementioned telephone lead qualification. In fact, LeadForce1 is targeted in part at Web publishers who collect leads and resell them to other businesses. Its ability to enhance these leads with intent and sales stage makes them more targeted and, thus, more valuable. This is why LeadForce1 sometimes refers to itself as being in the “lead exchange” business, although it currently seems to prefer the label “marketing automation 2.0”.
Sales people would certainly benefit from knowing the intent and sales stage of their leads. Of course, you do have to wonder about the accuracy of the information. LeadForce1 currently does some response tracking but mostly relies on clients to decide for themselves which keywords are effective. It does plan to add more rigorous analysis using the data it already collects. The same data will also be used to measure the impact of marketing contacts on changes in intent and sales stage and to forecast movement of leads from one stage to the next. The results will be interesting and, assuming the system proves reasonably accurate, should be quite valuable.
LeadForce1 was launched about two years ago and currently has 224 customers. Pricing is based on the modules purchased, number of users, and number of leads. Monthly cost can be as low as $500, although a typical client spends about $3,000 per month.
With so many customers, and growing quickly, LeadForce1 may survive the marketing automation industry consolidation as an independent firm. If not, its technology is useful enough that there's a good chance it will find its way into other systems.
Thursday, May 20, 2010
Omniture Study Suggests Marketers Doubt Value of Analytics Investment
Not to beat a dead horse, but Wednesday’s eMarketer reported on yet another survey that touched on the question of why marketers don’t measure. Although the Omniture 2010 Online Analytics Survey is obviously limited to Web analytics, the answers probably apply to other types of measurement as well.
I wasn’t able to get a copy of the full survey results, despite two requests to Omniture and even filling it out myself, which was supposed to yield a copy that compared my answers with my peers'. Perhaps I’m peerless. But the snippets published in eMarketer are enough for now.

Specifically, eMarketer reported that the leading challenge in Web analytics was “talent”, cited by 58.4% of respondents. Assuming that “talent” is really a polite way of saying “skilled staff”, this suggests that lack of education, not lack of time, is the critical roadblock to better measurement. I’ve been betting the reason is time, but would reconsider in the face of new evidence.
But wait.
When I took the survey, the question about “talent” actually defined it as "lack of skill/time". So it’s perfectly possible that marketers picking "talent" really saw lack of time as the most important challenge.
My position is arguably strengthened by the relatively low ranking of "support/training" (37.6%) and "budget" (31.7%) in the answers. Those can certainly improve skills but they can’t expand the manager's available time. Even hiring more staff wouldn't do that.
On the other hand, the second- and third-ranked challenges were "actionability" (47.3%) and "finding insights" (41.5%) which both suggest doubts that Web analytics can deliver real value. This would show a need for education – but, as I wrote in my comment on the original Why Marketers Don't Measure post, it's a need for education in the fundamental utility of measurement, not education in specific techniques.
Bottom line: the Omniture survey confirms that marketers won’t invest in analytics until they’re convinced it’s the best use of their limited resources. Efforts to expand adoption of analytics should start with that.
Tuesday, September 29, 2009
More Surveys Agree: Web and Non-Web Data Must Be Integrated
It’s not that I’m obsessive, but just to gnaw a bit more on last week’s bone about the coming unification of Web and other marketing data...
- a recent survey sponsored by enterprise marketing automation vendor Unica found that “integration with other marketing solutions” was the most commonly cited web analytics challenge (46%).
This was followed by “verifying accuracy of data (inflation/deflation)” (41%) and “not comprehensive/missing types of data” (32%). It’s interesting that the question was about Web analytics in particular – even analyzing Web results by itself requires non-Web data.
- another survey by another marketing automation vendor, Alterian, found the most-commonly cited top obstacle in online marketing (25% of respondents) was “integration of online with database marketing and offline channels”.
Now, this isn’t exactly the same answer as the Unica survey, since the question isn’t limited to data integration. But Alterian also reported that “lack of ability to assess or manage internal infrastructure and culture challenges (25%) and the integration of all the technology to power the cycle (20%) were identified as the biggest factors in implementing the customer engagement cycle.” So clearly data and system integration are indeed primary concerns for multi-channel marketing.
For still more on this topic, see today’s post on the MPM Toolkit blog, describing a very detailed report in eMarketer about Online Brand Measurement. This provides still more evidence for the need for integration of Web and non-Web data, in addition to other issues.
Case closed.
Wednesday, September 23, 2009
Web Analytics Is Dead. So Is Customer Centricity. I Need a Drink.
Summary: Web analytics is merging into the broad world of marketing measurement across all media, which itself is shifting focus from tracking individuals to understanding group behavior. Although Web analytics and marketing automation vendors are currently wrestling over who will house customer data, both are likely to lose custody to enterprises who want to control their data for themselves.
Even as analysts are still sorting through the implications of last week’s acquisition of Omniture by Adobe, the industry saw two additional important announcements this week: Omniture combining its data with comScore to help measure Web advertising audiences, and Nielsen working with Facebook to poll consumers on advertising impact.
Both announcements share several interesting features: they don't rely on traditional Web analytics (tracking page views); they involve vendors who report data from consumer panels; and they relate to measuring advertising impact. Maybe they coincided simply because the big Advertising Week conference is now under way. But I think they, along with the Omniture/Adobe deal itself, hint at something more profound: the end of Web analytics as we know it.
Ok, that may not be as earth-shattering as the end of several other things you might imagine. But many marketers are just getting their arms around traditional Web analytics. So it’s worth warning them that things are about to change again.
Smarter Content on the Way
Let’s start with Omniture/Adobe. After thinking about it for a week (and hearing what the participants said), I think the main purpose of the deal was to let Adobe build content that was more intelligent in two ways:
- First, the content will be inherently “instrumented” to report how, when, where and by whom it is being consumed, using techniques that go beyond traditional Web analytics methods (server logs, Javascript tags and cookies). This is needed because content increasingly exists outside of plain vanilla Web pages that can be tracked with conventional techniques. Problems include Flash, video, audio and other non-HTML Web content; venues like mobile, digital video recorders and interactive TV; widgets that migrate through social networks; and plain old cookie deletion. Web analytics vendors are striving to extend their technologies to capture these, but at some point you have to look for a different basic model. Content that can itself “phone home” rather than relying on the carrier medium could be the long-term answer.
- Second, content will be self-optimizing. This involves built-in tests and, perhaps, a sort of swarm intelligence where subsequent views are actually modified based on previous results reported by the content itself. Imagine a widget with a built-in A/B headline test: every time someone accesses it, the widget offers one headline or the other, and reports the result to a central server. (This could be done without transmitting personally identifiable information, so privacy issues are minimal.) Once a pattern emerges, the server could instruct the distributed widgets to only display the winning headline, or, better still, to start a new test. Even without a central server, each copy of the widget could still run its own test and, assuming it’s accessed enough times, adjust all by itself.
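The self-optimizing widget described above can be sketched in a few lines. This is a minimal illustration, not anything Adobe or Omniture has published: the class name, the impression threshold, and the two-headline setup are all invented for the example. Each copy of the widget alternates between variants until it has enough data, then settles on the better click rate, exactly the "adjust all by itself" case with no central server.

```python
import random

class SelfOptimizingWidget:
    """Minimal sketch of a widget that runs its own A/B headline test.

    Each copy tracks impressions and clicks per headline; once every
    variant has enough impressions, it displays the best performer.
    Only aggregate counts are kept, so no personally identifiable
    information is involved.
    """

    def __init__(self, headlines, min_impressions=500):
        self.headlines = list(headlines)
        self.min_impressions = min_impressions
        self.impressions = {h: 0 for h in headlines}
        self.clicks = {h: 0 for h in headlines}

    def choose_headline(self):
        # Keep rotating among variants that still need more data.
        untested = [h for h in self.headlines
                    if self.impressions[h] < self.min_impressions]
        if untested:
            choice = random.choice(untested)
        else:
            # Enough data: display the variant with the best click rate.
            choice = max(self.headlines,
                         key=lambda h: self.clicks[h] / self.impressions[h])
        self.impressions[choice] += 1
        return choice

    def record_click(self, headline):
        self.clicks[headline] += 1
```

In the central-server variant, `record_click` would instead report back to the server, which could push a "winner declared" instruction out to every distributed copy.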
Who Owns The Data?
So far so good, and I think that’s plenty of reason to justify spending $1.8 billion for Omniture. But I also think Adobe was interested in the fact that Omniture controls so much of its clients’ data.
There is a battle brewing over that issue: like most Web analytics vendors, Omniture stores Web traffic data on its own servers and sells clients the ability to access that data. That didn’t raise any particular business issues when Web data was viewed largely in isolation. But as the Web takes an increasingly central role in customer contacts, marketers need to merge that Web data with their other customer information. Indeed, if you want to use that data to help guide customer interactions across channels, the data must be not just centralized but also updated in real-time. That means marketers either give all their non-Web data to their Web analytics vendor, or directly capture Web analytics data on their own servers.
The Web analytics vendors see this future and recognize that they’ll be more important to their clients, and thus able to charge higher fees, if they hold everything. Marketing automation vendors and in-house IT groups see the same future and recognize the threat to their own positions. Thus, the business-oriented marketing automation vendors (i.e., demand generation vendors: Eloqua, Silverpop, Marketo, etc.) already capture Web behavior in their own systems and integrate it directly with customer management. The big consumer-oriented marketing automation vendors (SAS, Teradata, Unica) also offer Web analytics, although perhaps less tightly integrated.
The problem here is that the Web analytics vendors and demand generation vendors are largely SaaS systems, so both hold the integrated customer data outside of the client’s own data center. It’s not clear that clients – especially big enterprise clients – will continue to accept this. The consumer-oriented marketing automation vendors already largely support on-premise configurations, so they may be the real winners in this battle. Yet bear in mind that looming behind the marketing automation vendors are the enterprise CRM vendors, who also offer largely on-premise installations. They may eventually gobble up the marketing automation business and the Web analytics data along with it. In that case, Web analytics vendors who hope to become rich stewards of centralized customer databases will be very disappointed.
Customer-Centricity Is Obsolete
I also think there may be one still deeper trend at play here, although this is more speculative. Let’s call it a shift from customer-centric to community-centric marketing. Since customer centricity has been the ultimate goal of marketers, or at least marketing gurus, for several decades, I expect some skepticism.
But think of it this way: marketing has always been about deploying and propagating messages to consumers. In recent years, we’ve striven, and become technically better able, to target those messages directly at individuals. Yet messages were never really limited to one person. Even if they were delivered privately, they could be shared directly through conversation, physical and electronic pass-along, and indirectly as consumers discussed their experiences with the company in general. Thus, there has always been a community component to marketing campaigns. In cases such as word of mouth programs, this was even the primary objective.
Today, of course, that sort of sharing has become increasingly important for every marketing project. Thus marketers have more need to track the secondary impact of their messages on the larger community. Happily, they also have more technology to do the tracking.
Here’s what’s interesting, though. Marketers will never be able to trace the exact path of each message from one person to another. And even if the data were available, there were no privacy constraints and they could handle the volume, marketers would still face the insurmountable challenge of untangling how multiple messages influence final behavior. That is, they could never meaningfully say that one particular message was the single reason a customer did something. All they can ever do is to look at the many different messages a customer probably received and compare these with actual behavior. If the customer did what you wanted, the messages somehow worked.
If this sounds familiar, it should: it’s the classic problem faced by brand marketers in measuring the value of their investments. Their solution has always been to measure intermediate variables such as consumer attitudes, and to measure these through samples rather than by polling everyone in the market.
This brings us full circle to the panel-based attitudinal research at the core of the Omniture/comScore and Nielsen/Facebook deals. Once you recognize that what’s most important is the broad community impact of your marketing efforts, you fall back on those types of measures rather than attempting the impossible, and impossibly expensive, task of tracing the exact path followed by each individual. In other words, what we’re seeing here is not some Mad Men-style reversion to obsolete brand marketing behaviors, but a recognition that modern marketing is community-driven, so its measurements must be as well.
Now, if I can just find a reason to bring back the three-martini lunch….
Tuesday, September 15, 2009
Adobe Buys Omniture: Good for Marketers, Bad for Marketing Automation Vendors
Summary: Adobe's agreement to purchase Omniture illustrates the on-going convergence of Web content management and Web analytics systems. This puts pressure on marketing automation vendors, who also want to provide Web analytics and content management, and who are already being pressed by customer relationship management (CRM) vendors. That's a pretty unpleasant position.
Adobe's announcement that it will purchase Omniture for $1.8 billion makes perfect sense. As I discussed in July, marketers have a lot to gain from tight integration between a Web content management system (CMS) like Adobe's Dreamweaver and Web analytics and optimization like Omniture.
Let's take it as a given, then, that major Web content management systems will soon include integrated analytics. This sets up a new clash between marketing automation vendors and Web CMS vendors. One of Omniture's major selling points before the merger was its ability to combine information across all online marketing channels, and I think they were working towards adding offline channels as well. Although short-term priorities will probably shift now towards Adobe integration, I doubt their long-term ambitions in that direction will evaporate.
And even if the CMS vendors do restrict their focus to online, they will still be competing with Web CMS and analytics solutions from marketing automation vendors who realize that online is too big a sector for them to ignore. Even though both sets of vendors will need to provide some degree of openness so their clients can move data from one platform to another, both will really want to sell their clients the entire execution and analysis stack, and will tightly integrate them to encourage this.
I think I've made this point before, but I'll repeat it again: the marketing automation vendors are really being squeezed between the Web vendors on one side, and the CRM vendors on the other. This is a very unpleasant position, since both CMS and CRM vendors are much larger than the marketing automation specialists. It's hard to see how they can survive as anything but niche products in the not-too-distant future.
This position probably puts me at odds with industry analysts who see great opportunities for growth in the marketing automation space. (I'd point to specific examples but can't find any just this minute.) The general argument seems to be that low adoption rates mean there's plenty of unmet need that will eventually lead to sales. I agree that adoption is low -- but there's no guarantee that the marketing automation specialists will be the ones who fill the gap. Improved CRM or CMS offerings might actually meet marketers' needs. And since nearly everyone has or needs a CRM and CMS system, it will actually be easier for companies to use the expanded features in their existing systems than to buy a separate marketing automation product.
If anybody has a good counter argument, I'd be happy to hear it.
Two further thoughts:
- When I asked one of the marketing automation vendors recently whether he considered CMS vendors as competitors, he said he didn't because CMS vendors still sell primarily to IT, while marketing automation is purchased by marketing. Assuming this is true, then Omniture also helps Adobe by giving access to marketing departments.
- The acquisition may make marketing automation vendors more attractive acquisition candidates for CMS vendors wishing to beef up their marketing capabilities. Autonomy (Interwoven), Open Text, and EMC (Documentum) could all swallow a Unica, Aprimo or Alterian without stopping to chew.
Tuesday, July 14, 2009
SiteCore Adds Analytics and Marketing To Web Content Management
Summary: SiteCore has added extensive analytical and marketing features to its Web content management system. The integrated analytics should save considerable effort for marketers. Channel-specific marketing automation is less appealing but should help to keep marketing automation vendors on their toes.
I commented last month that more Web content management system (CMS) vendors are adding marketing automation features. One of my examples was SiteCore, so I can’t point to them again as further proof of that assertion. But I did have a good talk last week with SiteCore VP Marketing Darren Guarnaccia, who clarified why this is happening and made a strong case for the integrated approach.
For those of you who (like me) are unfamiliar with SiteCore: it is an eight-year-old provider of Microsoft .NET-based Web content management systems, with over 1,600 mid-to-large sized customers running more than 20,000 dynamic Web sites including Sara Lee, Toshiba, Omni Hotels and Dollar Rent-a-Car/Thrifty. In other words, it is a substantial player in a crowded market.
According to Guarnaccia, the company has seen control over the CMS selection process steadily migrate from IT departments to marketers over the past four years. The trend is most pronounced at mid-sized firms, where IT is generally less powerful than at very large companies. During this time, it’s become clear that marketers need features that go beyond editing Web pages, to helping them do a better job of understanding and reacting to customers. SiteCore describes this as closing an “actionability chasm” between analytics and execution.
The chasm is created by the traditional approach of using analytical systems (often sold as externally hosted services) that are separate from the underlying content management system. Capturing detailed information with such systems involves much more than adding one code snippet to a shared page template. At a minimum, each page must be given its own ID and, more realistically, pages must be given multiple tags to facilitate analysis. Companies running several separate analytical systems may need several sets of tags.
The practical result of such an arrangement is that marketers and their Web teams quickly fall behind in their tagging, and end up with incomplete and unreliable analytics. Building analytics into the CMS allows users to avoid some tags altogether and makes it easier to reuse the rest. Integrated analytics also allow the system to track visitors with first-party cookies, which are less likely to be erased than the third-party cookies used by some stand-alone analytical products.
Integration also makes it easier to coordinate activities such as personalization, behavior-based targeting, and tests. The logic for these potentially overlapping functions can all be managed as part of one Web page definition, rather than separately.
For example, SiteCore supports lead scoring by assigning content scores (for technology, marketing, sales, pricing, tech support, etc.) to each Web page or, potentially, to components within a page. The lead score for each visitor’s interest in each category is the sum of the category scores for all the pages that person has visited. The same information can be used to identify the visitor’s business role or assign a persona.
The advantage of page-based scoring is that the scores adjust automatically to new Web contents. Otherwise, the company must rely on one team of workers to add new content and a separate team to incorporate the new content into the scoring rules.
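The page-based scoring scheme above is simple enough to sketch. This is an illustration of the general technique, not SiteCore's actual implementation; the page paths, categories, and point values are invented. The key property described in the text falls out naturally: a new page added with its own category scores contributes to visitor scores automatically, with no separate scoring-rule update.

```python
# Invented example data: each page carries scores for one or more
# interest categories, assigned when the content is created.
PAGE_SCORES = {
    "/pricing":      {"pricing": 10, "sales": 5},
    "/docs/api":     {"technology": 8},
    "/case-studies": {"marketing": 6, "sales": 3},
}

def score_visitor(pages_visited):
    """Sum each page's category scores across the visitor's history.

    A visitor's lead score per category is simply the total of the
    category scores of every page that person has viewed.
    """
    totals = {}
    for page in pages_visited:
        for category, points in PAGE_SCORES.get(page, {}).items():
            totals[category] = totals.get(category, 0) + points
    return totals
```

For example, a visitor who views the pricing page twice and the API docs once would score 20 on pricing, 10 on sales, and 8 on technology; the same totals could feed persona assignment or role identification.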
Guarnaccia offered a Web site marketing maturity model that started with traffic statistics and extended to user experience statistics, content profiling, segmentation, conversion tracking, campaign management, sales enablement (using the IP address to identify visitor location and company), testing and optimization, and real time personalization. He said these are all present at no extra charge within the latest version of SiteCore, which was released at the end of June as the SiteCore Online Marketing Suite. An online demonstration confirmed they are indeed available, and at an impressively high level of sophistication.
SiteCore organizes these features around the individual Web pages. Attributes for each page include interest scores already mentioned, plus goals and other events that are logged to the visitor's history profile when the page is viewed. A page view can also trigger actions including test execution, personalization, data updates, parameter setting, sales alerts, and calls to external scripts. The system also captures the usual Web analytics data such as traffic volume, referring and exit pages, and on-site search terms. It can also use the visitor history to play back the sequence of pages viewed during a Web session.
This page-centric view of the world makes sense for a CMS vendor, but it's a pretty big switch from the campaign-centric view of most marketers and most marketing automation systems. In fact, the biggest objection to CMS-based marketing automation may be that it assumes everything is centered on the Web site.
Guarnaccia didn’t see it that way. He suggested that marketers will use separate systems for each channel. I think that SiteCore’s main goal is to replace stand-alone Web analytics and personalization systems, not to provide cross-channel marketing automation. Still, the company does plan to move beyond Web marketing by adding outbound email campaigns in a few weeks. It will also support emails triggered by Web page visits.
My own take is that building analytics into the CMS makes sense, but I doubt marketers want new silos in the form of channel-specific marketing systems. If so, SiteCore’s marketing features will be most appealing to companies that interact with customers primarily through the Web. For those firms, the Web site could reasonably be the core customer management system. Systems for other channels then would connect with the Web database in the same way that auxiliary channels are (sometimes) now integrated with a central Customer Relationship Management (CRM) system.
The CMS-based model relates to other industry trends: integration between marketing automation and sales systems, and, more broadly, absorption of marketing automation into operational systems. For companies where the Web site is the primarily operational system, these are exactly the same thing. For companies where the Web and CRM are both important independent systems, marketing automation is an ally they may both wish to annex.
For now, though, SiteCore is working to cooperate with CRM rather than replace it. The system can scan IP address registries to identify a visitor’s geographic location and company, and then use the results to route leads, alert sales people, and aggregate data at the company level. SiteCore has built data synchronization for Salesforce.com and Microsoft Dynamics and will add other systems as clients request them. If further integration is needed, other systems can access the SiteCore databases directly.
This access is simplified because SiteCore is traditional on-premise software, not an externally hosted service. Pricing is based on the number of concurrent users and servers. A single server license starts as low as $15,000, although an average installation runs about $90,000. The vendor provides several days of classes, including about two days for marketing users.
The reasons for CMS vendors to add marketing automation functions are clear: to differentiate themselves and to capture budget now spent on analytical and marketing systems. It makes perfect sense for companies selecting a new CMS to prefer integrated analytics, and in some cases to add integrated marketing automation. It’s less likely that companies will discard an otherwise-satisfactory existing CMS just to get these features. But the normal replacement cycle runs three to five years, according to Guarnaccia, so it won't be long before most marketers find themselves with integrated analytical features and new marketing automation options. Even if marketers don't use all of those features, the possibility will encourage stand-alone marketing automation vendors to improve their own products to keep pace.
I commented last month that more Web content management system (CMS) vendors are adding marketing automation features. One of my examples was SiteCore, so I can’t point to them again as further proof of that assertion. But I did have a good talk last week with SiteCore VP Marketing Darren Guarnaccia, who clarified why this is happening and made a strong case for the integrated approach.
For those of you who (like me) are unfamiliar with SiteCore: it is an eight-year-old provider of Microsoft .NET-based Web content management systems, with over 1,600 mid-to-large-sized customers running more than 20,000 dynamic Web sites, including Sara Lee, Toshiba, Omni Hotels and Dollar Rent-a-Car/Thrifty. In other words, it is a substantial player in a crowded market.
According to Guarnaccia, the company has seen control over the CMS selection process steadily migrate from IT departments to marketers over the past four years. The trend is most pronounced at mid-sized firms, where IT is generally less powerful than at very large companies. During this time, it’s become clear that marketers need features that go beyond editing Web pages, to helping them do a better job of understanding and reacting to customers. SiteCore describes this as closing an “actionability chasm” between analytics and execution.
The chasm is created by the traditional approach of using analytical systems (often sold as externally hosted services) that are separate from the underlying content management system. Capturing detailed information with such systems involves much more than adding one code snippet to a shared page template. At a minimum, each page must be given its own ID and, more realistically, pages must be given multiple tags to facilitate analysis. Companies running several separate analytical systems may need several sets of tags.
The practical result of such an arrangement is that marketers and their Web teams quickly fall behind in their tagging, and end up with incomplete and unreliable analytics. Building analytics into the CMS allows users to avoid some tags altogether and makes it easier to reuse the rest. Integrated analytics also allow the system to track visitors with first-party cookies, which are less likely to be erased than the third-party cookies used by some stand-alone analytical products.
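To make the first-party cookie point concrete, here is a minimal sketch in Python. The visitor identifier and lifetime are my own inventions for illustration, not anything SiteCore actually does:

```python
from http.cookies import SimpleCookie

# Sketch: a CMS serving its own pages can set a first-party cookie on its
# own domain. A stand-alone analytics service hosted elsewhere must rely
# on third-party cookies, which browsers and users delete more readily.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"          # hypothetical visitor identifier
cookie["visitor_id"]["path"] = "/"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
header = cookie.output(header="Set-Cookie:")
```

The point is simply that the cookie is issued by the same domain the visitor is browsing, so it survives the blocking and deletion that third-party cookies attract.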
Integration also makes it easier to coordinate activities such as personalization, behavior-based targeting, and tests. The logic for these potentially overlapping functions can be all managed as part of one Web page definition, rather than separately.
For example, SiteCore supports lead scoring by assigning content scores (for technology, marketing, sales, pricing, tech support, etc.) to each Web page or, potentially, to components within a page. The lead score for each visitor’s interest in each category is the sum of the category scores for all the pages that person has visited. The same information can be used to identify the visitor’s business role or assign a persona.
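A back-of-the-envelope Python sketch of that page-based scoring might look like the following. The page paths and point values are mine, not SiteCore's:

```python
# Hypothetical content scores: each page contributes points to one or
# more interest categories when it is viewed.
PAGE_SCORES = {
    "/pricing": {"pricing": 10, "sales": 5},
    "/docs/api": {"technology": 8},
    "/contact-support": {"tech support": 6},
}

def score_visitor(pages_visited):
    """Sum each category's points over every page the visitor viewed."""
    totals = {}
    for page in pages_visited:
        for category, points in PAGE_SCORES.get(page, {}).items():
            totals[category] = totals.get(category, 0) + points
    return totals

# A repeat visit to /pricing counts twice, suggesting strong pricing
# interest -- exactly the kind of signal used to infer role or persona.
profile = score_visitor(["/pricing", "/docs/api", "/pricing"])
```

Because the scores live on the pages, adding new content with its own scores updates the model automatically, which is the advantage discussed next.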
The advantage of page-based scoring is that the scores adjust automatically to new Web contents. Otherwise, the company must rely on one team of workers to add new content and a separate team to incorporate the new content into the scoring rules.
Guarnaccia offered a Web site marketing maturity model that started with traffic statistics and extended to user experience statistics, content profiling, segmentation, conversion tracking, campaign management, sales enablement (using the IP address to identify visitor location and company), testing and optimization, and real time personalization. He said these are all present at no extra charge within the latest version of SiteCore, which was released at the end of June as the SiteCore Online Marketing Suite. An online demonstration confirmed they are indeed available, and at an impressively high level of sophistication.
SiteCore organizes these features around the individual Web pages. Attributes for each page include interest scores already mentioned, plus goals and other events that are logged to the visitor's history profile when the page is viewed. A page view can also trigger actions including test execution, personalization, data updates, parameter setting, sales alerts, and calls to external scripts. The system also captures the usual Web analytics data such as traffic volume, referring and exit pages, and on-site search terms. It can also use the visitor history to play back the sequence of pages viewed during a Web session.
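In rough pseudocode terms -- with invented attribute names, since I haven't seen SiteCore's internals -- a page definition in this model bundles scores, goals, and triggered actions together:

```python
# Hypothetical page definition in the spirit of the page-centric model:
# everything about the page, including what happens when it is viewed,
# hangs off one object.
PRICING_PAGE = {
    "path": "/pricing",
    "interest_scores": {"pricing": 10, "sales": 5},
    "goals": ["viewed_pricing"],
    "on_view": ["alert_sales", "personalize_banner"],
}

def handle_page_view(page, visitor_profile):
    """Log the view and its goals to the visitor's history profile,
    and return whatever actions the page view triggers."""
    visitor_profile.setdefault("history", []).append(page["path"])
    visitor_profile.setdefault("goals", []).extend(page["goals"])
    return list(page["on_view"])
```

One page view thus simultaneously updates the visitor's profile and fires the associated tests, alerts, or personalization calls.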
This page-centric view of the world makes sense for a CMS vendor, but it's a pretty big switch from the campaign-centric view of most marketers and most marketing automation systems. In fact, the biggest objection to CMS-based marketing automation may be that it assumes everything is centered on the Web site.
Guarnaccia didn’t see it that way. He suggested that marketers will use separate systems for each channel. I think that SiteCore’s main goal is to replace stand-alone Web analytics and personalization systems, not to provide cross-channel marketing automation. Still, the company does plan to move beyond Web marketing by adding outbound email campaigns in a few weeks. It will also support emails triggered by Web page visits.
My own take is that building analytics into the CMS makes sense, but I doubt marketers want new silos in the form of channel-specific marketing systems. If that’s right, SiteCore’s marketing features will be most appealing to companies that interact with customers primarily through the Web. For those firms, the Web site could reasonably be the core customer management system. Systems for other channels would then connect with the Web database in the same way that auxiliary channels are (sometimes) now integrated with a central Customer Relationship Management (CRM) system.
The CMS-based model relates to other industry trends: integration between marketing automation and sales systems, and, more broadly, absorption of marketing automation into operational systems. For companies where the Web site is the primary operational system, these are exactly the same thing. For companies where the Web and CRM are both important independent systems, marketing automation is an ally they may both wish to annex.
For now, though, SiteCore is working to cooperate with CRM rather than replace it. The system can scan IP address registries to identify a visitor’s geographic location and company, and then use the results to route leads, alert sales people, and aggregate data at the company level. SiteCore has built data synchronization for Salesforce.com and Microsoft Dynamics and will add other systems as clients request them. If further integration is needed, other systems can access the SiteCore databases directly.
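That IP-registry lookup works roughly like the following sketch. The registry entries here use reserved documentation address blocks and invented company names, not real assignments:

```python
import ipaddress

# Hypothetical registry mapping network blocks to companies; real systems
# draw on WHOIS/registry data, which is far larger and messier.
REGISTRY = {
    ipaddress.ip_network("203.0.113.0/24"): ("Example Corp", "Boston, US"),
    ipaddress.ip_network("198.51.100.0/24"): ("Acme Ltd", "London, UK"),
}

def identify(ip_string):
    """Return (company, location) for a visitor IP, or None if unknown."""
    ip = ipaddress.ip_address(ip_string)
    for network, (company, location) in REGISTRY.items():
        if ip in network:
            return company, location
    return None  # anonymous visitor: fall back to generic treatment
```

The company-level match is what lets the system route leads and alert sales people even before any individual has registered.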
This access is simplified because SiteCore is traditional on-premise software, not an externally hosted service. Pricing is based on the number of concurrent users and servers. A single server license starts as low as $15,000, although an average installation runs about $90,000. The vendor provides several days of classes, including about two days for marketing users.
The reasons for CMS vendors to add marketing automation functions are clear: to differentiate themselves and to capture budget now spent on analytical and marketing systems. It makes perfect sense for companies selecting a new CMS to prefer integrated analytics, and in some cases to add integrated marketing automation. It’s less likely that companies will discard an otherwise-satisfactory existing CMS just to get these features. But the normal replacement cycle runs three to five years, according to Guarnaccia, so it won't be long before most marketers find themselves with integrated analytical features and new marketing automation options. Even if marketers don't use all of those features, the possibility will encourage stand-alone marketing automation vendors to improve their own products to keep pace.
Thursday, May 15, 2008
Demand Generation Systems Shift Focus to Tracking Behavior
Over the past few months, I’ve had conversations with “demand generation” software vendors including Eloqua, Vtrenz and Manticore, and been on the receiving end of a drip marketing stream from yet another (Moonray Marketing, lately renamed OfficeAutoPilot).
What struck me was that each vendor stressed its ability to give a detailed view of prospects’ activities on the company Web site (pages visited, downloads requested, time spent, etc.). The (true) claim is that this information gives significant insight into the prospect’s state of mind: the exact issues that concern them, their current degree of interest, and which people at the prospect company are involved. Of course, the Web information is combined with conventional contact history such as emails sent and call notes to give a complete view of the customer’s situation.
Even though I’ve long known it was technically possible for companies to track my visits in such detail, I’ll admit I still find it a bit spooky. It just doesn’t seem quite sporting of them to record what I’m doing if I haven’t voluntarily identified myself by registering or logging in. But I suppose it’s not a real privacy violation. I also know that if this really bothered me, I could remove cookies on a regular basis and foil much of the tracking.
Lest I comfort myself that my personal behavior is more private, another conversation with the marketing software people at SAS reminded me that they use the excellent Web behavior tracking technology of UK-based Speed-Trap to similarly monitor consumer activities. (I originally wrote about the SAS offering, called Customer Experience Analytics, when it was launched in the UK in February 2007. It is now being offered elsewhere.) Like the demand generation systems, SAS and Speed-Trap can record anonymous visits and later connect them to personal profiles once the user is identified.
Detailed tracking of individual behavior is quite different from traditional Web analytics, which are concerned with mass statistics—which pages are viewed most often, what paths do most customers follow, which offers yield the highest response. Although the underlying technology is similar, the focus on individuals supports highly personalized marketing.
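The distinction is easy to see in code: the same raw event log supports both views. A hypothetical sketch:

```python
from collections import Counter, defaultdict

# Invented event log: one row per page view, tagged with a visitor ID.
events = [
    {"visitor": "v1", "page": "/home"},
    {"visitor": "v1", "page": "/pricing"},
    {"visitor": "v2", "page": "/home"},
]

# Traditional Web analytics: mass statistics across all visitors.
page_views = Counter(e["page"] for e in events)

# Individual tracking: the same log, pivoted into per-visitor histories
# that can drive personalized marketing.
histories = defaultdict(list)
for e in events:
    histories[e["visitor"]].append(e["page"])
```

Same data, different pivot -- which is why the underlying technology is similar while the marketing uses diverge sharply.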
In fact, the ability of these systems to track individual behavior is what links their activity monitoring features to what I have previously considered the central feature of the demand generation systems: the ability to manage automated, multi-step contact streams. This is still a major selling point and vendors continue to make such streams more powerful and easier to use. But it no longer seems to be the focus of their presentations.
Perhaps contact streams are no longer a point of differentiation simply because so many products now have them in a reasonably mature form. But I suspect the shift reflects something more fundamental. I believe that marketers now recognize, perhaps only intuitively, that the amount of detailed, near-immediate information now available about individual customers substantially changes their business. Specifically, it makes possible more effective treatments than a small number of conventional contact streams can provide.
Conventional contact streams are relatively difficult to design, deploy and maintain. As a result, they are typically limited to a small number of key decisions. The greater volume of information now available implies a much larger number of possible decisions, so a new approach is needed.
This will still use decision rules to react as events occur. But the rules will make more subtle distinctions among events, based on the details of the events themselves and the context provided by surrounding events. This may eventually involve advanced analytics to uncover subtle relationships among events and behaviors, and to calculate the optimal response in each situation. However, those analytics are not yet in place. Until they are, human decision-makers will do a better job of integrating the relevant information and finding the best response. This is why the transformation has started with demand generation systems, which are used primarily in business-to-business situations where sales people personally manage individual customer relationships.
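As a hypothetical illustration of such context-sensitive rules -- the pages and treatments here are invented -- a rule can inspect not just the triggering event but the surrounding session:

```python
# Sketch of an event-driven decision rule: the same event (a pricing-page
# visit) gets different treatments depending on the surrounding context.
def choose_treatment(event, history):
    recent_pages = [e["page"] for e in history[-5:]]
    if event["page"] == "/pricing" and "/case-studies" in recent_pages:
        return "alert_sales"          # high intent: pricing after proof content
    if event["page"] == "/pricing":
        return "send_nurture_email"   # early interest: keep educating
    return "log_only"
```

Multiply this by hundreds of event types and contexts and it becomes clear why a handful of hand-built contact streams can't keep up, and why analytics (or, for now, human judgment) must pick the response.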
Over time, the focus of these systems will shift from simply capturing information and presenting it to humans, to reacting to that information automatically. The transition may be nearly imperceptible since it will employ technologies that already exist, such as recommendation engines and interaction management systems. These will gradually take over an increasing portion of the treatment decisions as they gradually improve the quality of the decisions they can make. Only when we compare today’s systems with those in place several years from now will we see how radically the situation has changed.
But the path is already clear. As increasing amounts of useful information become accessible, marketers will find tools to take advantage of it. Today, the volume is overwhelming, like oil gushing into the air from a newly drilled well. Eventually marketers will cap that well and use its stream of information invisibly but even more effectively—not wasting a single precious drop.
Wednesday, May 23, 2007
Online Marketing Systems Are Still Very Fragmented
What with all the recent acquisitions in the digital marketing industry, I thought I’d draw a little diagram of all the components needed for a complete solution. The results were a surprise.
It’s not that I didn’t know what the pieces were. I’ve written about pretty much every variety of online marketing software here or elsewhere and have reviewed dozens of products in depth. But somehow I had assumed that the more integrated vendors had already assembled a fairly complete package. Now that I’m staring at the list of components, I see how far we have to go.
My diagram is divided into two main areas: traffic generation systems that lead visitors to a Web site, and visitor treatment systems that control what happens once people get there. The Web site itself sits in the middle.
The traffic generation side includes:
- email campaign systems like Responsys and Silverpop;
- search engine marketing like Efficient Frontier and Did-It;
- online advertising like Doubleclick and Tacoda;
- search engine optimization like Apex Pacific and SEO Elite; and
- mobile advertising like Knotice and Enpocket
(There are many more vendors in each category; these are just top-of-mind examples, and not necessarily the market leaders. If you’re familiar with these systems, you’ll immediately notice that search engine optimization is a world of $150 PC software, while all the other categories are dominated by large service providers. Not sure why this is, or if I’ve just missed something.)
The visitor treatment side includes:
- behavioral targeting systems like [x + 1] and Certona (this area overlaps heavily with online ad networks, which use similar technology to target ads they place on other people’s Web sites)
- site optimization and personalization like Offermatica and Optimost;
- Web analytics like Coremetrics and Webtrends; and
- real time interaction management like Infor Epiphany and Chordiant (and, arguably, a slew of online customer service systems)
The Web site that sits in between has its own set of components. These include Web application servers like IBM Websphere, Web development tools like Adobe Dreamweaver, and Web content management tools like Vignette. Although their focus is much broader than just marketing, they certainly support marketing systems and many include marketing functions that compete with the specialized marketing tools.
If you compare this list of applications with the handful of seemingly integrated products, you’ll see that no product comes close to covering all the bases. Demand generation systems like Vtrenz, Eloqua and Manticore combine email with some Web page creation and analytics. Some of the general purpose campaign managers like Unica and SmartFocus combine cross-channel customer management with email, interaction management and analytics. A few of the recent acquisitions (Acxiom Impact / Kefta, Omniture / Touch Clarity, Silverpop / Vtrenz) marry particular pairs of capabilities. Probably the most complete offerings are from platform vendors like Websphere, although those products are so sprawling it’s often hard to understand their full scope.
What this all means is that the wave of consolidations among online marketing vendors has just begun. Moreover, online marketing itself is just one piece of marketing in general, so even a complete online marketing system could be trumped by an enterprise marketing suite. Viewed from another angle, online marketing is also just one component of a total online platform. So the enterprise marketing vendors and the Web platform vendors will find themselves competing as well—both for acquisitions and clients.
Interesting days are ahead.
Labels:
marketing software,
web analytics
Tuesday, May 15, 2007
Are Visual Sciences and WebSideStory Really the Same Company? (As a matter of fact, yes.)
Last week, WebSideStory announced it was going to become part of the Visual Sciences brand. (The two companies merged in February 2006 but had retained separate identities.)
The general theme of the combined business is “real time analytics”. This is what Visual Sciences has always done, so far as I can recall. It’s more of a departure for WebSideStory, which has its roots in the batch-oriented world of Web log analysis.
But what’s really intriguing is the applications WebSideStory has developed. One is a search system that helps users navigate within a site. Another provides Web content management. A third provides keyword bid management.
Those applications may sound barely relevant, but all are enriched with analytics in ways that make perfect sense. The search system uses analytics to help infer user interests and also lets users control results so the users are shown items that meet business needs. Web content management also includes functions that let business objectives influence the content presented to visitors. Keyword bid management is tightly integrated with subsequent site behavior—conversions and so on—so value can be optimized beyond the cost per click.
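That last point is simple arithmetic once site behavior is tied back to keywords. A hypothetical sketch, with made-up numbers and a made-up margin target:

```python
# Bid on expected value per click, not click cost alone.
def max_bid(clicks, conversions, revenue_per_conversion, target_margin=0.5):
    """Cap the keyword bid at the downstream value of a click,
    discounted by the desired profit margin."""
    conversion_rate = conversions / clicks
    value_per_click = conversion_rate * revenue_per_conversion
    return value_per_click * (1 - target_margin)

# A keyword converting 20 of 1,000 clicks at $500 per sale is worth
# $10 per click; at a 50% margin target the bid cap is $5.
cap = max_bid(1000, 20, 500)
```

Without the tie-back to conversions, the bid manager can only minimize cost per click; with it, the bid reflects what a click is actually worth.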
Maybe this is just good packaging, but it does seem to me that Visual Sciences has done something pretty clever here: rather than treating analytics and targeting as independent disciplines that are somehow applied to day-to-day Web operations, it has built analytics directly into the operational functions. Given the choice between plain content management and analytics-enhanced content management, why would anyone not choose the latter?
I haven’t really dug into these applications, so my reaction is purely superficial. All I know is they sound attractive. But even this is impressive at a time when so many online vendors are expanding their product lines through acquisitions that seem to have little strategic rationale beyond generally expanding the company footprint.
Labels:
analytics tools,
marketing software,
web analytics
Thursday, April 12, 2007
Beware Distributed Analytics (You Read It Here First)
What might be called the “standard model” of business intelligence systems boils down to this: many operational sources feed data into a central repository which in turn supports analytical, reporting and operational systems. A refinement of the model divides the central repository into a data warehouse structured for analysis and an operational data store used for immediate access. (Come to think of it, I don’t see the term “operational data store” used much anymore—have I just stopped looking or did I miss the memo that renamed it?)
No one has been attacking this model directly (unless I’ve missed yet another memo), but it does seem to be under some strain. Specifically, much more analytical power is being built into the operational systems themselves, reducing the apparent need for centralized, external business intelligence functions. Examples abound:
- channel-specific analytical systems for Web sites and call centers (see last Thursday's post).
- more extensive business intelligence capabilities built into enterprise software systems like SAP
- rule- and model-driven interaction management components built into customer touchpoint systems
- more advanced testing functions built into operational processes (okay, we haven’t seen that one yet, but I did write about it yesterday).
Generally speaking, more analytical capability in operational systems is a Good Thing. But if these capabilities replace functions that would otherwise be provided by a centralized business intelligence system, they reduce the incremental value provided by that central system and therefore make it harder to justify. The resulting fragmentation of analytics is preferred by many departments anyway because it increases their autonomy.
This fragmentation is a Bad Thing, especially from the perspective of customer experience management. Fragmentation leads to inconsistent customer treatments and to decisions that are optimal for the specific department but not the enterprise as a whole.
The problem is that analysts, consultants, journalists and similar entertainers are always looking for new trends to champion. (Yes, that includes me.) So coining a phrase like “distributed analytics” and calling it the Next Big Thing is immensely attractive. (FYI, a Google search on “distributed analytics” finds 64 hits, compared with 410,000 for “predictive analytics”. So the term is definitely available.)
We must all resist that temptation. A consolidated, centralized data view may seem old-fashioned, but it is still necessary. (Whether the data must be physically consolidated is quite another story—there’s nothing wrong with federated approaches.) By all means, analytical capabilities should be extended throughout the organization. But everyone should be working with the same, comprehensive information so their independent decisions still reflect the corporate perspective.
Wednesday, April 11, 2007
Operational Systems Should Be Designed with Testing in Mind
A direct marketing client pointed out to me recently that it can be very difficult to set up tests in operational systems.
There is no small irony in this. Direct marketers have always prided themselves on their ability to test what they do, in pointed contrast to the barely measurable results of conventional media. But his point was well taken. Although it’s fairly easy to set up tests for direct marketing promotions, testing customer treatments delivered through operational systems such as order processing is much more difficult. Those systems are designed with the assumption that all customers are treated the same. Forcing them to treat selected customers differently can require significant contortions.
This naturally led me to wonder what an operational system would look like if it had been designed from the start with testing in mind. A bit more reflection led me to the multivariate testing systems I have been looking at recently—Optimost, Offermatica, Memetrics, SiteSpect, Vertster. These take control of all customer treatments (within a limited domain), and therefore make delivering a test message no harder or easier than delivering a default message. If we treated them as a template for generic customer treatment systems, which functions would we copy?
I see several:
- segmentation capabilities, which can select customers for particular tests (or for customized treatment in general). You might generalize this further to include business rules and statistical models that determine exactly which treatments are applied to which customers: when you think about it, segmentation is just a special type of business rule.
- customer profiles, which make available all the information needed for segmentation/rules and hold tags that identify customers already tagged for a particular test
- content management features, which make existing content available to apply in tests. Some systems provide content creation as well, although I think this is a separate specialty that should usually remain external.
- test design functions, which help users create correctly structured tests and then link them to the segmentation rules, data profiles and content needed to execute them. These design functions also cover parameters such as date ranges, plus preview features for checking that tests are set up correctly, workflow for approvals, and similar administrative features.
- reporting and analysis, so users can easily read test results, understand their implications, and use the knowledge effectively.
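As a sketch of how the first two items on that list might fit together, here is a minimal rule-based treatment assigner. The segment predicate, test name, variant labels and profile fields are all hypothetical, invented purely for illustration:

```python
import random

# Hypothetical test definitions: each names a segment predicate and its variants.
TESTS = [
    {"name": "headline_test",
     "segment": lambda p: p.get("visits", 0) >= 3,  # returning visitors only
     "variants": ["control", "benefit_headline", "urgency_headline"]},
]

def assign_treatments(profile):
    """Tag a customer profile with a variant for each test whose segment rule matches."""
    for test in TESTS:
        tag_key = "test:" + test["name"]
        if tag_key in profile:          # already tagged for this test; keep the assignment
            continue
        if test["segment"](profile):
            profile[tag_key] = random.choice(test["variants"])
    return profile

profile = {"id": "C123", "visits": 5}
assign_treatments(profile)
print(profile["test:headline_test"])    # one of the three variant labels
```

The tag stored on the profile is what prevents a returning customer from being reassigned mid-test, which is the role the customer profile plays in the list above.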
I don’t mean to suggest that existing multivariate testing systems should replace operational systems. The testing systems are mostly limited to Web sites and control only a few portions of a few pages within those sites. They sit on top of general purpose Web platforms which handle many other site capabilities. Using the testing systems to control all pages on a site without an underlying platform would be difficult if not impossible, and in any case isn’t what the testing systems are designed for.
Rather, my point is that developers of operational systems should use the testing products as models for how to finely control each customer experience, whether for testing or simply to tailor treatments to individual customer needs. Starting their design from the perspective of creating a powerful testing environment should enable them to understand more clearly what is needed to build a comprehensive solution, rather than trying to bolt on particular capabilities without grasping the underlying connections.
Thursday, April 05, 2007
Channel-Specific Analytics Are Doomed: Doomed, I Tell You
Did you ever have one of those crazy dreams, not quite a nightmare, where unrelated things get mixed up together? I felt that way this morning when I was looking at the Web site for one of the mobile marketing systems and saw they had alliances with Web analytics vendors. That rang a bell, but it took a while for me to realize that I had been writing about consolidation in the Web marketing space separately from mobile marketing.
The confusion is compounded by my recent look at non-Web analytics systems, including ClickFox (which gathers interaction logs from call centers and other systems) and Skytide (which gathers all kinds of data; I haven’t written about it yet).
There’s an obvious connection between systems that gather interaction data and those that manage marketing messages. As the Omniture / TouchClarity hookup I mentioned yesterday illustrates, some of the vendors are themselves bringing the two together. It’s no surprise that this would happen for Web systems, which tend to be internally integrated but isolated from other media.
Of course, the Web should not be isolated, and the trend is in fact towards cross-channel integration. Does it make sense, then, for Web analytics vendors to integrate tightly with Web targeting systems? You can see why an analytics vendor would want to do it—as a revenue-generating line extension and a way to help clients who lack an existing targeting solution. But the vendors (and I’m sure Omniture recognizes this) must also make it easy to integrate their systems with any other targeting product. Otherwise, they risk losing sales to prospects who already have a targeting solution and don’t want to change it.
From a broader perspective, though, interaction data from many channels needs to be combined for marketers to do the best job of analysis and targeting. This can be done by physically copying the data into a traditional data warehouse or by using some sort of virtual or federated structure. What’s important is that data from many sources must come together into a single location, where it becomes accessible to many execution systems. In other words—am I beating a dead horse here?—you don’t want direct connections between single-channel source and execution systems, such as Web analytics to Web targeting.
This has technical implications. In the cross-channel scheme, the role of the analytics system is just to gather and reformat data so it can be presented to the central storage facility. The actual analysis would be done in the central system or by a cross-channel analysis system that draws from it. This means that products which combine data gathering and analysis, like current Web analytics systems, need to decouple those functions and build open interfaces to reconnect them. These interfaces would allow users to substitute other products on either side of the relationship. In addition, vendors with specialized data storage technologies might offer a storage component with interfaces at both ends, one to accept feeds from multiple data-gathering systems and the other to allow access by multiple analysis and targeting tools.
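One way to picture that decoupling is as a pair of open interfaces: channel-specific gatherers on one side, a central store with query access for analysis and targeting tools on the other. A minimal sketch follows; the class names and event format are illustrative, not any vendor's actual API:

```python
from abc import ABC, abstractmethod

class DataGatherer(ABC):
    """Channel-specific collector: gathers and reformats events for central storage."""
    @abstractmethod
    def collect(self):
        """Return interaction events in a channel-neutral format."""

class CentralStore:
    """Central facility: accepts feeds from many gatherers, serves many consumers."""
    def __init__(self):
        self.events = []
    def ingest(self, gatherer):
        self.events.extend(gatherer.collect())
    def query(self, channel=None):
        return [e for e in self.events
                if channel is None or e["channel"] == channel]

class WebLogGatherer(DataGatherer):
    def collect(self):
        return [{"channel": "web", "customer": "C1", "action": "page_view"}]

class CallCenterGatherer(DataGatherer):
    def collect(self):
        return [{"channel": "call_center", "customer": "C1", "action": "inquiry"}]

store = CentralStore()
store.ingest(WebLogGatherer())
store.ingest(CallCenterGatherer())
print(len(store.query()))   # 2: both channels visible to any downstream consumer
```

Because either side of the interface can be swapped independently, a new channel means writing one gatherer, not rebuilding the analysis layer.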
This is not an appealing proposition for many vendors. Breaking their systems into components opens them up to more competitors and risks each component appearing to be a commodity. It also eases switching costs, placing further pressure on prices. In general, as I’ve noted many times, vendors seek to expand their footprint and increase integration, not the other way around.
But vendors who specialize in systems for one channel will increasingly find themselves frozen out of multi-channel opportunities. There are already many products that provide multi-channel data storage, analysis and targeting. Data gathering still tends to be channel-specific, but that won’t last as new channels become better understood.
In short, vendors who seek to remain channel specialists are likely to find their business shrinking over time. This may seem like bad news, but the sooner they begin to adjust to it, the better off they’ll ultimately be.
Tuesday, March 20, 2007
Proving the Value of Site Optimization
Eric’s comment on yesterday’s post, to the effect that “There shouldn’t be much debate here. Both full and fractional designs have their place in the testing cycle” is a useful reminder that it’s easy to get distracted by technical details and miss the larger perspective of the value provided by testing systems. This in turn raises the question posed implicitly by Friday’s post and Demi’s comment, of why so few companies have actually adopted these systems despite the proven benefits.
My personal theory is it has less to do with a reluctance to be measured than a lack of time and skills to conduct the testing itself. You can outsource the skills part: most if not all of the site testing vendors have staff to do this for you. But time is harder to come by. I suspect that most Web teams are struggling to keep up with demands for operational changes, such as accommodating new features, products and promotions. Optimization simply takes a lower priority.
(I’m tempted to add that optimization implies a relatively stable platform, whereas things are constantly changing on most sites. But plenty of areas, such as landing pages and checkout processes, are usually stable enough that optimization is possible.)
Time can be expanded by adding more staff, either in-house or outsourced. This comes down to a question of money. Measuring the financial value of optimization comes back to last Wednesday's post on the credibility of marketing metrics.
Most optimization tests seem to focus on simple goals such as conversion rates, which have the advantage of being easy to measure but don’t capture the full value of an improvement. As I’ve argued many times in this blog, that value is properly defined as change in lifetime value. Calculating this is difficult and convincing others to accept the result is harder still. Marketing analysts therefore shy away from the problem unless pushed to engage it by senior management. The senior managers themselves will not be willing to invest the necessary resources unless they believe there is some benefit.
This is a chicken-and-egg problem, since the benefit from lifetime value analysis comes from shifting resources into more productive investments, but the only way to demonstrate this is possible is to do the lifetime value calculations in the first place. The obstacle is not insurmountable, however. One-off projects can illustrate the scope of the opportunity without investing in a permanent, all-encompassing LTV system. The series of “One Big Button” posts culminating last Monday described some approaches to this sort of analysis.
Which brings us back to Web site testing. Short term value measures will at best understate the benefits of an optimization project, and at worst lead to changes that destroy rather than increase long term value. So it makes considerable sense for a site testing trial project to include a pilot LTV estimate. It’s almost certain that the estimated value of the test benefit will be higher when based on LTV than when based on immediate results alone. This higher value can then justify expanded resources for both site testing and LTV.
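The arithmetic behind that claim is easy to sketch. Using entirely made-up figures (the visitor count, conversion rates, order value and repeat-purchase assumptions are all hypothetical), compare the value of the same test lift measured on immediate conversions versus a crude LTV model:

```python
# Hypothetical numbers: value a winning test variant on immediate conversion
# revenue alone versus a simple lifetime-value (LTV) model.
visitors = 10_000
control_cvr, variant_cvr = 0.020, 0.025  # conversion rates
order_value = 80.0                       # immediate revenue per conversion
repeat_orders, retention = 3.0, 0.85     # crude LTV assumptions

def immediate_value(cvr):
    return visitors * cvr * order_value

def ltv_value(cvr):
    # each converter is worth the first order plus retention-weighted repeat orders
    return visitors * cvr * order_value * (1 + repeat_orders * retention)

lift_immediate = immediate_value(variant_cvr) - immediate_value(control_cvr)
lift_ltv = ltv_value(variant_cvr) - ltv_value(control_cvr)
print(round(lift_immediate, 2))   # 4000.0
print(round(lift_ltv, 2))         # 14200.0
```

Whenever converters buy again, the LTV-based lift exceeds the immediate lift by the same multiple, which is exactly the stronger business case for the testing program.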
And you thought last week’s posts were disconnected.
Tuesday, March 13, 2007
SiteSpect Does Web Tests without Tags
I had a long and interesting talk yesterday with Larry Epstein at SiteSpect, a vendor of Web site multivariate testing and targeting software. SiteSpect’s primary claim to fame is that they manage such tests without inserting any page tags, unlike pretty much all other vendors in this space. Their trick, as I understand it, is to use a proxy server, sitting between site visitors and a client’s Web server, that inserts test changes and captures results. Users control changes by defining conditions, such as words or values to replace in specified pages, which the system checks for as traffic streams by.
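As I understand that description, the core mechanic is find-and-replace on pages streaming through the proxy. Here is a toy sketch of the idea; the rule format, URL, and page content are invented for illustration and this is certainly not SiteSpect's actual implementation:

```python
import re

# Hypothetical substitution rules: for pages matching a URL pattern,
# replace the matched content with the assigned variant's content.
RULES = [
    {"url_pattern": r"/landing",
     "find": r"<h1>.*?</h1>",
     "variants": {"A": "<h1>Save 20% Today</h1>", "B": "<h1>Free Shipping</h1>"}},
]

def rewrite(url, html, variant):
    """Apply any matching rule's replacement as the page streams through the proxy."""
    for rule in RULES:
        if re.search(rule["url_pattern"], url) and variant in rule["variants"]:
            html = re.sub(rule["find"], rule["variants"][variant], html, count=1)
    return html

page = "<html><body><h1>Welcome</h1><p>Our products...</p></body></html>"
print(rewrite("/landing?src=ad", page, "B"))
# the <h1> is swapped in transit; the page on the origin server is untouched
```

The point the sketch makes is that the origin pages never change, which is why no tagging is required.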
Even though defining complex changes can take a fair amount of technical expertise, users with appropriate skills can make it happen without modifying the underlying pages. This frees marketers from reliance on the technical team that manages the site. It also frees the process from JavaScript (the core of most page tags), which doesn’t always execute correctly and adds some time to page processing.
This is an intriguing approach, but I haven’t decided what I think of it. Tagging individual pages or even specific regions within each page is clearly work, but it’s by far the most widely used approach. This might mean that most users find it acceptable or it might be the reason relatively few people use such systems. (Or both.) There is also an argument that requiring tags on every page means you get incomplete results when someone occasionally leaves one out by mistake. But I think this applies more to site analytics than testing. With testing, the number of tags is limited and they should be inserted with surgical precision. Therefore, inadvertent error should not be an issue and the technical people should simply do the insertions as part of their job.
I’m kidding, of course. If there’s one thing I’ve learned from years of working with marketing systems, it’s that marketers never want to rely on technical people for anything—and the technical people heartily agree that marketers should do as much as possible for themselves. There are very sound, practical reasons for this that boil down to the time and effort required to accurately transfer requests from marketers to technologists. If the marketers can do the work themselves, these very substantial costs can be avoided.
This holds true even when significant technical skills are still required. Setting up complex marketing campaigns, for example, can be almost as much work in campaign management software as when programmers had to do it. Most companies with such software therefore end up with experts in their marketing departments to do the setup. The difference between programmers and these campaign management super users isn’t really so much their level of technical skill, as it is that the super users are part of the marketing department. This makes them both more familiar with marketers’ needs and more responsive to their requests.
Framing the issue this way puts SiteSpect’s case in a different light. Does SiteSpect really give marketers more control over testing and segmentation than other products? Compared with products where vendor professional services staff sets up the tests, the answer is yes. (Although relying on vendor staff may be more like relying on an internal super user than a corporate IT department.) But most of the testing products do provide marketing users with substantial capabilities once the initial tagging is complete. So I’d say the practical advantage for SiteSpect is relatively small.
But I’ll give the last word to SiteSpect. Larry told me they have picked up large new clients specifically because those companies did find working with tag-based testing systems too cumbersome. So perhaps there are advantages I haven’t seen, or perhaps there are particular situations where SiteSpect’s no-tag approach has special advantages.
Time, and marketing skills, will tell.
Friday, March 02, 2007
Business Intelligence and the One Big Button
Literal-minded creature that I am, yesterday’s discussion of organizing analysis tools around questions led me to consider changing my sample LTV system to open with a list of questions that the system can answer. Selecting a question would take you to the tab with the related information. (Nothing unique here—many systems do this.)
But I soon realized that things like “How much do revenue, marketing costs and product costs contribute to total value?” aren’t really what managers want to know. In fact, they really want just one button that answers the question, “How can I make more money?” This must look beyond past performance, and even beyond the factors that caused past performance, to identify opportunities for improvement.
You could argue that this is where human creativity comes in, and ultimately you’d be correct. But if we limit the discussion to marginal improvements within an existing structure, the process used to uncover opportunities is pretty well defined and can indeed be automated. It involves comparing the results of on-going efforts—things like different customer acquisition programs—and shifting investments from below-average to above-average performers.
Of course, it’s not trivial to get information on the results. You also have to estimate the incremental (rather than average) return on investments. But standard systems and formulas can do those sorts of things. They can also estimate the size of the opportunity represented by each change, so the system can prioritize the list of recommendations that the One Big Button returns.
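The reallocation logic described in the last two paragraphs is simple enough to sketch. Here is a minimal, hypothetical illustration in Python (the program names and figures are invented, not drawn from any real system): compare each program's incremental ROI to the average, recommend shifting money from below-average to above-average performers, and rank the recommendations by estimated opportunity size.

```python
# Hypothetical sketch of "One Big Button" logic: program -> (spend, incremental return)
programs = {
    "email":   (100_000, 180_000),
    "search":  (150_000, 300_000),
    "display": (120_000, 110_000),
}

def recommendations(programs):
    """Rank shift-investment recommendations by estimated opportunity size."""
    # Incremental ROI for each program
    roi = {name: ret / spend for name, (spend, ret) in programs.items()}
    avg = sum(roi.values()) / len(roi)
    recs = []
    for name, (spend, ret) in programs.items():
        gap = roi[name] - avg
        # Opportunity size: ROI gap vs. average, scaled by current spend
        action = "increase" if gap > 0 else "decrease"
        recs.append((name, action, abs(gap) * spend))
    return sorted(recs, key=lambda r: -r[2])

for name, action, size in recommendations(programs):
    print(f"{action} {name}: estimated opportunity ${size:,.0f}")
```

With these invented numbers, the largest opportunity is cutting the below-average display program, which is exactly the kind of prioritized list the button would return.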
Now, if the only thing managers have to do is push one button, why not automate the process entirely? Indeed, you may well do that in some situations. But here’s where we get back to human judgment.
It’s not just that systems sometimes recommend things that managers know are wrong. An automated forecast based on toy sales by week would predict incredible sales each January. Any human (at least in the U.S.) knows the pattern is seasonal. However, this isn’t such a big deal. Systems can be built to incorporate such factors and can be modified over time to avoid repeating an error.
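A toy example (pun intended) of the seasonality point, with invented numbers: a naive forecast carries December's spike straight into January, while even a crude seasonal index corrects it.

```python
# Invented monthly toy sales, Jan..Dec (units in thousands)
monthly_sales = [50, 40, 45, 50, 55, 50, 45, 50, 60, 80, 120, 300]

# Naive forecast: next month simply continues the last observed level.
naive_january = monthly_sales[-1]  # 300 -- "incredible sales each January"

# Seasonal correction: scale each month by its share of the annual average.
avg = sum(monthly_sales) / 12
seasonal_index = [m / avg for m in monthly_sales]
trend_level = monthly_sales[-1] / seasonal_index[11]   # remove December's spike
seasonal_january = trend_level * seasonal_index[0]     # re-apply January's pattern
```

The seasonally adjusted forecast lands back near January's historical level, which is the factor any human (at least in the U.S.) would have applied instinctively.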
The real reason you want humans involved is that looking at the recommendations and underlying data will generate new ideas and insights. A machine can only work within the existing structure, but a smart manager or analyst can draw inferences about what else might be worth trying. This will only happen if the manager sees the data.
I’m not saying any of the ideas I’ve just presented are new or profound. But they’re worth keeping in mind. For example, they apply to the question of whether Web targeting should be based on automated behavior monitoring or structured tests. (The correct answer is both—and make sure to look at the results of the automated systems to see if they suggest new tests.)
These ideas may also help developers of business intelligence and analytics systems understand how they can continue to add value, even after specialized features are assimilated into broader platforms. (I’m thinking here of acquisitions: Google/Urchin, Omniture/Touch Clarity, Oracle/Hyperion, SAP/Pilot, and so on.) Many analytical capabilities are rapidly approaching commodity status. In this world, only vendors who help answer the really important question—vendors who put something useful behind the One Big Button—will be able to survive.

Labels: business intelligence, lifetime value, web analytics
Thursday, March 01, 2007
Users Want Answers, Not Tools
I hope you appreciated that yesterday’s post about reports within the sample Lifetime Value system was organized around the questions that each report answered, and not around the report contents themselves. (You DID read yesterday’s post, didn’t you? Every last word?) This was the product of considerable thought about what it takes to make systems like that useful to actual human beings.
Personally I find those kinds of reports intrinsically fascinating, especially when they have fun charts and sliders to play with. But managers without the time for leisurely exploration—shall we call it “data tourism”?—need an easier way to get into the data and find exactly what they want. Starting with a list of questions they might ask, and telling them where they will find each answer, is one way of helping out.
Probably a more common approach is to offer prebuilt analysis scenarios, which would be packages of reports and/or recommended analysis steps to handle specific projects. It’s a more sophisticated version of the same notion: figure out what questions people are likely to have and lead them through the process of acquiring answers. There is a faint whiff of condescension to this—a serious analyst might be insulted at the implication that she needs help. But the real question is whether non-analysts would have the patience to work through even this sort of guided presentation. The vendors who offer such scenarios tell me that users appreciate them, but I’ve never heard a user praise them directly.
The ultimate fallback, of course, is to have someone else do the analysis for you. One of my favorite sayings—which nobody has ever found as witty as I do, alas—is that the best user interface ever invented is really the telephone: as in, pick up the telephone and tell somebody else to answer your question. Many of the weighty pronouncements I see about how automated systems can never replace the insight of human beings really come down to this point.
But if that’s really the case, are we just kidding ourselves by trying to make analytics accessible to non-power users? Should we stop trying and simply build power tools to make the real experts as productive as possible? And even if that’s correct, must we still pretend to care about non-power users because they often control the purchase decision?
On reflection, this is a silly line of thought. Business users need to make business decisions and they need to have the relevant information presented to them in ways they can understand. Automated systems make sense but still must run under the supervision of real human beings. There is no reason to try to turn business users into analysts. But each type of user should be given the tools it needs to do its job.
Thursday, February 15, 2007
Speed-trap and SAS Promise More Accurate Web Analytics
Here’s an intriguing claim: a February 2 press release from SAS UK touts “SAS for Customer Experience Analytics” as linking “online and off-line customer data with world class business intelligence technology to deliver new levels of actionable insight for multi-channel organisations.” But the release and related brochure make clear that the heart of the offering is Web analytics technology from a British firm named speed-trap.
Speed-trap employs what it says is a uniquely accurate technology to capture detailed customer behavior on a Web site. It works by providing a standard piece of code in each Web page. This code activates when the page is loaded and sends a record of the displayed contents to a server, where it is stored for analysis.
Although this sounds similar to other page tagging approaches to Web data collection, speed-trap goes to great lengths to distinguish itself from such competitors. The advantage it promotes most aggressively is that the same tag is used in all pages, and this tag does not need to pre-specify the attributes to be collected. Yet this is not truly unique: ClickTracks also uses a single tag without attribute pre-specification, and there may be other vendors who do the same.
But there does seem to be something more to the speed-trap solution, since speed-trap captures details about the contents displayed and user events such as mouse clicks. My understanding (which could be wrong) is that ClickTracks only records the URL strings sent by site visitors. The level of detail captured by speed-trap seems more similar to TeaLeaf Technology, although TeaLeaf uses packet sniffing rather than page tags.
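To make the “no pre-specified attributes” idea concrete, here is my own conceptual sketch (not speed-trap’s actual implementation): a collector that simply stores whatever payload a generic page tag happens to send, so new page elements and event types flow through without any schema changes.

```python
import json

# Conceptual illustration only: the collector accepts arbitrary payloads,
# so the tag never needs to pre-declare which attributes it will capture.
store = []

def collect(payload_json):
    """Store a raw event payload exactly as the page tag sent it."""
    event = json.loads(payload_json)
    store.append(event)
    return event

# The same generic tag on any page could post displayed contents plus user events:
collect(json.dumps({
    "page": "/products/widget",
    "displayed": ["hero-banner", "price-table"],
    "events": [{"type": "click", "target": "buy-button"}],
}))
```

The contrast with a URL-only log should be clear: the payload carries what the visitor actually saw and did, not just which page was requested.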
Speed-trap’s white paper “Choosing a data collection strategy” provides a detailed comparison against page tagging and log file analysis. As with any vendor white paper, its description of competitive products must be taken with a large grain of salt.
Back to the SAS UK press release. Apart from speed-trap, it seems that what’s being offered is the existing collection of SAS analytical tools. These are fine, of course, but don’t provide anything new for analysis of multi-channel customer experiences. In particular, one would hope for some help in correlating activities across channels to better understand customer behavior patterns. Maybe it’s just that I’ve been drinking our Client X Client Kool-Aid, but I’d like to see channel-independent ways to classify events—gathering information, making purchases, searching for support, etc.—so their purpose and continuity from one channel to another become more obvious. Plus I think that most people would want real-time predictions and optimized recommendations as part of “actionable insight”—something that is notably lacking from SAS’s description of what its solution provides.
Bottom line: speed-trap is interesting, but this is far from the ultimate analytical offering for multi-channel customer experience management.