You probably saw ExactTarget’s June 13 announcement of its strategic partnership with Marketo and Eloqua’s June 21 announcement of its new AppCloud marketplace for connectors with other systems. So did I. But it took a little while to connect with the vendors to get the details, so I’m only now ready to write about them.
Both announcements shared a theme of integration between core marketing platforms and other marketing systems. That Eloqua sees itself as the center of a marketing infrastructure isn’t surprising, although it does show how far we've traveled from the once-common view of marketing automation as an auxiliary to the sales automation “system of record”. ExactTarget’s aspiration to a central role was less expected, since its original and still primary business is email delivery. But ExactTarget has added mobile, Web pages, and social in recent years. It has been pulling these together with an “Interactive Marketing Hub”, in beta since last September and now used by 500 of its 4,000 clients. The IMH, as we cognoscenti call it, combines ExactTarget's email, mobile, Web pages, Web visitor tracking, and social media with external touchpoints as well as Salesforce.com and Microsoft CRM.
The IMH sports a slick user interface with a very nice dashboard showing real-time updates of summary statistics for each channel. It also provides a central marketing calendar of campaigns across the channels. The underlying database can be simple lists, as in a traditional email system, or a proper multi-table structure acting as the primary marketing database. As Captain Planet used to say, The Power Is Yours.
It’s perfectly sensible for ExactTarget to move in this direction, since it otherwise risks being pushed to the unprofitable edges of the marketing world as a commodity email engine. In fact, the real head-scratcher was why ExactTarget would deal with Marketo if it had ambitions to occupy the same central turf. (Marketo’s motivation is obvious: to gain broader distribution.)
ExactTarget’s answer was refreshingly honest: IMH lacks key B2B marketing automation features including lead scoring, advanced segmentation, and multi-step campaigns. The campaign engine will be improved before IMH's official launch this September, but other specialized B2B features probably won’t be added. ExactTarget also sees Marketo as the first of many partner applications for IMH, further clarifying that they see it in the central position.
Eloqua’s AppCloud is obviously modeled on Salesforce.com’s AppExchange and other application stores. The goal is for third parties to extend the value of a core platform by building tools that enhance it. In Eloqua’s case, most of the initial applications are connectors with other systems for Webinars, social communities, messaging and data acquisition. These will be joined over time by apps that add functionality within Eloqua itself. The AppCloud is an extension of Eloqua’s earlier Cloud Connector initiative, which provides APIs for external systems to access Eloqua data and functions. Basically, AppCloud makes it easier to find and deploy those connectors.
I did ask Eloqua how AppCloud relates to its Revenue Performance Management positioning. This felt like a pretty clever question until I later saw it was addressed in the AppCloud press release. Oh well. The answer came smoothly enough: AppCloud makes it easier to gather the activity data needed for Revenue Performance Management analysis. That makes sense, although AppCloud implies a more active integration with external systems than simply reporting against them.
Both the ExactTarget and Eloqua announcements reflect a strategy of positioning their products as a company’s primary customer management system. If you recall my post last week on Adobe and Oracle announcements, those firms also wanted to place themselves at the center of the customer management universe. So does pretty much everyone else.
Obviously they all can’t win this game. At the end of the day, I’d still put my money on the big CRM systems as the logical central repository for customer data. But I do believe that many auxiliary systems will continue to feed data to the central system and somehow coordinate treatment decisions with it. Connectors created to service ExactTarget, Eloqua, and others will make it easier to integrate the peripheral systems with whichever product ends up in the middle. So it’s all good.
Wednesday, June 22, 2011
Dueling Strategies: Adobe and Oracle Take Opposite Paths to Customer Experience Management
Adobe on Monday announced a new “Digital Enterprise Platform for Customer Experience Management”. The platform fills the center of Adobe’s three-part corporate mission to “make, manage, and measure” digital content and experiences. The other two pieces were already in place: “make” is Adobe’s original content creation business, while “measure” is Omniture Web analytics.
The strategic significance of the announcement seems more important than the actual product enhancements. These include improved integration of the company’s Web content management system (formerly Day CQ5) with Scene7 dynamic content and Omniture Survey and Test & Target; features for salespeople and customer service agents to customize standard documents in a controlled fashion; integrated content reviews and workflows; and a platform to build and share content in multiple formats. The announcement also included beta versions of tools for social engagement, online enrollment, and agent workspaces. Good stuff but nothing earth-shaking.
Adobe's strategy itself is a curious mixture of broad ambition and narrow execution. Adobe describes its scope as nothing less than optimizing customer experience and marketing spend across the entire customer journey, from first learning about a company through validation, purchase decision, product use, and commitment. But Adobe also explicitly limits its scope to digital channels, and implicitly limits its concern to content creation, delivery, and evaluation. In fact, the only customer-facing technology Adobe offers is Web site management. Otherwise, Adobe expects even digital content such as emails to be delivered by third party products. Offline interactions, such as telephone and retail, are definitely out of the picture. Nor does Adobe manage the underlying customer database, marketing campaigns, or deep analytics. The only exceptions are customer profiles, segmentation, and content to support Web personalization.
The company argues the tools it does provide, combined with the cross-channel content sharing, are enough to build a unified digital customer experience. I’m not so sure that’s correct, and even if it is, I question whether customers will be happy to have only their digital experiences be unified. Either way, marketers will certainly need other vendors' products to manage their full customer relationships.
On the other hand, I do agree with Adobe’s argument that its approach lets clients create a unified digital experience without replacing their entire enterprise infrastructure. This is certainly an advantage.
Adobe’s announcement was released on Monday, but I didn’t get around to writing about it until today. The delay is unfortunate, since the attention of the enterprise marketing automation world has already shifted to yesterday’s announcement that Oracle is acquiring Web “experience” management vendor FatWire Software. I’m not sure I accept “Web experience management” as a legitimate software category, but FatWire does combine conventional Web content management with unusually strong targeting, personalization, content analytics, digital asset management, mobile, and social features. Perhaps that justifies calling it more than plain old Web content management.
The strategic purpose of the FatWire acquisition is self-evident: to fill a gap in Oracle’s customer-facing technologies, which already had ATG ecommerce and general Enterprise Content Management for Web sites, as well as Oracle CRM and Oracle Loyalty. (Oracle isn’t very creative with product names.) FatWire will allow much richer, more personalized and targeted Web site interactions. It also provides some Web analytics, although I still think Oracle has a gap to fill there.
The Oracle and Adobe announcements do highlight a clear strategic contrast. Adobe has largely limited itself to digital interactions, and has largely avoided customer-facing systems except for Web sites. Oracle has embraced the full range of online and offline interactions, including customer-facing systems in every channel. Oracle has also hedged its bets a bit with Real Time Decisions, which can coordinate customer treatments delivered by non-Oracle systems and powered by non-Oracle data sources. Of the other enterprise-level marketing automation vendors, IBM, SAS and Teradata share Adobe's focus on digital channels and its avoidance of customer-facing systems, although they resemble Oracle in offering deep analytics and customer database management.
Based on my fundamental rule that “suites win”, I think Oracle’s strategy is more likely to succeed. But only time will tell.
Monday, June 20, 2011
How Do You Measure the Influence of Marketing Messages?
My review of Coremetrics Lifecycle raised the issue of measuring the impact of marketing materials on customer behavior. Of course, this is just one piece of the marketing attribution puzzle. But it’s worth a separate discussion because it’s such a common question – and, unlike so many measurement problems, this one actually has an answer.
Let’s start with the original impetus. This was an “influence” report that showed the percentage of people reaching a marketing stage who had received specific marketing treatments (or had other attributes such as source, product history, demographic, etc.). The idea was that treatments received by a higher percentage of customers were more influential. In other words, if 100% of new buyers saw a white paper offer and just 50% saw a Webinar invitation, then the white paper has more influence than the Webinar.
Plausible, yes. But wrong.
Let’s think through the example. What if the white paper is offered to everyone? Yes, 100% of new buyers saw it, but so did 100% of non-buyers. We know exactly nothing about whether it made its recipients more or less likely to purchase.
Now, let’s say just 10% of prospects see the Webinar invitation, compared with 50% of buyers. Can we say it has a positive influence? Still no: maybe the Webinar attracts hot prospects who would have purchased anyway. It’s even possible that the Webinar offer annoys people and actually reduces purchase rates. You can’t tell from these figures.
In other words, it’s not enough to know what was seen by customers who became buyers (or, more generally, by people who took any particular action). You also need to know what was seen by non-buyers and, ideally, to compare results for groups that are similar except for that particular treatment.
So, what measures do make sense for assessing influence?
- the simplest measure compares the result rate of treated customers with results for non-treated customers. You might find that 20% of people who received a white paper became buyers, compared with 10% of people who didn’t receive it. These two figures can be combined in a single ratio: 20% of treated / 10% of non-treated = 2.0. The higher the ratio, the more it seems that receiving the white paper increased the likelihood that someone would purchase. But it’s no more than a suggestion: maybe the white paper was sent to people who were stronger prospects to begin with.
- a more advanced measure adjusts for the audience by attempting to limit the non-treated group (e.g., non-buyers) to customers similar to the target group. This could be done by building a statistical model that uses all other attributes to predict behavior. Or, you could apply lead scores or funnel stage definitions. Whatever the technique, the result is to divide the audience into groups that are expected to behave similarly. The calculation would then compare results of treated vs. non-treated customers within each group. So, a report might find that 40% of “stage 3 leads” (whatever they are) made a purchase after attending a Webinar, while just 15% of “stage 3 leads” made a purchase if they didn't attend a Webinar. Again, the treated and non-treated figures could be combined in a ratio (40% / 15% = 2.7). A sketch of both calculations appears after this list.
- of course, the only true measure is a structured test. This ensures that the only difference between the treated and non-treated groups is the treatment itself. Without such tests, there's a good chance that the customers selected for treatment would have performed differently in any event.
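To pin down the arithmetic, here is a minimal sketch in Python of the two ratios described above. The counts are the illustrative figures from the text, and the function name is my own invention, not anything from a vendor's product.

```python
def lift_ratio(treated_buyers, treated_total, untreated_buyers, untreated_total):
    """Result rate of treated customers divided by the rate for non-treated customers."""
    treated_rate = treated_buyers / treated_total
    untreated_rate = untreated_buyers / untreated_total
    return treated_rate / untreated_rate

# Simple measure: 20% of white paper recipients became buyers vs. 10% of non-recipients.
print(round(lift_ratio(200, 1000, 100, 1000), 1))   # 2.0

# Audience-adjusted measure, computed within one similar-behavior group
# ("stage 3 leads"): 40% bought after attending a Webinar vs. 15% without it.
print(round(lift_ratio(40, 100, 15, 100), 1))        # 2.7
```

Applying the same calculation separately within each group is all the second measure adds; only a structured test, as in the last bullet, makes the resulting ratio trustworthy.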
A proper reporting system would present the ratios along with actual result rates, trends over time, the number of customers receiving each treatment, and comparisons with ratios for other treatments. These figures help marketers focus their energies on the most valuable opportunities. Still, the starting point is always a comparison of treated vs. non-treated performance: without that, the numbers could mean anything.
Thursday, June 09, 2011
Swyft Offers Low-Cost Interaction Management Software as a Service
Summary: Swyft offers a Software-as-a-Service real-time interaction manager. It costs less than traditional versions of those products but has similar features.
Last month’s post on Oracle Real Time Decisions offered a brief overview of real-time interaction management products. I won’t repeat that here, except to summarize that these systems use data from multiple source systems to feed centrally-managed, real-time decisions to multiple touchpoints. The most common application has probably been product recommendations in customer service call centers, where there’s a substantial opportunity to sell something to a customer once you’ve solved their problem. Another frequent use has been selecting offers on Web sites, such as the familiar book recommendations on Amazon.com.
You’ll note that both of these are single-channel examples. That may seem odd, since coordinating treatments across channels is a key selling point. I believe the explanation is that most buyers purchase interaction management systems to get more powerful decision engines than those provided with their call center and Web site products.
Indeed, effective interaction management requires a sophisticated mix of predictive modeling, business rules, flow management, response capture, data integration, real-time processing, simulation, and analytics. The simple scripting and personalization engines built into call center and Web products don't provide all this. Equally important, the results of an interaction management deployment are immediately and precisely measurable – so it’s clear when one product works better than another. This means specialist vendors with superior products have a good chance to survive.
But you’ll also notice that these products don’t have many customers. I haven’t done a proper census but doubt there are five hundred implementations among all vendors combined. One reason is the sophistication itself: only a highly knowledgeable set of users can deploy the required rules and models effectively. Another is cost: you’re looking at the price of a 50-foot yacht (about a quarter million dollars if you haven’t bought one lately), plus a sister ship or two for implementation. Few firms have the resources and business volume needed to justify this expense.
(Alternate interpretation: the tools built into standard call center and Web applications are pretty good, so dedicated interaction managers offer only a small percentage gain. A company must be quite large for this to cover the interaction manager's cost.)
Swyft provides a low-cost alternative – more like a 30 footer (around $100,000).
The comparison is inexact because traditional interaction management systems are sold as licensed on-premise software, while Swyft is a Software-as-a-Service product, billed monthly. Pricing for agent-based applications (call centers, field sales, etc.) runs about one dinghy per user ($50 to $80 per month). But even small clients buy a fleet of 100 or more. Web site applications are priced on number of customers but come to roughly the same total.
Implementation is around $15,000 to $25,000, with data connections handled through standard Web Services. The company says a typical deployment takes 30 to 90 days, usually closer to 30.
Functionally, Swyft offers a pretty full set of interaction management capabilities. Decision rules can take into account capacity constraints such as call center workload; customer propensities; current and previous interactions; channel distinctions; offer eligibility; and event-based triggers. Interactions can kick off complex back-end workflows for follow-up treatments.
Call center integrations monitor agent activities and flash an alert if the system has an offer to make. The system then guides the agent through transition statements, probing questions, objections, offers, closing statements, and disposition capture. It can present different messages depending on the agent’s skill level. Web site implementations can present offers, collect data, and run champion/challenger and multivariate tests. The system will automatically adjust offer frequencies based on test results.
One feature that Swyft lacks is built-in predictive modeling. The company says it has found that most clients already have models in place. Rules can use model scores as inputs.
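Swyft hasn't published its rule syntax, so purely to illustrate the pattern described here – rules that consume externally-computed model scores alongside eligibility and capacity constraints – here is a hedged sketch in Python. Every offer, field, and function name is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    name: str
    min_score: float        # propensity score threshold from the client's own model
    eligible_channels: set  # channels where the offer may be presented
    daily_capacity: int     # e.g., a call-center workload constraint

def choose_offer(offers, customer_score, channel, presented_today):
    """Return the first offer whose rules the customer satisfies, or None.

    customer_score comes from an external predictive model; the interaction
    manager only consumes it as a rule input.
    """
    # Check higher-threshold (richer) offers first.
    for offer in sorted(offers, key=lambda o: o.min_score, reverse=True):
        if (customer_score >= offer.min_score
                and channel in offer.eligible_channels
                and presented_today.get(offer.name, 0) < offer.daily_capacity):
            return offer
    return None

offers = [
    Offer("premium upgrade", 0.7, {"call center"}, 200),
    Offer("newsletter signup", 0.2, {"call center", "web"}, 10_000),
]
print(choose_offer(offers, customer_score=0.75, channel="call center",
                   presented_today={"premium upgrade": 150}))
```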
Like other interaction managers, Swyft relies primarily on data stored in external systems. Again like other products, it creates its own database of offers made and responses received for each customer. Less typically, it also stores marketing content internally and provides a content builder to create it. The system can import and store additional information if real-time access is not appropriate.
The current version of Swyft lacks an interface that lets business users create their own rules. The company addresses this largely by doing the work for its clients, providing a “concierge” service that includes content and rule management as part of the base price. Clients do have the option to do this work for themselves; the company says it can be done after a couple weeks of training. A simpler end-user interface is planned for future development.
Swyft was founded in 2004 and launched its product in 2006. It has about ten clients spread among financial services, insurance, communications and media. The largest are mid-sized firms with several million customers. Intriguingly, the company offers its product on the Salesforce.com AppExchange, including a smartphone-enabled version that can use geolocation to identify a salesperson’s current location and recommend the most efficient prospects to visit. It has not yet deployed this at an actual client.
Wednesday, June 08, 2011
Coremetrics Offers a Foggy View of Lifecycle Analysis
I stumbled over an AdExchanger interview yesterday with John Squire, the Chief Strategy Officer of IBM Coremetrics. It first caught my eye because the headline read “IBM’s Vision for the Marketer”, which is always a topic of interest. Then I noticed it was touting a new reporting feature called Coremetrics Lifecycle, which the company describes as “the industry’s first application geared to enable online marketers to track and understand how customers progress through long-term conversion lifecycles.”
This was intriguing. On one hand, I’ve seen plenty of systems that track customers through the buying process, including Eloqua, Marketo, LeadFormix, ClearSaleing, C3 Metrics, and Encore Media Metrics. So the claim to be first is questionable. But, on the other hand, seeing another vendor offer this sort of analysis reinforces the importance of the concept.
But a closer look at Lifecycle itself was disappointing. The product does allow tracking of individual Web site visitors over time, which is the foundation of lifecycle analysis. But, in my opinion, a lifecycle tracking system reports on movement of customers across stages within the lifecycle. That is, it shows conversions from one stage to the next. This implies reports that show the previous stages of customers who enter a new stage (“where they came from”), and show the destinations of customers who leave a stage (“where they went”). These are typically represented as a matrix showing all combinations of previous and current stages, or a flow chart that highlights the most common before-and-after pairs.
Lifecycle does none of this. Rather, it lets users define any number of segmentation schemes and count the number of customers in each segment. It does report how many customers entered each segment during a specified time period, but not where they came from. In fact, there is no requirement for a logical progression from one segment to the next, which to me is what a lifecycle implies.
Lifecycle has some other useful features. It can report on the most common marketing treatments received by people who moved into a segment, giving some insight into treatment effectiveness. It calculates the average number of days and Web sessions that customers spend in a segment, which is a limited velocity measure. It also lets users select segment members and send them messages through Coremetrics’ products for email, display ad retargeting, and Web site personalization, although it's not clear the process can be automated.
But a proper lifecycle analysis tool would go much further. It would calculate the end-to-end completion rates, show the drop-off from one stage to the next, estimate the incremental impact of specific treatments, project future segment counts, and show changes in these measures over time. So while I’m pleased that Coremetrics is promoting the concept of lifecycle analysis, I’m disappointed that its product doesn’t deliver a real lifecycle measurement solution.
Addendum - June 19, 2011
After the original post and IBM's comment on it, I reviewed the Lifecycle product with the Coremetrics team. This uncovered no substantive errors in the original post, although a couple of points could have been stated more clearly.
- the system supports two types of lifecycles, one requiring that customers progress through the stages in sequence and one that does not. Users specify the type when they set up a new lifecycle. In both cases, the stages are defined by selection rules created by the user.
- there is a limit of six stages per lifecycle.
- for sequential lifecycles, the system will warn the user if the selection rules are not inherently sequential. (An inherently sequential rule might be based on the number of purchases made; you can't make three purchases without having previously made two. Other stage definitions, such as downloading a white paper or leaving a comment, might come in any order and, therefore, are not inherently sequential.)
- in a sequential lifecycle, the system will not allow customers to advance outside of sequence even if the definitions would allow it. Nor does it report on customers who would qualify for a later stage but cannot reach it because they didn't qualify for a previous one.
- the system's primary report shows the number of customers within each stage during a specified date range. Think of this as an inventory. A "Migrator" report shows how many customers entered their current stage during the report period: for example, there were 500 customers in stage 3, of whom 200 first entered stage 3 during this period. This gives some sense of movement, but it's not the classic funnel analysis showing the percentage of customers in each stage who eventually move to the next stage.
- users can run the standard reports against "segments", which could be defined as anything, including a cohort of customers who entered the system during a specified time period. A Lifecycle inventory report for such a cohort would show how many customers reached each stage and got no further. This is the information needed to build a classic funnel analysis, although users would have to extract the data and manipulate it to produce an actual funnel report; a sketch of that calculation appears after this list. This would be done outside of Coremetrics, because there is no end-user report writer.
- reports show the average number of days and Web sessions it takes customers to reach each stage (i.e., since they first entered the system), not the number of days and sessions spent in each stage, as I wrote originally.
- users do have the option to create a recurring process that automatically selects customers in a particular stage and sends them an email or other message. The system could apply a few rules to this process, such as eliminating people who had been selected previously. But more sophisticated controls would be handled outside of Coremetrics, in the message delivery system.
- the system can profile customers in each stage against many attributes (products purchased, geography, social network membership, etc.) in addition to marketing contents received. But, as I wrote originally, the reporting only shows the percentage of customers in each stage who match a particular attribute: this is far from measuring influence, for reasons I'll explain in a future post.
- we confirmed that the system doesn't do projections of future inventory counts, report on out-of-sequence customer movements, or allow customers to migrate backwards into lower-ranked stages (as might happen if stages were based on recency or ratios).
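As an illustration of the extract-and-manipulate step mentioned above, here is a minimal sketch in Python that turns a cohort's stage inventory counts (customers who reached each stage and got no further) into classic funnel conversion rates. The stage names and counts are invented for illustration.

```python
# Hypothetical Lifecycle inventory for one cohort, exported from the reporting tool:
# customers who reached each stage and got no further.
inventory = {"stage 1": 600, "stage 2": 250, "stage 3": 100, "stage 4": 50}

stages = list(inventory)
# Customers who reached a stage = those stalled there plus everyone who went further.
reached = {s: sum(inventory[t] for t in stages[i:]) for i, s in enumerate(stages)}

for earlier, later in zip(stages, stages[1:]):
    rate = reached[later] / reached[earlier]
    print(f"{earlier} -> {later}: {rate:.0%} of {reached[earlier]} advanced")
```

Coremetrics itself reports only the inventory counts; the "reached" totals and stage-to-stage conversion rates are the part users would have to compute outside the system.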
I'm happy to have clarified these matters but none of this changes my original assessment: Lifecycle is a useful product that falls far short of serious life stage analysis.
Thursday, June 02, 2011
Oracle Integrates On Demand Marketing with On Demand CRM
Summary: Oracle has integrated marketing automation with its on-demand CRM product. Will competitors do the same?
If I were more on the ball, I would have noticed that May 25 marked a full year since Oracle bought the intellectual property* of high-end B2B marketing automation vendor Market2Lead. I was actually briefed on May 17 by the Oracle team handling the resulting product but hadn’t noticed that the anniversary was approaching. I wonder if they had cake?
I hope so, since they’ve clearly been working hard. In February they released the first “Oraclized” version of the product, now Oracle CRM On Demand Marketing. This included a redesigned user interface that matches the look, feel, and terminology of Oracle’s on-demand CRM product, is available in the same 20 languages, and allows unified user IDs and log-ins.
The new system uses the same data structures as the rest of Oracle CRM On Demand. It shares many physical tables as well, but keeps the major contact tables separate yet synchronized. This is largely because Marketing systems typically contain many more leads than Sales wants in CRM. Marketing systems also run large, complex queries that would interfere with CRM performance if both systems used the same physical files.
The unified data structure allows unified reporting across the customer buying process. Although Oracle doesn’t use the term, this is very consistent with the revenue management concepts described by other B2B marketing automation vendors. Oracle’s new Marketing system supports this with a separate analytical database, which is essential for advanced revenue reporting such as time-series analysis.
Except for improved reporting, the features of the new Oracle product are pretty much the same as the old Market2Lead, which I last reviewed two years ago. This was, and remains, one of the most powerful in the industry. Important capabilities include “adaptive” program flows, which vary depending on customer behavior; advanced Web pages and forms; automated content recommendations; and several types of asset templates. These are mostly relevant to large companies, which fits nicely with Oracle’s customer base.
In fact, most On Demand Marketing sales are part of an Oracle CRM On Demand installation, either at current Oracle CRM users or at clients buying both CRM and marketing automation simultaneously. It doesn’t hurt that On Demand Marketing is the only choice for companies who want a Software-as-a-Service marketing system from Oracle. The company’s other big marketing automation product, Siebel Marketing, is on-premise software.
All this makes Oracle a poster child of sorts for the proposition that CRM and marketing automation should be part of a single system. I’ve argued this for a long time but it’s still a minority view, especially (and for obvious reasons) among the vendors of stand-alone marketing automation products. Oracle itself has announced an initiative for “cross channel customer experience management” which incorporates its products for CRM, marketing, loyalty, real-time decisions, and ecommerce.
It will be interesting to see whether Oracle’s integrated marketing-plus-sales product leads its not-so-friendly competitors at Salesforce.com to respond with an integrated solution of their own. Or, more broadly, whether IBM sticks to its position that it doesn’t need a CRM product to dominate the integrated marketing world. The B2B marketing automation vendors are definitely mice at an elephant dance. It’s a dangerous but exciting position.
____________________________________________________
* It wasn’t an outright acquisition because Salesforce.com wouldn’t permit integration with an Oracle-owned product. So existing Market2Lead clients remained with the old company until it could migrate them to other marketing platforms. Once this is done, Market2Lead will complete its shutdown.