Showing posts with label marketing measurement. Show all posts

Monday, February 19, 2018

How Customer Data Platforms Help with Marketing Performance Measurement

John Wanamaker, patron saint of marketing measurement.
If you’ve been following my slow progress towards a set of screening questions for Customer Data Platforms, you may recall that “incremental attribution” was on the list. The original reason was that some of the systems I first identified as CDPs offered incremental attribution as their primary focus. Attribution also seemed like a specific enough feature that it could be meaningfully distinguished from marketing measurement in general, which nearly any CDP could support to some degree.

But as I gathered answers from the two dozen vendors who will be included in the CDP Institute's comparison report, I found that at best one or two provide the type of attribution I had in mind. This wasn't enough to include in the screening list. But there was an impressive variety of alternative answers to the question. Those are worth a look.

- Marketing mix models.  This is the attribution approach I originally intended to cover. It gathers all the marketing touches that reach a customer, including email messages, Web site views, display ad impressions, search marketing headlines, and whatever else can be captured and tied to an individual. Statistical algorithms then look at customers who had a similar set of contacts except for one item and attribute any difference in performance to that item.  In practice, this is much more complicated than it sounds because the system needs to deal with different levels of detail and intelligently combine cases that lack enough data to treat separately.  The result is an estimate of the average value generated by incremental spending in each channel. These results are sometimes combined with estimates created using different techniques to cover channels that can’t be tied to individuals, such as broadcast TV. The estimates are used to find the optimal budget allocation across all channels, a.k.a. the marketing mix.

- Next best action and bidding models.  These also estimate the impact of a specific marketing message on results, but work at the individual rather than the channel level. The system uses a history of marketing messages and results to predict the change in revenue (or other target behavior) that will result from sending a particular message to a particular individual. One typical use is deciding how much to bid for a display ad impression; another is to choose products or offers to make during an interaction. They differ from incremental attribution because they create separate predictions for each individual based on their history and the current context. Several CDP systems offer this type of analysis.  But it’s ultimately not different enough from other predictive analytics to treat it as a distinct specialty.

- First/last/fractional touch.  These methods use the individual-level data about marketing contacts and results, but apply fixed rules to allocate credit.  They are usually limited to online advertising channels.  The simplest rules are to attribute all results to either the first or last interaction with a buyer.  Fractional methods divide the credit among several touches but use predefined rules to do the allocation rather than weights derived from actual data.  These methods are widely regarded as inadequate but are by far the most commonly used because alternatives are so much more difficult.  Several CDPs offer these methods. 

- Campaign analysis. This looks at the impact of a particular marketing campaign on results. Again, the fundamental method is to compare performance of individuals who received a particular treatment with those who didn’t. But there’s usually more of an effort to ensure the treated and non-treated groups are comparable, either by setting up a/b test splits in advance or by analyzing results for different segments after the fact. The primary unit of analysis here is the campaign audience, not the specific individuals. The goal is usually to compare results for campaigns in the same channel, not to compare efforts across channels. This is a relatively simple type of analysis to deliver since it doesn’t require advanced statistics or predictive techniques. As a result, it’s fairly common and could be delivered by many systems even without the vendor creating special features to do it.

- Content performance analysis. This is very similar to campaign analysis except that audiences are defined as people who received a particular piece of content, which could be used across several campaigns. Again, there might be formal split tests or more casual comparison of results. Some implementations draw broader conclusions from the data by grouping content with similar characteristics such as product, message, or offer. But unless the groups are identified using artificial intelligence, even this doesn’t add much technical complexity.

- Journey analysis. Truth be told, no vendor in my survey described journey analysis as a type of incremental attribution. But it does come up in some discussions of marketing measurement and optimization. Like marketing mix and next best action methods, journey analysis examines individual-level interactions to find larger patterns and to identify optimal choices for reaching specified goals. But it looks much more closely at the sequence of events, which requires different technical approaches to deal with the higher resulting complexity.
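The fixed-rule methods described above (first, last, and evenly-weighted fractional touch) are simple enough to sketch in a few lines. This is a minimal illustration with invented journey data and a hypothetical function name, not any vendor's implementation:

```python
# Minimal sketch of fixed-rule attribution: first-touch, last-touch,
# and evenly-weighted fractional credit. Channel names and revenue
# figures are invented for illustration.
from collections import defaultdict

def allocate_credit(touchpoints, revenue, rule="fractional"):
    """Split revenue across an ordered list of channel touches."""
    credit = defaultdict(float)
    if rule == "first":
        credit[touchpoints[0]] += revenue
    elif rule == "last":
        credit[touchpoints[-1]] += revenue
    elif rule == "fractional":  # even split across all touches
        share = revenue / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

journey = ["display", "email", "search"]
print(allocate_credit(journey, 300.0, rule="first"))       # {'display': 300.0}
print(allocate_credit(journey, 300.0, rule="fractional"))  # 100.0 per channel
```

The appeal and the weakness are both visible here: the allocation rule is fixed in advance, so no amount of new data ever changes the weights.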

Marketing measurement is one of the primary uses of Customer Data Platforms. Dropping attribution from the list of CDP screening questions shouldn't be interpreted to suggest it’s unimportant. It just means that measurement is too complicated to embed in a simple screening question. As with other important CDP features, buyers who want their CDP to support marketing measurement will need to define their specific needs in detail and then closely examine individual CDP vendors to see who can meet them.

Monday, November 06, 2017

TrenDemon and Adinton Offer Attribution Options

I wrote a couple weeks ago about the importance of attribution as a guide for artificial intelligence-driven marketing. One implication was that I should pay more attention to attribution systems. Here’s a quick look at two products that tackle different parts of the attribution problem: content measurement and advertising measurement.

TrenDemon

Let’s start with TrenDemon. Its specialty is measuring the impact of marketing content on long B2B sales cycles. It does this by placing a tag on client Web sites to identify visitors and track the content they consume, and then connecting client CRM systems to find which visitor companies ultimately made a purchase (or reached some other user-specified goal). Visitors are identified by company using their IP address and as individuals by tracking cookies.

TrenDemon does a bit more than correlate content consumption and final outcomes. It identifies when each piece of content is consumed, distinguishing between the start, middle, and end of the buying journey. It also looks at other content metrics such as how many people read an item, how much time they spend with it, and how many read something else after they’re done. These and other inputs are combined to generate an attribution score for each item. The system uses the score to identify the most effective items for each journey stage and to recommend which items should be presented in the future.

Pricing for TrenDemon starts at $800 per month. The system was launched in early 2015 and is currently used by just over 100 companies.

Adinton

Next we have Adinton, a Barcelona-based firm that specializes in attribution for paid search and social ads. Adinton has more than 55 clients throughout Europe, mostly selling travel and insurance online. Such purchases often involve multiple Web site visits but still have a shorter buying cycle than complex B2B transactions.

Adinton has pixels to capture Web ad impressions as well as Web site visits. Like TrenDemon, it tracks site visitors over time and distinguishes between starting, middle, and finishing clicks. It also distinguishes between attributed and assisted conversions. When possible, it builds a unified picture of each visitor across devices and channels.

The system uses this data to calculate the cost of different click types, which it combines to create a “true” cost per action for each ad purchase. It compares this with the client’s target cost per action to determine where they are over- or under-investing.
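The arithmetic behind that comparison can be sketched simply. The spend figures, conversion count, and function name below are invented for illustration; Adinton's actual calculation is presumably more involved, since it weights the different click types it tracks:

```python
# Simplified sketch of a "true" cost-per-action check: blend spend
# across click types into one CPA, then compare it against a target
# to flag over- or under-investment. All figures are hypothetical.

def true_cpa(spend_by_click_type, conversions):
    """Blend spend across click types into one cost per action."""
    total_spend = sum(spend_by_click_type.values())
    return total_spend / conversions

spend = {"starting": 4000.0, "middle": 2500.0, "finishing": 1500.0}
cpa = true_cpa(spend, conversions=200)  # 8000 / 200 = 40.0
target = 50.0
print("over-investing" if cpa > target else "room to invest more")
```

In this toy case the blended CPA (40.0) sits below the target (50.0), so the signal would be that there is room to spend more on this ad purchase.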

Adinton has API connections to gather data from Google AdWords, Facebook Ads, Bing Ads, AdRoll, RocketFuel, and other advertising channels. An autobidding system can currently adjust bids in AdWords and will add Facebook and Bing adjustments in the near future. The system also does keyword research and click fraud identification. Pricing is based on the number of clicks and starts as low as $299 per month for attribution analysis, with additional fees for autobidding and click fraud modules. Adinton was founded in 2013 and launched its first product in 2014, although attribution came later.

Further Thoughts

These two products are chosen almost at random, so I wouldn’t assign any global significance to their features. But it’s still intriguing that both add a first/middle/last buying stage to the analysis. It’s also interesting that they occupy a middle ground between totally arbitrary attribution methodologies, such as first touch/last touch/fractional credit, and advanced algorithmic methods that attempt to calculate the true incremental impact of each touch. (Note that neither TrenDemon nor Adinton’s summary metric is presented as estimating incremental value.)

 Of course, without true incremental value, neither system can claim to develop an optimal spending allocation. One interpretation might be that few marketers are ready for a full-blown algorithmic approach but many are open to something more than the clearly-arbitrary methods. So perhaps systems like TrenDemon and Adinton offer a transitional stage for marketers (and marketing AI systems) that will eventually move to a more advanced approach.

An alternative view is that algorithmic methods will never be reliable enough to be widely accepted. In that case, these intermediate systems are about as far as most marketers ever will, or should, go toward measuring marketing program impact. Time will tell.

Monday, October 16, 2017

Wizaly Offers a New Option for Algorithmic Attribution

Wizaly is a relatively new entrant in the field of algorithmic revenue attribution – a function that will be essential for guiding artificial-intelligence-driven marketing of the future. Let’s take a look at what they do.

First a bit of background: Wizaly is a spin-off of Paris-based performance marketing agency ESV Digital (formerly eSearchVision). The agency’s performance-based perspective meant it needed to optimize spend across the entire customer journey, not simply use first- or last-click attribution approaches which ignore intermediate steps on the path to purchase. Wizaly grew out of this need.

Wizaly’s basic approach to attribution is to assemble a history of all messages seen by each customer, classify customers based on the channels they saw, compare results of customers whose experience differs by just one channel, and attribute any difference in results to that channel. For example, one group of customers might have seen messages in paid search, organic search, and social; another might have seen messages in those channels plus display retargeting. Any difference in performance would be attributed to display retargeting.

This is a simplified description; Wizaly is also aware of other attributes such as the profiles of different customers, traffic sources, Web site engagement, location, browser type, etc. It apparently factors some or all of these into its analysis to ensure it is comparing performance of otherwise-similar customers. It definitely lets users analyze results based on these variables so they can form their own judgements.

Wizaly gets its data primarily from pixels it places on ads and Web pages. These drop cookies to track customers over time and can track ads that are seen, even if they’re not clicked, as well as detailed Web site behaviors. The system can incorporate television through an integration with Realytics, which correlates Web traffic with when TV ads are shown. It can import ad costs and ingest offline purchases to use in measuring results. The system can stitch together customer identities using known identifiers. It can also do some probabilistic matching based on behaviors and connection data and will supplement this with data from third-party cross device matching specialists.

Reports include detailed traffic analysis, based on the various attributes the system collects; estimates of the importance and effectiveness of each channel; and recommended media allocations to maximize the value from ad spending.  The system doesn't analyze the impact of message or channel sequence, compare the effectiveness of different messages, or estimate the impact of messages on long-term customer outcomes. It also has a partial blind spot for mobile – a major concern, given how important mobile has become – and other gaps for offline channels and results. These are problems for most algorithmic attribution products, not just Wizaly.

One definite advantage of Wizaly is price: at $5,000 to $15,000 per month, it is generally cheaper than better-known competitors. Pricing is based on traffic monitored and data stored. The company was spun off from ESV Digital in 2016 and currently has close to 50 clients worldwide.

Saturday, October 07, 2017

Attribution Will Be Critical for AI-Based Marketing Success


I gave my presentation on Self-Driving Marketing Campaigns at the MarTech conference last week. Most of the content followed the arguments I made here a couple of weeks ago, about the challenges of coordinating multiple specialist AI systems. But prepping for the conference led me to refine my thoughts, so there are a couple of points I think are worth revisiting.

The first is the distinction between replacing human specialists with AI specialists, and replacing human managers with AI managers. Visually, the first progression looks like this as AI gradually takes over specialized tasks in the marketing department:



The insight here is that while each machine presumably does its job much better than the human it replaces,* the output of the team as a whole can’t fundamentally change because of the bottleneck created by the human manager overseeing the process. That is, work is still organized into campaigns that deal with customer segments because the human manager needs to think in those terms. It’s true that the segments will keep getting smaller, the content within each segment more personalized, and more tests will yield faster learning. But the human manager can only make a relatively small number of decisions about what the robots should do, and that puts severe limits on how complicated the marketing process can become.

The really big change happens when that human manager herself is replaced by a robot:



Now, the manager can also deal with more-or-less infinite complexity. This means we no longer need campaigns and segments and can truly orchestrate treatments for each customer as an individual. In theory, the robot manager could order her robot assistants to create custom messages and offers in each situation, based on the current context and past behaviors of the individual human involved. In essence, each customer has a personal robot following her around, figuring out what’s best for her alone, and then calling on the other robots to make it happen. Whether that's a paradise or nightmare is beyond the scope of this discussion.

In my post a few weeks ago, I was very skeptical that manager robots would be able to coordinate the specialist systems any time soon.  That now strikes me as less of a barrier.  Among other reasons, I’ve seen vendors including Jivox and RevJet introduce systems that integrate large portions of the content creation and delivery workflows, potentially or actually coordinating the efforts of multiple AI agents within the process. I also had an interesting chat with the folks at Albert.ai, who have addressed some of the knottier problems about coordinating the entire campaign process. These vendors are still working with campaigns, not individual-level journey orchestration. But they are definitely showing progress.

As I've become less concerned about the challenges of robot communication, I've grown more concerned about robots making the right decisions.  In other words, the manager robot needs a way to choose what the specialist robots will work on so they are doing the most productive tasks. The choices must be based on estimating the value of different options.  Creating such estimates is the job of revenue attribution.  So it turns out that accurate attribution is a critical requirement for AI-based orchestration.

That’s an important insight.  All marketers acknowledge that attribution is important but most have focused their attention on other tasks in recent years.  Even vendors that do attribution often limit themselves to assigning user-selected fractions of value to different channels or touches, replacing the obviously-incorrect first- and last-touch models with less-obviously-but-still-incorrect models such as “U-shaped”, “W-shaped”,  and “time decay”.  All these approaches are based on assumptions, not actual data.  This means they don’t adjust the weights assigned to different marketing messages based on experience. That means the AI can’t use them to improve its choices over time.

There are a handful of attribution vendors who do use data-driven approaches, usually referred to as “algorithmic”. These include VisualIQ (just bought by Nielsen), MarketShare Partners (owned by Neustar since 2015), Convertro (bought in 2014 by AOL, now Verizon), Adometry (bought in 2014 by Google and now part of Google Analytics), Conversion Logic, C3 Metrics, and (a relatively new entrant) Wizaly. Each has its own techniques but the general approach is to compare results for buyers who take similar paths, and attribute differences in results to the differences between their paths. For example: one group of customers might have interacted in three channels and another interacted in the same three channels plus a fourth. Any difference in results would be attributed to the fourth channel.
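The example in that last sentence reduces to a toy computation. The channel sets and conversion rates below are invented, and real algorithmic products control for many more variables, but the core comparison looks like this:

```python
# Toy sketch of path-based incremental attribution: group buyers by the
# set of channels they saw, then compare conversion rates between groups
# that differ by exactly one channel. All data is invented.
from itertools import combinations

# (channel set, conversion rate) per observed group -- hypothetical numbers
groups = {
    frozenset({"search", "email", "social"}): 0.040,
    frozenset({"search", "email", "social", "display"}): 0.055,
}

for a, b in combinations(groups, 2):
    small, big = (a, b) if len(a) < len(b) else (b, a)
    if small < big and len(big - small) == 1:  # differ by exactly one channel
        (extra_channel,) = big - small
        lift = groups[big] - groups[small]
        print(f"incremental lift of {extra_channel}: {lift:.3f}")
# prints: incremental lift of display: 0.015
```

The sketch also makes the weakness discussed next easy to see: nothing here establishes that the groups were otherwise comparable, so the "lift" may simply reflect pre-existing differences between the customers in each group.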

Truth be told, I don’t love this approach.  The different paths could themselves be the result of differences between customers, which means exposure to a particular path isn’t necessarily the reason for different results. (For example, if good buyers naturally visit your Web site while poor prospects do not, then the Web site isn’t really “causing” people to buy more.  This means driving more people to the Web site won’t improve results because the new visitors are poor prospects.)

Moreover, this type of attribution applies primarily to near-term events such as purchases or some other easily measured conversion.  Guiding lifetime journey orchestration requires something more subtle.  This will almost surely be based on a simulation model or state-based framework describing influences on buyer behavior over time. 

But whatever the weaknesses of current algorithmic attribution methods, they are at least based on actual behaviors and can be improved over time.  And even if they're not dead-on accurate, they should be directionally  correct. That’s good enough to give the AI manager something to work with as it tells the specialist AIs what to do next.  Indeed, an AI manager that's orchestrating contacts for each individual will have many opportunities to conduct rigorous attribution experiments, potentially improving attribution accuracy by a huge factor.

And that's exactly the point.  AI managers will rely on attribution to measure the success of their efforts and thus to drive future decisions.  This changes attribution from an esoteric specialty to a core enabling technology for AI-driven marketing.  Given the current state of attribution, there's an urgent need for marketers to pay more attention and for vendors to improve their techniques. So if you haven’t given attribution much thought recently, it’s a good time to start.

__________________________________________________________________________
* or augments, if you want to be optimistic.

Tuesday, March 29, 2016

Hive9 Marketing Performance Management Includes Customer Journey Optimization

As I mentioned in last week's post on the MarTech Conference, there appears to be an emerging class of vendors doing what might be called “journey management” – although I think I’ll rename that “journey orchestration” since (a) orchestration is a trendier term right now and (b) orchestration more accurately reflects the key notion of a system that coordinates other systems.*  This coordination includes both gathering data from multiple sources and sending messages through other systems. Sending messages distinguishes journey orchestration engines from “pure” Customer Data Platforms, which assemble data but don’t make decisions about customer treatments. Some not-so-pure CDPs do combine the data assembly and decisioning, but they don’t use a system-assembled customer journey as the framework for message selection.

Whoa.  What the heck does that last sentence mean?  Let me unpack it a bit:

- By “system-assembled customer journey” I mean the systems automatically derive a customer journey from the customer data they’ve assembled. That’s quite different from pre-defining an ideal customer journey and trying to force customers to follow it. It’s even more different from taking conventional multi-step campaigns and calling them "journeys”.  A true "system-assembled journey" would be built by examining the sequence of events for each customer and finding the most common paths to purchase. This still isn't a purely objective process because some human or machine judgement is still needed to exclude irrelevant details, assign interactions to journey stages, and select the most important sequences.  But it's much more data-driven than starting with a marketer-created design.

- By “journey as the framework for message selection” I mean that marketing messages or campaigns are triggered when customers reach a particular journey step. Again, this is different from defining selection rules separately for each campaign, which is how conventional marketing automation and real-time interaction systems work. It’s also different from systems that draw journey maps but don’t connect them with campaigns for execution. Attaching all campaigns to a single journey map simplifies creation of selection rules and provides greater visibility into relationships among campaigns. In other words, journey orchestration makes it easier to coordinate customer treatments across multiple campaigns, which is one of the key problems with conventional marketing automation and interaction management approaches.**
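The "system-assembled journey" idea in the first bullet – examining each customer's event sequence and surfacing the most common paths to purchase – can be sketched as a simple tally. The event names and data here are invented for illustration:

```python
# Sketch of system-assembled journey discovery: tally each customer's
# ordered sequence of events and surface the most common paths that
# end in a purchase. Event names and data are hypothetical.
from collections import Counter

customer_events = [
    ["blog", "webinar", "demo", "purchase"],
    ["blog", "demo", "purchase"],
    ["blog", "webinar", "demo", "purchase"],
    ["ad", "blog", "webinar"],  # no purchase: excluded below
]

paths = Counter(
    tuple(events) for events in customer_events if events[-1] == "purchase"
)
for path, count in paths.most_common(2):
    print(" -> ".join(path), f"(x{count})")
```

A production version would first collapse raw interactions into journey stages and prune rare or irrelevant sequences – the human or machine judgement calls mentioned above – but the counting core is this simple.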

Ok, let’s assume you’re now convinced that “journey orchestration engine” has a specific meaning that describes something useful. Your next question, presumably, is where can I buy one? (Oh, you’re not that easy to sell? Listen closely: It’s new. It’s bright. It’s shiny. New. Bright. Shiny. Newbrightshiny. Now are you ready to buy? I thought so.) My blog post listed three vendors from the MarTech show: Pointillist, Usermind, and Thunderhead. I promise I’ll review those soon. But I had already spoken with another relevant vendor before the show, Hive9. So let’s start with them.

If you look at Hive9’s Web site, you may wonder whether I’ve sent you to the right place.  They position themselves as “marketing performance management” with no mention of anything resembling journey orchestration. That’s because Hive9 actually has three connected modules: one for marketing planning, one for marketing measurement, and one for optimization (which is what I’m calling journey orchestration). These were all developed within B2B marketing agency Bulldog Solutions, which spun off Hive9 about a year ago.

The planning module was the original product. It lets marketers set up a hierarchy with plans at the top, going down to programs, campaigns, and tactics. Tactics have owners, budgets, start and end dates, revenue targets, types (usually a channel or asset) and other attributes such as journey stage, audience, business unit, geography, and language. These can be tailored to each client. Tactics can be tied to Workfront for project management, filtered on pretty much any attribute, and displayed on a Gantt chart-style calendar. Integration with Oracle Eloqua and Salesforce.com lets a new tactic automatically create a corresponding campaign in either system.  Each plan can have a marketing funnel with its own set of stages and targets for conversion rates, velocity, and deal size.

The measurement module reads information from plans and imports revenue, accounts, opportunities, and contacts from CRM, marketing automation, and other systems. Standard integrations are available for Salesforce.com, Oracle Eloqua, Marketo, Google Analytics, Adobe Marketing Analytics, and other systems. Other sources can be integrated through API connections or flat file imports. The system relies primarily on customer identifiers provided by source systems although it can stitch together identities when different systems share some IDs.

Once the data is loaded, the measurement module provides dashboards and other reports to show marketing results including revenue impact; counts, conversion rates and velocity by funnel stage; and whatever other data the client has integrated, such as social sentiment or customer satisfaction. Revenue impact can be measured with first-touch, last-touch, evenly-weighted, position-based, and several other algorithms.  The vendor plans to add statistically inferred weights in April. Results can be filtered by plan, audience, tactic type, assets, or other attributes; compared across time periods; and examined for trends. Dashboards are customized by the vendor for each client, although Hive9 plans to add self-service capabilities in the future.

The optimization module is where journey orchestration happens. Journey stages are defined within the optimization module.  Tactics can be tagged directly with stages, or stages can be assigned to assets which are themselves assigned to tactics.  Events or assets managed in other systems can also be tagged with a journey stage and channel.  However the connection is made, campaign responses are tagged with channel and journey stage and then assembled into a journey map. The map shows the number of interactions by channel and stage and highlights the most common path taken by buyers. This isn’t fully automated journey mapping because the stages are preassigned by the marketer. But the system does discover the most popular paths and most effective marketing assets on its own. So that’s pretty close.

Even more important, the optimization module can contain rules that trigger external marketing campaigns when a customer enters a given journey stage. This is what really qualifies Hive9 as a journey optimization engine. Here's how it works: users can set up a rule tied to journey stage, product type, or customer attribute such as industry or persona. Customers who qualify for a rule can be sent to a specified marketing automation campaign. Rules can avoid repeating messages to the same person and can select a “next best message” for the external system to deliver. This definitely qualifies as journey orchestration.

Hive9 pricing starts around $25,000 per year for mid-market clients.  Modules are priced separately. Fees are based on the size of the company marketing budget for the planning module, on the number of records, dashboards, and data sources for the measurement module, and on the number of touchpoints for the optimization module.

________________________________________________________________________________
* Also, this lets me call systems that do this “journey optimization engines”, giving a three letter acronym of JOE, which is so darn cute.

** You may notice that “journey optimization” sounds a lot like what I’ve previously called “state-based marketing”. Both do select marketing treatments based on a customer’s location within a state/stage framework. If I had to draw distinction, I’d say that journeys suggest forward progression from one stage to the next, while movement among states is not necessarily linear. Similarly, journey orchestration engines send marketing messages through other systems, while state-based systems could use internal or external delivery functions. In other words, journey orchestration is a special type of state-based system.

Thursday, February 18, 2016

Future of Marketing Content: Reflections on the Content2Conversion Conference

I spent the early part of this week at Demand Gen Report's Content2Conversion conference. The event was superbly run, as usual, but I didn't sense any over-arching pattern until I was literally on my way out the door and stopped for one last chat with some colleagues.  Then I knitted together – at least to my own satisfaction – what had seemed to be disconnected observations.

The first strand was the number of systems that offer detailed information about content consumption. Vendors including Highspot, SnapApp, Ceros, Uberflip, and ion interactive all let marketers track customer behaviors within a piece of content – such as how much time is spent on each page or even regions within a page. On reflection, it struck me as amazing that we have this level of detail available, given that just a few years ago marketers couldn’t even tell whether a given piece of content had been looked at. The uses for this information are obvious, including helping marketers to understand which topics are most appealing and giving salespeople insight into the interests of individual prospects. But I wonder how many marketers or content creators are ready to take advantage of this information. Of course, it’s clear that they should. But I suspect most are already overwhelmed by the less precise information available through less advanced technologies. This leaves them with little appetite for still greater detail.

Naturally, my own preferred solution to this technology-created flood of data is still more technology. Some of this involves advanced analytics to extract the significant needles of information from the hayfields of detail, although I don’t recall seeing vendors who do that type of analysis at the show or hearing speakers discuss them. But the more interesting response is to automate content creation and selection directly, using the detailed information to create new content and to send the most appropriate content to each individual. Again, there weren’t many solutions at the show that promised to do this, apart from Captora – which extracts keywords from a company’s Web site and its competitors’ sites, constructs draft landing pages for the most important topics, and deploys them (after some manual polishing) with links to CRM or marketing automation data capture forms. Captora is focused on paid and organic search marketing, so it can’t pick which ads to display to which prospects. But I also chatted with people from Adaptive Campaigns (which did not exhibit), whose system uses rules to generate highly customized programmatic display ads. And, on the way to the airport, I caught up with Idio, another system that automatically analyzes content and picks the best match for each individual – although Idio doesn’t do any content creation or dynamic customization.

As you know from the Machine Intelligence in Marketing Landscape in my last post, I’ve also identified several other systems that use automated methods to generate and select content. I’ll even predict that machine-generated content will be a major trend in the near future – precisely because it’s the only practical way for marketers to take full advantage of the detailed information now available on content consumption.

This connects to another theme that I did actually hear articulated at the conference: the need to move beyond “quality” content to appropriate content. That’s an interesting evolution, since recent discussions have often focused on the challenge marketers face in just getting the volume of content they need for increasingly segmented programs. That requirement hasn’t ended, but I heard more discussion of how to create the right content mix and how to create content that is compelling enough to attract attention. To some extent, this argues against the notion of machine-generated content, which will probably never be better than mediocre and formulaic. But I can easily imagine a world where humans create a few great pieces of tentpole content and use a lot of simple, machine-created messages to feed people to it.  The machine-based messages won't be brilliant but they'll be effective because they're highly tailored to their targets. This tailoring will be enabled by behavioral and intent data, which were also popular topics at the conference.

I also have one other observation, which was totally unexpected (the best type!).  It might be just my imagination, but I think I sensed a bit of overconfidence among marketers about their ability to buy new technology. This is certainly surprising, given that until recently marketers have been more frightened of technology than anything else. I’ll speculate that a new generation of marketers is more comfortable with technology in general and is now reaching positions with control over purchasing decisions. Mostly that's great: the industry can’t advance if marketers are afraid to try new things. But some of these buyers may not realize that they are unfamiliar with the full scope of products available or that deploying complex technology is much harder than signing up for a new software-as-a-service application. Let me be clear that this concern is based on one conversation I had and one comment that a friend overheard, so I might be overreacting. Still, it’s something to guard against; overconfidence can lead to cavalier decisions that are just as harmful as indecision based on fear.

Sunday, February 07, 2016

Marketing attribution systems: a quick look at the options

I’ve seen a lot of attribution vendors recently. If you're a regular reader here, you saw my reviews of Claritix (last week) and BrightFunnel (in December).  Last week I caught up with Jeff Winsper of Black Ink, which I hope to review before too long.  Bizible also popped up recently, although I don’t recall the occasion; possibly something related to their interesting survey on “pipeline marketing” and attribution methods.

My rational brain knows that there’s probably no reason for this flurry of sightings beyond pure coincidence. But it’s human to see patterns where they don’t exist, so I did find myself wondering if attribution is becoming a hot topic. I can easily come up with a good story to explain it: marketing technology has reached a new maturity stage where the data needed for good attribution is now readily available, the cost of processing that data has fallen far enough to make it practical, and the need has reached a tipping point as the complexity of marketing has grown. So, clearly, 2016 will be The Year of Attribution (as Anna Bager and Joe Laszlo of the Internet Advertising Bureau have already suggested).

Or not. Sometimes random is just random. But now that this is on my mind, I've taken a look at the larger attribution landscape.  Quick searches for "attribution" on G2 Crowd and TrustRadius turned up lists of 29 and 17 vendors, respectively – neither including BrightFunnel nor Claritix, incidentally.  A closer look found that 13 appeared on both sites, that each site listed several relevant vendors that the other missed, and that both sites listed multiple vendors that were not really relevant. For what it's worth, eight of the 13 vendors listed on both sites were bona fide attribution systems -- which I loosely define to mean they assign fractions of revenue to different marketing campaigns.  I wouldn't draw any grand conclusions from the differences in coverage on G2 Crowd and TrustRadius, except to offer the obvious advice to check both (and probably some of the other review sites or vendor landscapes) to assemble a reasonably complete set of options.

I've presented the vendors listed in the two review sites below, grouping them based on which site included them and whether I qualified them as relevant to a quest for an attribution vendor.  I've also added a few notes based on the closer look I took at each system in order to classify it.  The main questions I asked were:
  • Does the system capture individual-level data, not just results by channel or campaign?  You need the individual data to know who saw which messages and who ended up making a purchase.  Those are the raw inputs needed for any attempt at estimating the impact of individual messages on the final result.  
  • Does the system capture offline as well as online messages?  You need both to understand all influences on results.  This question disqualified a few vendors that look only at online interactions.  In practice, most vendors can incorporate whatever data you provide them, so if you have offline data, they can use it.  TV is a special case because marketers don't usually know whether a specific individual saw a particular TV message, so TV is incorporated into attribution models using more general correlations.
  • How does the vendor do the attribution calculations?  Nearly all the vendors use what I've labeled an "algorithmic" approach, meaning they perform some sort of statistical analysis to estimate the attributed values.  The main alternative is a "fractional" method that applies user-assigned weights, typically based on position in the buying sequence and/or the channel that delivered the message.  The algorithmic approach is certainly preferred by most marketers, since it is based on actual data rather than marketers' (often inaccurate) assumptions.  But algorithmic methods need a lot of data, so B2B marketers often use fractional methods as a more practical alternative.  It's no accident that the only B2B specialist listed here, Bizible, is also the only listed company that uses a fractional method; B2B specialists BrightFunnel and Claritix, which appear on neither site, use fractional methods as well.  It's also important to note that the technical details of the algorithmic methods differ greatly from vendor to vendor, and of course each vendor is convinced that their method is by far the best approach.
  • Does the vendor provide marketing mix models?  These resemble attribution except they work at the channel level and are not based on individual data.  Classic marketing mix models instead look at promotion expense by channel by market (usually a geographic region, sometimes a demographic or other segment) and find correlations over time between spending levels and sales.  Although mix models and algorithmic attribution use different techniques and data, several vendors do both and have connected them in some fashion.
  • Does the vendor create optimal media plans? I'm defining these broadly to include any type of recommendation that uses the attribution model to suggest how users should reallocate their marketing spend at the channel or campaign level.  Systems may do this at different levels of detail, with different levels of sophistication in the optimization, and with different degrees of integration to media buying systems. 
Of course, there are plenty of other points that differentiate these systems.  But this list should be a useful starting point if you're considering a new attribution system -- as well as a reminder of the need to define your requirements and drill into the details before you make a final selection.
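To make the fractional method described above concrete, here is a minimal sketch in Python. The 40/20/40 "U-shaped" weighting and the touchpoint names are illustrative assumptions, not any particular vendor's scheme:

```python
# Minimal sketch of fractional (position-based) attribution.
# The 40/20/40 "U-shaped" weights and touchpoint names are
# illustrative assumptions, not a specific vendor's method.

def fractional_attribution(touches, revenue):
    """Split revenue across an ordered list of campaign touches:
    first touch 40%, last touch 40%, middle touches share 20%."""
    n = len(touches)
    if n == 0:
        return {}
    if n == 1:
        weights = [1.0]
    elif n == 2:
        weights = [0.5, 0.5]
    else:
        middle = 0.2 / (n - 2)
        weights = [0.4] + [middle] * (n - 2) + [0.4]
    credit = {}
    for touch, w in zip(touches, weights):
        credit[touch] = credit.get(touch, 0.0) + w * revenue
    return credit

print(fractional_attribution(
    ["display_ad", "email", "webinar", "search_ad"], 1000.0))
# first and last touches each get $400; the two middle touches $100 each
```

An algorithmic system would replace the fixed weights with estimates derived from statistical comparison of converting and non-converting paths, which is why it needs so much more data.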

Attribution Systems

G2 Crowd and TrustRadius
  • Abakus: individual data; online and offline; algorithmic; optimal media plans
  • Bizible: individual data; online and offline; fractional; merges marketing automation plus CRM data; B2B
  • C3 Metrics: individual data; online and TV; algorithmic; optimal media plans 
  • Conversion Logic: individual data; online and TV; algorithmic; optimal media plans
  • Convertro: individual data; online and offline; algorithmic; mix model; optimal media plans; owned by AOL
  • MarketShare DecisionCloud: individual data; online and offline; algorithmic; mix models; optimal media plans; owned by Neustar
  • Rakuten Attribution: individual data; online only; algorithmic; optimal media plans; formerly DC Storm, acquired by Rakuten marketing services agency in 2014
  • Visual IQ: individual data; online and offline; algorithmic; optimal media plans
G2 Crowd only
  • BlackInk: individual data; online and offline; algorithmic; provides customer, marketing & sales analytics 
  • Kvantum Inc.: individual data; online and offline; algorithmic; mix models; optimal media plans
  • Marketing Evolution:  individual data; online and offline; algorithmic; mix model; optimal media plans
  • OptimaHub MediaAttribution: individual data; online and offline; attribution method not clear; data analytics agency with tag management, data collection, and analytics solutions
TrustRadius only
  • Adometry: individual data; online and offline; algorithmic; mix models; optimal media plans; owned by Google
  • ThinkVine: individual data; online and offline; algorithmic; mix models; optimal media plans; uses agent-based and other models
  • Optimine: individual data; online and offline; algorithmic; optimal media plans
Other Systems

G2 Crowd and TrustRadius

G2 Crowd only
  • Adinton: Adwords bid optimization and attribution; uses Google Analytics for fractional attribution
  • Blueshift Labs: real-time segmentation and content recommendations; individual data but apparently no attribution
  • IBM Digital Analytics Impression Attribution: individual data; online only; shows influence (not clear whether it offers fractional or algorithmic attribution); based on Coremetrics
  • LIVE: for clients of WPP group; does algorithmic attribution and optimization
  • Marchex: tracks inbound phone calls
  • Pathmatics: digital ad intelligence; apparently no attribution
  • Sizmek: online ad management; provides attribution through alliance with Abakus
  • Sparkfly: retail specialist; individual data; focus on connecting digital and POS data; campaign-level attribution but apparently not fractional or algorithmic
  • Sylvan: financial services software; no marketing attribution
  • TagCommander: tag management system; real-time marketing hub with individual profiles and cross-channel data; custom fractional attribution formulas
  • TradeTracker: affiliate marketing network
  • Zeta Interactive ZX: digital marketing agency offering DMP, database, engagement and related attribution; mix of tech and services

    Friday, April 04, 2014

    Bottlenose Offers Real-Time Trend Intelligence For Social Media and Beyond

    I had an interesting briefing a few weeks ago from Bottlenose, which sells what it calls a real-time “trend intelligence” system. The general idea is almost boringly straightforward: monitor events as they occur and pick out new and interesting information. But the technology to make this happen is mind-bogglingly complex, since it includes real-time ingestion of diverse data types, several levels of natural language processing, and sophisticated trend detection.


    To give some idea of the scale involved, the company said that simply monitoring “Beyoncé” across social and broadcast media creates 220 billion (with a “b”) data points relating to 2 billion time series tracking more than 100 metrics on 2.5 million entities. The company currently stores trillions (with a "t") of observations for its dozen or so clients, all big enterprises and agencies including Pepsico, General Motors, Microsoft, Digitas and Razorfish.

    Bottlenose keeps up with the world pretty much the same way that you and I do: it scans news, social media, and other sources for information, extracts what’s relevant to our needs, and identifies new information or trends that might require some action. But while we humans can only process a tiny amount of information at each stage, Bottlenose works on another level entirely.

    • Data sources. The system ingests huge swaths of social media, virtually every TV and radio broadcast in the U.S., U.K., and Canada (via automated speech-to-text conversion), Nielsen ratings and audience demographics, and stock market data. It can also accept other market and industry data, as well as a company’s own Web analytics, customer purchases and service interactions. This is all processed in real time, using technologies that handle thousands of messages per second per processor. The system can accept any data format from structured transactions to unstructured text.

    • Interpretation. Specialized natural language processing extracts entities such as people and topics, identifies concepts and links, and assesses sentiment. This happens without predefined taxonomies or linguistics, although the system does work with nearly 100 rule-based, expansible classes of messages. It appends metadata to entities by matching them with other data sets, such as audience demographics for a TV broadcast. The system maintains profiles on 350 million individuals world-wide, including demographics, cumulative sentiment, language, geography, and social media influence.


    • Trend identification. Bottlenose builds a time line tracking more than 150 metrics per entity, such as cumulative sentiment, audience size, influence scores, and demographics. Software agents constantly scan this data for trends, which could include connections, correlations, overlaps, clusters, or other relationships. When the system finds emerging trends that appear to be more than statistical noise, it highlights them in reports.

    • Actions. Automated alerts for new trends are high on the Bottlenose agenda, but hadn’t been released when we spoke in March. What users get is a variety of interfaces that let them select a given topic, see relationships and what’s trending, and dig into as much detail as they want – all using real-time data. Key capabilities include seeing connections among topics, seeing the volume of messages and trends in sentiment, analyzing audiences demographics, and comparing statistics for two entities such as competing brands or media channels. Practical applications include identifying the most important influencers on a given topic, finding the most effective hash tags for social media, assessing advertising impact, and buying more effective media.
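Bottlenose's actual trend-detection agents are proprietary and far more sophisticated, but the core idea of separating emerging trends from statistical noise can be illustrated with a toy rolling z-score test (all data below is invented):

```python
# Toy sketch of noise-vs-trend detection on a metric time series,
# using a trailing window and a z-score threshold. This only
# illustrates the general idea, not Bottlenose's actual agents.
import statistics

def flag_spikes(series, window=5, threshold=3.0):
    """Return indexes where a value deviates from the trailing
    `window` observations by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        trailing = series[i - window:i]
        mean = statistics.mean(trailing)
        stdev = statistics.stdev(trailing)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# hourly mention counts for a hypothetical brand
mentions = [100, 104, 98, 101, 99, 102, 100, 450, 103, 97]
print(flag_spikes(mentions))  # the burst at index 7 is flagged
```

Running a test like this over 150 metrics per entity, across millions of entities, gives a sense of why the underlying processing stack matters so much.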

    The underlying technology for all this involves a variety of tools, some proprietary.  The company calls its core processing stack "StreamSense" and says it uses several open source technologies including the Cassandra distributed database and the Elasticsearch real-time search and analytics engine.  Although StreamSense is a platform that could be used for many purposes, the company so far has only offered it in conjunction with the Bottlenose trend intelligence application.

    Of course, this platform potential is one reason I find Bottlenose so intriguing.  (The other is, it's just plain cool to work with so many kinds of data in such volume so quickly.)  Bottlenose is certainly not offering a Customer Data Platform, since its system is an application, not a central database available to external applications.  I'm not even sure that StreamSense meets the operational requirements for a CDP database, which have less to do with real-time analytics than easy access and flexibility.  But I do know that CDPs deal with higher data volumes and more varied structures than conventional databases can support, so I'm keeping an eye out for alternative technologies that might be better solutions.  Bottlenose might just have one.

    Bottlenose was founded in 2010 and launched its original social media dashboard in 2011. It has expanded beyond the social listening category with its enterprise product, which adds TV and radio to social activity. Pricing starts at $200,000 to $500,000 per year, although some deals are larger.

    Thursday, January 31, 2013

    MMA Modernizes Marketing Mix Models

    I’ve been spending a lot of time recently looking at marketing measurement systems. This means that you, Dear Reader, will be spending a lot of time reading about them. A good place to start is Marketing Management Analytics, known to its friends as MMA.

    MMA was founded in 1989 and is one of the pioneers in marketing mix modeling. Mix models remain the heart of the company’s business. But while traditional mix models look at direct correlations between advertising and sales, MMA’s current approach takes a more layered view. This includes what the company calls “multistage” attribution, which looks at intermediate touchpoints between an advertisement and the final purchase, and “customer cascade analysis”, which measures the long-term impact of advertisements on brand equity. The company has also beefed up its consulting services to help make its findings more actionable.

    MMA’s foray into attribution is intriguing, since it puts the company into some degree of competition with attribution specialists like VisualIQ, Adometry, and ClearSaleing. But MMA works with aggregate data such as total spend and impressions, with a major emphasis on mass media like television. Those other vendors work primarily with data about individual buyers, which comes largely from digital and direct media. MMA's clients are traditional mass media advertisers, in consumer packaged goods, automotive, financial services, retail, pharmaceuticals, and communications, and it is working for CMOs who are allocating budgets across channels. The other vendors' clients are concentrated in ecommerce and they are answering more tactical questions about spending within the digital channels. What they all share is the goal of measuring the incremental impact of expenditures in specific media.


    MMA recently released the latest version of its flagship software, Avista.  The system is still focused on traditional marketing mix models, although it can incorporate the "multistage" approach of measuring the impact of one channel on another.  The new release, Avista 8, was designed to make it easier for marketers and media planners to work directly with the system, rather than relying on technical experts. The main interface displays curves that represent the relationship between spending on each tactic and final sales. Marketers use sliders to adjust the spending levels and the system then estimates the sales that would result.

    Avista can also run optimization routines to automatically find the most effective spending mix. Users can limit how much spending on any one tactic can increase or decrease, can create groups of tactics that draw from a shared budget, and can choose the target of the optimization (maximum profit with a given budget, minimum spend to reach a target revenue level, etc.). Outputs can show details by brand, product, region, sales channel and time period. Users can save scenarios and compare them to each other. Once they’ve chosen a scenario, Avista can convert it to a high-level media plan for buyers to execute.
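The mechanics behind this kind of interface can be sketched simply. The code below is not MMA's model: the saturating curve shape, tactic names, and parameters are all invented. It shows a diminishing-returns response curve per tactic and a greedy loop that allocates budget to whichever tactic offers the biggest marginal sales lift:

```python
# Illustrative sketch of mix-model what-if and optimization
# mechanics. Curve shapes and parameters are invented, not MMA's.
import math

# each tactic's sales response: max_sales * (1 - e^(-spend/scale))
CURVES = {
    "tv":      {"max_sales": 500.0, "scale": 200.0},
    "digital": {"max_sales": 300.0, "scale": 80.0},
    "print":   {"max_sales": 100.0, "scale": 50.0},
}

def sales(tactic, spend):
    """Estimated sales from spending `spend` on one tactic."""
    c = CURVES[tactic]
    return c["max_sales"] * (1 - math.exp(-spend / c["scale"]))

def allocate(budget, step=1.0):
    """Greedily assign `budget` in `step` increments to the tactic
    with the largest marginal gain (works because curves are concave)."""
    spend = {t: 0.0 for t in CURVES}
    for _ in range(int(budget / step)):
        best = max(CURVES,
                   key=lambda t: sales(t, spend[t] + step) - sales(t, spend[t]))
        spend[best] += step
    return spend

plan = allocate(300.0)
total = sum(sales(t, s) for t, s in plan.items())
print(plan, round(total, 1))
```

A real system adds constraints (minimum and maximum spend per tactic, shared budget pools) and alternative objectives such as minimum spend to hit a revenue target, as described above.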

    The system also has a forecasting feature that runs the same models but also lets users change assumptions about factors other than marketing spend, such as weather, competitive behavior, and distribution channels. Results can be displayed on reports, which in turn can be assembled into custom dashboards.

    MMA also offers its clients a data access tool called MarketView, which lets them view and lightly analyze the data assembled as model inputs. This is a popular service by itself, because model inputs often include data the marketers have never seen before. Giving them early access helps to speed the modeling process by letting them verify the quality of the data.


    Tuesday, January 08, 2013

    Top Five Metrics for Revenue Generation Marketers

    Marketing measurement is a perennially popular topic. I myself have just completed a white paper on Top Five Metrics for Revenue Generation Marketers, sponsored by LeadMD, and touched on it in a separate Gleanster study, Revenue Performance Management - The Evolution of Marketing Automation. With both of these on my mind, I also paid new attention to Eloqua’s list of five key revenue performance indicators (listed in the ‘Take a tour’ graphic on this page). The obvious question was whether these three sources agreed about what’s important.

    The answer is: not exactly. The following table compares the top metrics from each paper, with analogous items on the same row:

    The only item that’s clearly shared across all three lists is the number of leads generated, and even that takes a bit of squinting to include Eloqua’s measure of “reach”, which is really the number of leads currently at different stages. You could also argue that close rate and conversion rates are pretty much the same thing, and therefore also present on all three lists. (Again, a bit of squinting is required). Three of the other items appear just twice (return on investment, revenue, and time to close). The remaining three occur just once.

    What accounts for the inconsistencies? I’d say mostly it’s the nature of the lists. The Gleanster list is from a survey of what marketers actually do: it’s no accident that nearly all items are quite easy to calculate. (Return on investment is a glaring exception, and I very much doubt that 73% of marketers actually calculate it today. So let’s just assume that figure is aspirational.)

    The other two lists are prescriptive: that is, they show what an expert feels should be done, not what marketers actually do. Look closely, and you'll see that the lists are quite similar.  Four of the five measures are shared.  Even the two non-matching items are related: my fifth item is cost and Eloqua's is return, which is a combination of cost and revenue. 

    The apparent difference between the lists is that mine looks more simplistic. It starts with a three-part formula for calculating revenue: (number of leads) x (close rate) x (revenue per closed lead). A fourth factor, cost, combines with revenue to create return on investment. The fifth factor, time, is needed to forecast revenue by period.

    Eloqua’s list breaks those same factors down by stages. That is, instead of a single close rate it has a set of conversion rates from one stage to the next. It similarly breaks number of leads into reach (number of leads at each stage), revenue into value (expected revenue from leads at each stage), and time into velocity (number of days spent at each stage). This makes total sense, and if you read my paper, you’ll see that I recommend breaking the measures into stages in almost exactly the same way.* The reason is that reporting on stages gives much greater insight into what’s working well or poorly, and thus helps marketers to see where they should make changes. Providing this sort of actionable information is probably the most important purpose for any measurement system.
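The two views are easy to relate with a small worked example (all stage names and counts below are invented for illustration):

```python
# Sketch relating the simple three-factor revenue formula to the
# stage-by-stage view. Stage names and counts are invented.

# simple view: revenue = leads x close rate x revenue per closed lead
leads, close_rate, revenue_per_close = 1000, 0.05, 2000.0
revenue = leads * close_rate * revenue_per_close
print(revenue)  # 100000.0

# stage view: per-stage counts give per-stage conversion rates,
# and the overall close rate is their product
funnel = [("inquiry", 1000), ("MQL", 400), ("SQL", 150), ("closed", 50)]
for (s1, n1), (s2, n2) in zip(funnel, funnel[1:]):
    print(f"{s1} -> {s2}: {n2 / n1:.1%}")
overall = funnel[-1][1] / funnel[0][1]
print(f"overall close rate: {overall:.1%}")  # 5.0%
```

The stage view carries strictly more information: a drop in the overall close rate could come from any of the three stage conversions, and only the breakdown shows which one to fix.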

    In short, Eloqua and I pretty much agree on what marketers should measure. Now if the marketers themselves would join the consensus.

    ______________________________________________________________________________
    * The paper also gives plenty of sage advice on how to actually build a system based on these measures.

    Friday, March 30, 2012

    Survey of Surveys: Budgets and Process are Main Barriers to Marketing Technology Success

    I recently gave a Web presentation comprised almost entirely of slides from different surveys. This was a bit of an experiment and, sad to say, it didn’t seem terribly successful. I did weave the slides into a nice little story line – marketers know they need better technology, poor data is the root of their problem, and we know how to solve this – but even that wasn’t enough. Pity.

    Still, preparing the slides gave me a chance to scan the surveys in my archives, which was entertaining in its own little way. Many surveys ask similar questions, which gave me some choices during my preparation. But I didn’t look carefully at how they compare.

    Today I’ll do that. I’ve chosen one of the most popular questions: what are the barriers to marketing technology adoption? I have versions of this from seven different surveys within the past year.

    Of course, each survey uses different terms. To make the comparison, I collapsed the various answers into a few reasonably-distinct categories, committing a certain amount of shoe-horning along the way. I then recorded where each answer ranked in each survey, compiled the results, and did a crude ranking with a combination of mathematical wizardry and body english.  (Multiple answers for the same survey indicate I placed several questions into the same category.)
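For anyone who wants something more mechanical than wizardry and body english, a Borda-style score is one conventional way to combine ranks across surveys. The data below is invented, not the actual survey results:

```python
# Sketch of a Borda-style rank aggregation as a mechanical
# alternative to hand ranking. All survey data here is invented.
SURVEY_RANKS = {  # barrier -> ranks received across surveys (1 = top)
    "budget":  [1, 1, 1, 1, 6, 7],
    "process": [2, 3, 2, 3],
    "metrics": [5, 4, 6],
}
MAX_RANK = 8  # assume each survey listed up to 8 barriers

def borda_score(ranks):
    """Higher is better; surveys that omit a barrier contribute nothing."""
    return sum(MAX_RANK - r for r in ranks)

scores = {b: borda_score(r) for b, r in SURVEY_RANKS.items()}
ordered = sorted(scores, key=scores.get, reverse=True)
print(ordered)  # ['budget', 'process', 'metrics']
```

Note that this scheme still embeds judgment calls (how to score omissions, how to weight surveys), so it formalizes rather than eliminates the shoe-horning.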

    Results are below.  I've shaded the first ranked answers in orange and the second and third ranked answers in yellow.


    My first observation was the sheer inconsistency of the answers. Budget issues emerged as a clear number one, but they reached that rank on just four of the seven surveys and ranked quite low on the other two that included them. The second-ranked item (marketing process) was never listed first; it ended where it did because it had the most twos and threes. No other item was ranked first more than once or in the top three more than twice.

    Things made a bit more sense when I looked at the survey audiences. Winterberry and Forrester were specifically about online marketing, Gleanster and Marketing Sherpa were B2B surveys, and IBM and the two CMO Council studies were of general marketers. Since most B2B marketing is also online, it makes sense to look at the first four as one group and the other three as another.

    Now we see some interesting consistencies:

    • Budget isn’t much of an issue for the online and B2B marketers, but dominant for the mixed marketers.

    • Marketing process and marketing staff skills are major concerns for online and B2B but rarely mentioned by the mixed marketers.

    • Senior management support, and to a lesser extent IT support and technology capabilities, are significant barriers for mixed marketers but don’t slow down the online and B2B groups.

    • Metrics, organizational silos, and the economy are cited occasionally by both groups but don’t seem to be major issues for either.

    So there’s a fairly coherent picture after all.

    • Online and B2B marketers are struggling to keep up with a rapidly changing marketplace, meaning their biggest problems are people and process. The importance of their work is obvious enough that budgets and senior management support are generally available. They have the technical savvy and independence to avoid issues with IT support and organizational silos.

    • Mixed marketers, working in traditional channels, still struggle with budgets, metrics, and senior management. They have mature marketing organizations, so process and skills are in place, at least for traditional programs. They do struggle more with IT, technology, and organizational silos, because they lack their own technical skills and have limited clout in the organization.

    • Everybody says they care about metrics but it's rarely a top priority.


    Or at least that’s my take. I’ve displayed the actual surveys below – if you reach other conclusions or spot any other patterns, let me know.

    Wednesday, June 08, 2011

    Coremetrics Offers a Foggy View of Lifecycle Analysis

    I stumbled over an Adexchanger interview yesterday with John Squire, the Chief Strategy Officer of IBM Coremetrics. It first caught my eye because the headline read “IBM’s Vision for the Marketer”, which is always a topic of interest. Then I noticed it was touting new reporting feature called Coremetrics Lifecycle, which the company describes as “the industry’s first application geared to enable online marketers to track and understand how customers progress through long-term conversion lifecycles.”

    This was intriguing. On one hand, I’ve seen plenty of systems that track customers through the buying process, including Eloqua, Marketo, Leadformix, ClearSaleing, C3 Metrics, and Encore Media Metrics. So the claim to be first is questionable. But, on the other hand, seeing another vendor offer this sort of analysis reinforces the importance of the concept.

    But a closer look at Lifecycle itself was disappointing. The product does allow tracking of individual Web site visitors over time, which is the foundation of lifecycle analysis. But, in my opinion, a lifecycle tracking system reports on movement of customers across stages within the lifecycle. That is, it shows conversions from one stage to the next. This implies reports that show the previous stages of customers who enter a new stage (“where they came from”), and show the destinations of customers who leave a stage (“where they went”). These are typically represented as a matrix showing all combinations of previous and current stages, or a flow chart that highlights the most common before-and-after pairs.

    Lifecycle does none of this. Rather, it lets users define any number of segmentation schemes and count the number of customers in each segment. It does report how many customers entered each segment during a specified time period, but not where they came from. In fact, there is no requirement for a logical progression from one segment to the next, which to me is what a lifecycle implies.

    Lifecycle has some other useful features. It can report on the most common marketing treatments received by people who moved into a segment, giving some insight into treatment effectiveness. It calculates the average number of days and Web sessions that customers spend in a segment, which is a limited velocity measure. It also lets users select segment members and send them messages through Coremetrics’ products for email, display ad retargeting, and Web site personalization, although it's not clear the process can be automated.

    But a proper lifecycle analysis tool would go much further. It would calculate the end-to-end completion rates, show the drop-off from one stage to the next, estimate the incremental impact of specific treatments, project future segment counts, and show changes in these measures over time. So while I’m pleased that Coremetrics is promoting the concept of lifecycle analysis, I’m disappointed that its product doesn’t deliver a real lifecycle measurement solution.

    Addendum - June 19, 2011

    After the original post and IBM's comment on it, I reviewed the Lifecycle product with the Coremetrics team. This uncovered no substantive errors in the original post, although a couple of points could have been stated more clearly.

    - the system supports two types of lifecycles, one requiring that customers progress through the stages in sequence and one that does not. Users specify the type when they set up a new lifecycle. In both cases, the stages are defined by selection rules created by the user.

    - there is a limit of six stages per lifecycle.

    - for sequential lifecycles, the system will warn the user if the selection rules are not inherently sequential. (An inherently sequential rule might be based on the number of purchases made; you can't make three purchases without having previously made two. Other stage definitions, such as downloading a white paper or leaving a comment, might come in any order and, therefore, are not inherently sequential.)

    - in a sequential lifecycle, the system will not allow customers to advance outside of sequence even if the definitions would allow it. Nor does it report on customers who would qualify for a later stage but cannot reach it because they didn't qualify for a previous one.

    - the system's primary report shows the number of customers within each stage during a specified date range. Think of this as an inventory. A "Migrator" report shows how many customers entered their current stage during the report period: for example, there were 500 customers in stage 3, of whom 200 first entered stage 3 during this period. This gives some sense of movement, but it's not the classic funnel analysis showing the percentage of customers in each stage who eventually move to the next stage.

    - users can run the standard reports against "segments", which could be defined as anything including a cohort of customers who entered the system during a specified time period. A Lifecycle inventory report for such a cohort would show how many customers reached each stage and got no further. This is the information needed to build a classic funnel analysis, although users would have to extract the data and manipulate it to produce an actual funnel report. This would be done outside of Coremetrics, because there is no end-user report writer.

    - reports show the average number of days and Web sessions it takes customers to reach each stage (i.e., since they first entered the system), not the number of days and sessions spent in each stage, as I wrote originally.

    - users do have the option to create a recurring process that automatically selects customers in a particular stage and sends them an email or other message. The system could apply a few rules to this process, such as eliminating people who had been selected previously. But more sophisticated controls would be handled outside of Coremetrics, in the message delivery system.

    - the system can profile customers in each stage against many attributes (products purchased, geography, social network membership, etc.) in addition to marketing contents received. But, as I wrote originally, the reporting only shows the percentage of customers in each stage who match a particular attribute: this is far from measuring influence, for reasons I'll explain in a future post.

    - we confirmed that the system doesn't do projections of future inventory counts, report on out-of-sequence customer movements, or allow customers to migrate backwards into lower-ranked stages (as might happen if stages were based on recency or ratios).
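    Putting the reporting pieces above together: a classic funnel can be assembled from a Lifecycle inventory report run against a cohort segment. Here's a minimal sketch of that manipulation (which, as noted, would happen outside of Coremetrics); the stage counts are invented for illustration:

    ```python
    # Hypothetical cohort inventory report: inventory[i] = customers whose
    # *final* stage was stage i (they reached it and got no further).
    inventory = [400, 250, 200, 100, 40, 10]  # six stages, made-up numbers

    # Customers who reached stage i = everyone whose final stage is i or later.
    reached = [sum(inventory[i:]) for i in range(len(inventory))]

    # Classic funnel: share of stage-i customers who moved on to stage i+1.
    conversion = [reached[i + 1] / reached[i] for i in range(len(reached) - 1)]

    print(reached)  # [1000, 600, 350, 150, 50, 10]
    print([round(c, 2) for c in conversion])
    ```

    The key step is the cumulative sum from the bottom of the funnel upward, which converts "got this far and stopped" counts into "reached this stage" counts.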

    I'm happy to have clarified these matters but none of this changes my original assessment: Lifecycle is a useful product that falls far short of serious life stage analysis.

    Monday, April 11, 2011

    [x+1] NexTargeting Conference: Cross-Channel Attribution and Online Ad Scalability Remain Hot Topics

    Continuing my adventures in online ad measurement, I attended [x+1]’s NexTargeting Summit last week. This reinforced and refined my conclusions from last month’s OMMA Metrics conference, which identified the burning industry issues as:

    - better understanding of the interactions between online and offline events (both advertising and results), and

    - better scalability for successful online advertising programs.

    The online / offline connection was covered by MarketShare CEO Jon Vein, who presented studies that showed including the “indirect impact” of online display ads can dramatically improve their reported return on investment. He also said his firm has found that marketing mix models with complete data can explain as much as 98% of the variance in revenue, while optimization based on mix models can typically improve marketing effectiveness by 10% to 15%. Although I don’t recall Vein mentioning it during his actual presentation, he did tell me in a side conversation that his firm purchased JovianData last year in order to expand its ability to work with individual-level data. MarketShare and [x+1] announced an alliance last month to combine MarketShare’s cross-channel analytics with [x+1]’s digital targeting.

    Scalability was covered by [x+1] itself, which announced extension of its Media+1 audience targeting platform to combine information from direct media buys and ad exchanges. The relationship between that extension and scalability is a bit complicated, but it boils down to this: combined information lets marketers control the number of ads served to individual consumers across both types of media buys, which segment-level purchases do not. This means that marketers can expand their budgets by targeting ads to new individuals (=effective scaling) rather than bombarding the same people with more messages (=ineffective scaling). That this mimics the reach and frequency measures used in traditional mass marketing (i.e., television) is a happy bonus.
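    The core mechanism here is a cross-buy frequency cap: a shared impression count per consumer that all buying channels consult. A toy sketch of the idea (the cap value and function names are my own, not [x+1]'s):

    ```python
    # Toy sketch of cross-buy frequency capping: direct buys and exchange
    # bids both check one shared per-consumer impression count.
    FREQUENCY_CAP = 5  # hypothetical max impressions per consumer, all buys combined

    impressions = {}  # consumer id -> impressions served so far

    def should_serve(consumer_id: str) -> bool:
        """Serve only if this consumer is still under the combined cap."""
        return impressions.get(consumer_id, 0) < FREQUENCY_CAP

    def record_impression(consumer_id: str) -> None:
        """Record a served ad, whatever channel delivered it."""
        impressions[consumer_id] = impressions.get(consumer_id, 0) + 1
    ```

    Once a consumer hits the cap, further spend is pushed toward consumers who haven't, which is exactly the "new individuals rather than more messages" effect described above.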

    I’ve skipped some of the subtleties of the Media+1 product. These include tracking the degree of overlap between the audiences of different direct-buy Web purchases; identifying optimal message frequency by customer segment; using direct-buy Web sites to establish a base of impressions and then supplementing these on a customer-by-customer basis through real time bidding on ad exchanges; and using scorecards to track performance after initial customer acquisition. The bottom line on Media+1’s beta client was reallocating 40% of the online ad budget to achieve a 20% improvement in results.

    [x+1] also used the conference to announce an even broader product, called [x+1] Origin, scheduled for release this summer. This will build a customer-level data hub that combines data and sends targeted messages across display ads, Web site, email, and mobile. I asked [x+1] CEO John Nardone whether it’s actually possible to identify the same customer across all those channels, and he said it’s not an issue in many cases, since you can often give the customer a reason to log in or otherwise identify herself.

    Of course, the big exception is acquisition, which seems like a pretty big exception indeed. (“Other than that, how did you like the play, Mrs. Lincoln?”) But tracking mechanisms do get better all the time and there’s plenty of value in better treatment within existing customer relationships. So it’s definitely a good start.

    Thursday, March 24, 2011

    OMMA Metrics Conference: Online Ads Must Prove Real Value To Succeed

    I took a break from my usual obsessions yesterday to attend the New York edition of MediaPost’s OMMA Metrics and Measurement conference. It was a good chance to dive into this particular sector of the marketing analytics universe. (Another version of the program will be presented in San Francisco in July; the company also live streams a free Webcast. You can also download selected presentations from yesterday.)

    If there was an overriding theme to the event, it was frustration that online advertising isn’t attracting as much money as it should. There was more than a little “TV-envy”: the feeling that TV gets more advertising because buying is based on simple, widely-accepted audience measures. Some speakers argued for duplicating this situation, by removing some middlemen and creating standard online audience measures.

    Others pointed to a deeper issue: that online marketers can’t measure the value of their efforts in terms of revenue or brand metrics like awareness and preference. In this view, TV buyers accept simple measures like Gross Rating Points because these measures have proven over time to correlate with real business results. Media mix modeling has more recently confirmed this. But except for direct response, online media can’t show the same relationship. This forces online marketers to report endless (but never complete) data about who saw what and how they acted, in the hopes that piling on enough details will somehow make advertisers happy. It never does.

    This is the online version of the old joke about the drunk who loses his keys in the alley but looks for them under the streetlamp “because the light is better”. Moral of story: no volume of irrelevant data can substitute for the information you really need.

    In the case of online advertising, the dark alley is the connections between ad placements and final business results. Several speakers touched on parts of this. IBM’s Yuchun Lee gave an opening keynote that highlighted the pervasive influence of online information over all customer activities, not just online purchases. Adometry’s Steve O’Brien explicitly stated that attribution must measure the incremental impact of each marketing effort on final results (although I think he limited this to online results). ForeSee Results’ Larry Freed stressed the need to trace all online and offline behaviors to understand their true role in final outcomes. Others cited studies where careful measurement of indirect results found online advertising to be much more powerful than direct attribution alone would suggest.

    Yesterday’s speakers also raised the problem of scalability: that is, being able to duplicate and expand on success. This is one area where TV envy makes sense, because it’s easy to add more Gross Rating Points and be reasonably sure of getting the expected results. Online ad buying is more like buying print ads or mailing lists: you may have some sense of the audience demographics, but the only way to really know how an audience will perform is to run a test. But this isn't a measurement problem: simple, standard measures that hide true audience differences will always be unreliable predictors of actual results. What’s really needed are better testing methods to predict as quickly and cheaply as possible how each new audience will perform. The trick is that you’re not just looking at immediate response, but at all of those indirect effects that are so tricky to capture in the first place. Now you have to predict them in advance as well as measure them after the fact.

    Nobody said it would be easy.

    Friday, October 15, 2010

    Fractional Response Attribution is Worse Than Nothing

    Summary: Should companies apply fractional revenue attribution when more sophisticated methods are impractical? I think not: it gives inaccurate results that could lead to bad decisions. Better to avoid financial measures altogether if you can't do them properly.

    I spent most of the past week in San Francisco at overlapping conferences for the Direct Marketing Association and Marketo. My Marketo presentation was based on the marketing measurement white paper I recently wrote for them, which argues that measurement should be based on tracking buyers through stages in the purchase process. One corollary to this is not attributing fractions of revenue among different marketing touches. The analogy I’m currently using is baking a cake – it doesn’t make sense to assign partial credit for the final flavor to different ingredients: the recipe as a whole either works or doesn’t. Only testing can determine the impact of making changes.

    Given this mindset, I was more than a little surprised to attend a DMA panel discussion where two of the more sophisticated marketing measurement vendors described their systems as providing fractional attribution. Both vendors also offer more advanced methods and both made clear that they used such methods in appropriate situations. But they seemed to feel that when adequate data is not available, fractional attribution is better than nothing.

    I certainly understand their attitude. Many of the business-to-business marketers at the Marketo conference have exactly this problem: their data volumes are too small to accurately measure the incremental impact of most marketing programs. The best suggestion I can make is that they run whatever tests their volumes make practical. I’d further suggest that testing may actually be more practical than they realize if they actively and creatively look for opportunities to do it.

    But, again, the vendors on my panel knew this. The examples they gave were situations where companies had previously attributed all marketing revenue to the “last touch” before an actual purchase or other conversion event. They used fractional attribution to help people (marketers and those who fund them) see that other contacts also contribute to those final results. The practical goal was to justify funding for early-stage programs, such as search engine optimization and display advertising, that precede that “last touch” itself.

    I’m all in favor of recognizing that early-stage contacts have value. But I still feel that assigning a fundamentally arbitrary financial value to those contacts is a mistake. The main danger is that people who don’t know any better may use these numbers to allocate marketing funds to the more “productive” uses. Such figures are not accurate enough to support such decisions.

    I’d rather use non-monetary measures such as correlations between different kinds of touches and ultimate results. These can highlight the connections between early and later touches without providing financial values that are easily misapplied. Maybe this is just wishful thinking, but perhaps refusing to provide unreliable financial metrics will even highlight the need for tests that can provide truly meaningful ones—thus helping marketers to make the necessary investments.

    So what do you think: is fractional revenue attribution a reasonable compromise or a harmful distraction? Let me know your thoughts.

    Thursday, September 30, 2010

    Four Must-Have Metrics for Marketing Measurement

    Summary: Four critical metrics tell you most of what you need to show the value of your marketing efforts and to optimize your results. And, here's a funny picture.

    There’s still time to sign up for my October 7 Webinar on stage-based marketing measurement (sponsored by Marketo and hosted by the American Marketing Association). During my extensive, um, research, I was very pleased to find the following picture to illustrate the concept of stages:


    I like this picture both because it's amusing (a major priority) and also because it illustrates that stage definitions are constructed, not discovered. (I suppose the proper science is that evolutionary stages are objective facts, in which case our monkey friend in the photo simply has it wrong. But the deeper point still stands: whether it’s evolutionary stages or purchasing stages, someone imposes conceptual order on the jumble of reality.)*

    If the picture isn't enough reason to attend, the Webinar will also present four essential metrics of stage-based marketing measurement. (Quick review: stage-based measurement tracks the ability of marketing programs to move leads through stages in the purchase process. This is more meaningful than attributing some fraction of the final revenue directly to each program. I’ll cover this in the Webinar and also discuss it in a recent whitepaper, Winning the Marketing Measurement Marathon.)

    In case you can’t attend the Webinar, I thought I’d share the four metrics here.

    1. Marketing ROI.
    Purpose: to show the company’s return on its marketing investment.
    Inputs: marketing costs and marketing-related revenue.
    Metric: return on investment (= revenue / cost)
    Comment: As with any ROI calculation, the trick here is to determine which costs are associated with which revenues. It’s always hard for marketers to know which revenues they helped to generate, but I’ll assume a database or digital environment that identifies the treatments applied to individuals and their actual purchases. In this situation, marketing ROI is calculated by summing all marketing costs and all related revenue for a cohort of customers sharing some common feature such as original source, acquisition date range, or first purchase date, then dividing revenue by cost. Note that a meaningful calculation must also include spending on people who never purchase, so a cohort based on purchase dates must somehow include non-buyers.
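    A minimal sketch of the cohort calculation, using the revenue / cost definition above; the customers and dollar figures are hypothetical. Note that customer B never bought but still contributes cost, which is why non-buyers must stay in the cohort:

    ```python
    # Hypothetical cohort sharing a common feature (e.g., same original source).
    cohort = [
        {"customer": "A", "marketing_cost": 25.0, "revenue": 120.0},
        {"customer": "B", "marketing_cost": 25.0, "revenue": 0.0},  # never bought
        {"customer": "C", "marketing_cost": 40.0, "revenue": 60.0},
    ]

    # Sum costs and revenue over the whole cohort, non-buyers included.
    total_cost = sum(c["marketing_cost"] for c in cohort)
    total_revenue = sum(c["revenue"] for c in cohort)
    roi = total_revenue / total_cost  # ROI as defined here: revenue / cost

    print(roi)  # 2.0
    ```

    Dropping customer B would overstate ROI (180 / 65 instead of 180 / 90), which is exactly the trap the comment warns about.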

    2. Program ROI
    Purpose: measure the relative performance of individual marketing programs.
    Inputs: incremental marketing cost, incremental revenue
    Metric: incremental ROI
    Comment: Obviously the key word here is “incremental”. Marketing programs exist in the context of other activities that influence buyer behavior. The only thing you can really measure is the incremental change that occurs when a particular program is added or removed from the mix. Combined with incremental costs, this gives an incremental ROI for the program. Spending more on high ROI programs and less on low ROI programs is how marketers optimize their results. Remember, though, that ROI is just one part of the equation. In practice, marketers must balance it against considerations such as revenue goals and marketing budgets.

    Incremental measurement requires formal tests that compare performance of two similar groups which differ only in whether they received a particular program. These tests can cover any type of program, including nurture programs that don’t acquire new names. Proper measurement must track through the end of the buying cycle, since a program’s impact on early stages might vanish or even be reversed at later stages. One common example: a free introductory offer that yields higher initial response but doesn't add to the final number of paying customers.
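    The test-and-control comparison just described can be sketched as follows; all numbers are invented, and I assume equal-sized groups so revenues compare directly:

    ```python
    # Two similar groups; only the test group received the program being measured.
    test    = {"customers": 1000, "revenue": 50000.0, "program_cost": 5000.0}
    control = {"customers": 1000, "revenue": 42000.0, "program_cost": 0.0}

    # Incremental change attributable to the program.
    incremental_revenue = test["revenue"] - control["revenue"]        # 8000
    incremental_cost = test["program_cost"] - control["program_cost"]  # 5000
    incremental_roi = incremental_revenue / incremental_cost

    print(incremental_roi)
    ```

    If the groups differed in size, both revenue figures would first be normalized per customer. And per the paragraph above, the revenue measured must be end-of-buying-cycle revenue, not an early-stage response count.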

    3. Stage Results
    Purpose: understand movement of leads through the buying stages
    Inputs: marketing costs per stage, conversions (= number of leads that move to the next stage), conversion time (= time in stage before conversion to next stage; a.k.a. velocity), lead inventory (=number of leads in each stage)
    Metrics: conversion rate, cost per conversion, average conversion time
    Comment: These statistics describe how leads are moving from one stage to the next. The information is used to project future behaviors, to identify problem stages, to track changes in stage performance, and to compare the effects of marketing programs. Where leads in different cohorts (based on original source, acquisition date, marketing treatments, etc.) behave differently, statistics should be gathered separately for each cohort.
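    The three stage metrics are simple ratios over the inputs listed above. A sketch with hypothetical per-stage figures:

    ```python
    # Hypothetical figures for one stage of the buying process.
    stage = {
        "entered": 500,        # leads that entered this stage
        "converted": 150,      # leads that moved on to the next stage
        "cost": 7500.0,        # marketing spend attributed to this stage
        "total_days": 4500,    # total days converters spent before moving on
    }

    conversion_rate = stage["converted"] / stage["entered"]            # 0.3
    cost_per_conversion = stage["cost"] / stage["converted"]           # $50
    avg_conversion_time = stage["total_days"] / stage["converted"]     # 30 days
    ```

    In practice these would be computed separately per cohort, as the comment notes, since blending cohorts that behave differently hides the very differences you want to track.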

    One statistic you can't calculate is the ROI for stage investments. This is counter-intuitive: stage ROI should be possible because you're making investments at each stage and the investments produce leads with higher values. But in fact the aggregate value of a cohort of leads remains the same as they move through the stages; all that happens is that unproductive (i.e., valueless) leads drop out. That is, even though the value per lead increases, there is no increase in the value of all leads combined. Without a value change, you can’t calculate a return on investment.

    (Actually, there is a bit of value change as leads move through the stages because leads in later stages will need less additional investment to reach the final sale. But the expected revenue for the cohort stays constant. Of course, to the extent that a particular marketing program creates an incremental change in total value, this can be measured like any other program ROI.)
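    The constant-value argument is easy to verify with numbers. In this hypothetical cohort, 100 of the original 1,000 leads will ultimately buy a $10,000 deal; value per lead rises at each stage, but the cohort total never moves:

    ```python
    # Hypothetical cohort: 100 of the original 1000 leads will eventually buy.
    deal_value = 10000.0
    final_buyers = 100
    leads_per_stage = [1000, 400, 200, 100]  # survivors at each stage

    for leads in leads_per_stage:
        # Expected value per lead = P(eventual purchase | reached stage) * deal value.
        value_per_lead = final_buyers / leads * deal_value  # rises as leads drop out
        cohort_value = value_per_lead * leads               # but the total is constant
        print(leads, value_per_lead, cohort_value)
    ```

    Every row multiplies out to the same $1,000,000, which is why there is no value change to divide stage spending into.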

    4. Revenue Forecast
    Purpose: estimate future period revenues (by week, month, quarter, etc.) from the current lead inventory.
    Inputs: lead inventory per stage, conversion rate per stage, conversion time per stage
    Metric: revenue forecast by period
    Comment: Revenue projections are among the most critical of corporate statistics. The stage-based approach allows more accurate projections of revenue over time, starting with the current lead inventory and known stage statistics. If the projections can distinguish marketing-generated leads from other leads, they can also give a concrete measure of the value that marketing has provided to the organization. If leads from different cohorts behave differently, the projections need to use separate assumptions for each group.
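    One simple way to build such a projection (a sketch under strong assumptions: a single cohort, average conversion times treated as fixed, and hypothetical stage statistics) is to roll each stage's inventory forward through the remaining conversion rates and times:

    ```python
    # Hypothetical stage statistics and current lead inventory.
    deal_value = 5000.0
    conversion_rate = [0.40, 0.50, 0.25]  # stage 1->2, 2->3, 3->close
    days_per_stage = [30, 30, 30]         # average conversion time per stage
    inventory = [600, 200, 80]            # current leads in stages 1..3

    forecast = {}  # days from now -> expected revenue landing then
    for stage_idx, leads in enumerate(inventory):
        prob, days = 1.0, 0
        # Multiply through the remaining conversion rates; add up remaining times.
        for s in range(stage_idx, len(conversion_rate)):
            prob *= conversion_rate[s]
            days += days_per_stage[s]
        forecast[days] = forecast.get(days, 0.0) + leads * prob * deal_value

    for days in sorted(forecast):
        print(f"~day {days}: ${forecast[days]:,.0f}")
    ```

    As the comment notes, cohorts that behave differently would each get their own rate and time assumptions, and tagging marketing-generated leads separately turns the same arithmetic into a statement of marketing's contribution.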

    _____________________________________________________
    * Platonists and creationists, with their respective theories of absolute Forms and divinely-created immutable species, might argue that species actually do have an independent existence. They're wrong.