
Sunday, February 07, 2016

Marketing attribution systems: a quick look at the options

I’ve seen a lot of attribution vendors recently. If you're a regular reader here, you saw my reviews of Claritix (last week) and BrightFunnel (in December).  Last week I caught up with Jeff Winsper of Black Ink, which I hope to review before too long.  Bizible also popped up recently, although I don’t recall the occasion; possibly something related to their interesting survey on “pipeline marketing” and attribution methods.

My rational brain knows that there’s probably no reason for this flurry of sightings beyond pure coincidence. But it’s human to see patterns where they don’t exist, so I did find myself wondering if attribution is becoming a hot topic. I can easily come up with a good story to explain it: marketing technology has reached a new maturity stage where the data needed for good attribution is now readily available, the cost of processing that data has fallen far enough to make it practical, and the need has reached a tipping point as the complexity of marketing has grown. So, clearly, 2016 will be The Year of Attribution (as Anna Bager and Joe Laszlo of the Internet Advertising Bureau have already suggested).

Or not. Sometimes random is just random. But now that this is on my mind, I've taken a look at the larger attribution landscape.  Quick searches for "attribution" on G2 Crowd and TrustRadius turned up lists of 29 and 17 vendors, respectively – neither including BrightFunnel or Claritix, incidentally.  A closer look found that 13 vendors appeared on both sites, that each site listed several relevant vendors the other missed, and that both sites listed multiple vendors that were not really relevant. For what it's worth, eight of the 13 vendors listed on both sites were bona fide attribution systems -- which I loosely define to mean they assign fractions of revenue to different marketing campaigns.  I wouldn't draw any grand conclusions from the differences in coverage on G2 Crowd and TrustRadius, except to offer the obvious advice to check both (and probably some of the other review sites or vendor landscapes) to assemble a reasonably complete set of options.

I've presented the vendors listed in the two review sites below, grouping them based on which site included them and whether I qualified them as relevant to a quest for an attribution vendor.  I've also added a few notes based on the closer look I took at each system in order to classify it.  The main questions I asked were:
  • Does the system capture individual-level data, not just results by channel or campaign?  You need the individual data to know who saw which messages and who ended up making a purchase.  Those are the raw inputs needed for any attempt at estimating the impact of individual messages on the final result.  
  • Does the system capture offline as well as online messages?  You need both to understand all influences on results.  This question disqualified a few vendors that look only at online interactions.  In practice, most vendors can incorporate whatever data you provide them, so if you have offline data, they can use it.  TV is a special case because marketers don't usually know whether a specific individual saw a particular TV message, so TV is incorporated into attribution models using more general correlations.
  • How does the vendor do the attribution calculations?  Nearly all the vendors use what I've labeled an "algorithmic" approach, meaning they perform some sort of statistical analysis to estimate the attributed values.  The main alternative is a "fractional" method that applies user-assigned weights, typically based on position in the buying sequence and/or the channel that delivered the message (a simple sketch appears after this list).  The algorithmic approach is certainly preferred by most marketers, since it is based on actual data rather than marketers' (often inaccurate) assumptions.  But algorithmic methods need a lot of data, so B2B marketers often use fractional methods as a more practical alternative.  It's no accident that Bizible, the only B2B specialist listed here, is also the only vendor on the list with a fractional method; B2B specialists BrightFunnel and Claritix take the same approach.  It's also important to note that the technical details of the algorithmic methods differ greatly from vendor to vendor, and of course each vendor is convinced that its own method is by far the best.
  • Does the vendor provide marketing mix models?  These resemble attribution except they work at the channel level and are not based on individual data.  Classic marketing mix models instead look at promotion expense by channel by market (usually a geographic region, sometimes a demographic or other segment) and find correlations over time between spending levels and sales (a toy example also appears after this list).  Although mix models and algorithmic attribution use different techniques and data, several vendors do both and have connected them in some fashion.
  • Does the vendor create optimal media plans? I'm defining these broadly to include any type of recommendation that uses the attribution model to suggest how users should reallocate their marketing spend at the channel or campaign level.  Systems may do this at different levels of detail, with different levels of sophistication in the optimization, and with different degrees of integration to media buying systems. 
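To make the fractional approach concrete, here is a minimal sketch of a position-based model. The 40/20/40 weights, campaign names, and deal value are all invented for illustration; they are not taken from any of the vendors discussed here.

```python
# Minimal sketch of position-based fractional attribution (hypothetical weights).
# Each closed deal has an ordered list of marketing touches; revenue is split
# 40% to the first touch, 40% to the last, and 20% evenly across the middle.
from collections import defaultdict

def fractional_credit(touches, revenue, first_w=0.4, last_w=0.4):
    """Return {campaign: credited revenue} for one deal's touch sequence."""
    credit = defaultdict(float)
    if len(touches) == 1:
        credit[touches[0]] += revenue
        return credit
    credit[touches[0]] += revenue * first_w
    credit[touches[-1]] += revenue * last_w
    middle = touches[1:-1]
    if middle:
        share = revenue * (1.0 - first_w - last_w) / len(middle)
        for campaign in middle:
            credit[campaign] += share
    return credit

# Example: one $10,000 deal touched by three campaigns.
deal_touches = ["webinar", "email_nurture", "trade_show"]
print(dict(fractional_credit(deal_touches, 10000)))
# {'webinar': 4000.0, 'trade_show': 4000.0, 'email_nurture': 2000.0}
```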
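And for contrast, a toy version of the marketing mix idea: regress sales by period on spend by channel and read each channel's estimated return off the coefficients. The spend figures and "true" returns below are simulated; real mix models add seasonality, adstock effects, and many other refinements.

```python
# Toy marketing mix model sketch (simulated data): regress weekly sales on
# spend by channel to estimate each channel's contribution.
import numpy as np

rng = np.random.default_rng(1)
weeks = 104
tv_spend = rng.uniform(0, 100, weeks)        # $000s per week (made up)
search_spend = rng.uniform(0, 50, weeks)
# Simulated "truth": $1 of TV returns $2, $1 of search returns $4, plus base sales.
sales = 500 + 2.0 * tv_spend + 4.0 * search_spend + rng.normal(0, 20, weeks)

X = np.column_stack([np.ones(weeks), tv_spend, search_spend])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"base sales {coefs[0]:.0f}, TV return {coefs[1]:.2f}, search return {coefs[2]:.2f}")
```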
Of course, there are plenty of other points that differentiate these systems.  But this list should be a useful starting point if you're considering a new attribution system -- as well as a reminder of the need to define your requirements and drill into the details before you make a final selection.

Attribution Systems

G2 Crowd and TrustRadius
  • Abakus: individual data; online and offline; algorithmic; optimal media plans
  • Bizible: individual data; online and offline; fractional; merges marketing automation plus CRM data; B2B
  • C3 Metrics: individual data; online and TV; algorithmic; optimal media plans 
  • Conversion Logic: individual data; online and TV; algorithmic; optimal media plans
  • Convertro: individual data; online and offline; algorithmic; mix model; optimal media plans; owned by AOL
  • MarketShare DecisionCloud: individual data; online and offline; algorithmic; mix models; optimal media plans; owned by Neustar
  • Rakuten Attribution: individual data; online only; algorithmic; optimal media plans; formerly DC Storm, acquired by Rakuten marketing services agency in 2014
  • Visual IQ: individual data; online and offline; algorithmic; optimal media plans
G2 Crowd only
  • BlackInk: individual data; online and offline; algorithmic; provides customer, marketing & sales analytics 
  • Kvantum Inc.: individual data; online and offline; algorithmic; mix models; optimal media plans
  • Marketing Evolution:  individual data; online and offline; algorithmic; mix model; optimal media plans
  • OptimaHub MediaAttribution: individual data; online and offline; attribution method not clear; data analytics agency with tag management, data collection, and analytics solutions
    TrustRadius only
    • Adometry: individual data; online and offline; algorithmic; mix models; optimal media plans; owned by Google
    • ThinkVine: individual data; online and offline; algorithmic; mix models; optimal media plans; uses agent-based and other models
    • Optimine:  individual data; online and offline; algorithmic; optimal media plans
    Other Systems

    G2 Crowd and TrustRadius

    G2 Crowd only
    • Adinton: Adwords bid optimization and attribution; uses Google Analytics for fractional attribution
    • Blueshift Labs: real-time segmentation and content recommendations; individual data but apparently no attribution
    • IBM Digital Analytics Impression Attribution: individual data; online only; shows influence (not clear whether it uses fractional or algorithmic attribution); based on Coremetrics
    • LIVE: for clients of WPP group; does algorithmic attribution and optimization
    • Marchex: tracks inbound phone calls
    • Pathmatics: digital ad intelligence; apparently no attribution
    • Sizmek: online ad management; provides attribution through alliance with Abakus
    • Sparkfly: retail specialist; individual data; focus on connecting digital and POS data; campaign-level attribution but apparently not fractional or algorithmic
    • Sylvan: financial services software; no marketing attribution 
    • TagCommander: tag management system; real-time marketing hub with individual profiles and cross-channel data; custom fractional attribution formulas
    • TradeTracker: affiliate marketing network
    • Zeta Interactive ZX: digital marketing agency offering DMP, database, engagement and related attribution; mix of tech and services

    Friday, July 18, 2014

    Are Millennial Marketers More Analytical?

    I had an interesting conversation this week with a vendor of marketing measurement systems on the question of why more marketers won’t buy his type of software. After all, surveys often show that marketers and CEOs alike rate better measurement as a high priority. Yet actual measurement techniques don’t improve much from year to year: to cite the most recent report to cross my desk, the 2014 State of Marketing Measurement Survey Report from Ifbyphone found that 45% of marketers are measuring Return on Investment in 2014 vs. 40% in 2013 -- a gain that is probably within the survey's margin of error.  Other, simpler measures are more common and growing more quickly, but that’s exactly the point: marketers don’t invest in meaningful performance measures like ROI.


    My vendor friend’s suspicion was that marketers don’t buy better measurement because, whatever they say in surveys, they really don’t want to be measured. My own opinion, based on comments from marketers over the years, is they don’t have time to put advanced measurement systems in place.

    Of course, time is a matter of prioritization, so this really means that marketers think the time spent on an advanced measurement project will produce less value than if that time were spent on something else.  In other words, marketers don’t invest in advanced measurement because they don’t think the resulting information will drive enough improvement in their marketing results.  That's not an unreasonable belief: much ROI information is in fact interesting but not actionable and, therefore, adds no business value.  Further evidence: the advanced measurement techniques that have been widely adopted, like marketing mix models and multi-touch attribution, all have proven bottom-line impact. The impact of marketing ROI, on the other hand, is often less clear.

    Then our conversation took an unexpected turn: the vendor speculated that younger marketers might be more analytical and hence more inclined to ROI measurement.  This was a new thought to me and offered the cheery prospect of an actual change from the long-term status quo. But neither of us had seen any research on the topic, so we couldn’t judge whether it was likely to be true.  End of discussion.

    I’ve since had time to look into this more deeply. There’s plenty of research on millennials (currently 19 to 34 years old) in general and a fair amount on their behavior in the workplace. Most of it reinforces familiar stereotypes: millennials are collaborative, tech-savvy, results-focused, fast-working, multi-tasking, anti-hierarchical, socially-conscious, company-disloyal, and of course digitally connected. But none of the research shed much light on whether they’re more or less analytical than older generations: since they’re skeptical of authority, you can expect them to be more open to challenging past assumptions, but this doesn’t necessarily mean they rely on data to resolve those challenges. They could just as easily rely on what feels right to them, even though they’ve had little time to sharpen their intuitions on the stone of reality.  Even their presumed affinity for digital media, which is certainly more measurable than traditional media, doesn’t necessarily translate into an interest in ROI measurement. Indeed, most digital measurements such as Web traffic and social media interactions have almost nothing to do with ROI.  Finding that millennials rely heavily on them would bode poorly for advanced measurement methods.

    But all of this is just speculation, and I am definitely a fact-based kinda guy. Has anyone seen any information on how marketers’ behaviors differ by generation? If not, would you find it an interesting topic for a survey?

    Thursday, September 30, 2010

    Four Must-Have Metrics for Marketing Measurement

    Summary: Four critical metrics tell you most of what you need to show the value of your marketing efforts and to optimize your results. And, here's a funny picture.

    There’s still time to sign up for my October 7 Webinar on stage-based marketing measurement (sponsored by Marketo and hosted by the American Marketing Association). During my extensive, um, research, I was very pleased to find the following picture to illustrate the concept of stages:


    I like this picture both because it's amusing (a major priority) and also because it illustrates that stage definitions are constructed, not discovered. (I suppose the proper science is that evolutionary stages are objective facts, in which case our monkey friend in the photo simply has it wrong. But the deeper point still stands: whether it’s evolutionary stages or purchasing stages, someone imposes conceptual order on the jumble of reality.)*

    If the picture isn't enough reason to attend, the Webinar will also present four essential metrics of stage-based marketing measurement. (Quick review: stage-based measurement tracks the ability of marketing programs to move leads through stages in the purchase process. This is more meaningful than attributing some fraction of the final revenue directly to each program. I’ll cover this in the Webinar and also discuss it in my recent whitepaper, Winning the Marketing Measurement Marathon.)

    In case you can’t attend the Webinar, I thought I’d share the four metrics here.

    1. Marketing ROI.
    Purpose: to show the company’s return on its marketing investment.
    Inputs: marketing costs and marketing-related revenue.
    Metric: return on investment (= revenue / cost)
    Comment: As with any ROI calculation, the trick here is to determine which costs are associated with which revenues. It’s always hard for marketers to know which revenues they helped to generate, but I’ll assume a database or digital environment that identifies the treatments applied to individuals and their actual purchases. In this situation, marketing ROI is calculated by summing all marketing costs for a cohort of customers sharing some common feature such as original source, acquisition date range or first purchase date. Note that a meaningful calculation must also include spending on people who never purchase, so a cohort based on purchase dates must somehow include non-buyers.
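Here's a minimal sketch of that cohort calculation, with hypothetical numbers. It uses the revenue-divided-by-cost definition given above and also shows the more common net-return variant for comparison.

```python
# Sketch of cohort-level marketing ROI as described above (hypothetical data).
# The cohort is everyone acquired from one source in a date range, including
# people who never purchased, so their marketing costs still count.
cohort = [
    {"id": 1, "marketing_cost": 120.0, "revenue": 900.0},
    {"id": 2, "marketing_cost": 95.0,  "revenue": 0.0},   # non-buyer still costs money
    {"id": 3, "marketing_cost": 110.0, "revenue": 450.0},
]

total_cost = sum(c["marketing_cost"] for c in cohort)
total_revenue = sum(c["revenue"] for c in cohort)

# ROI per the definition in this post (revenue / cost); the more common
# (revenue - cost) / cost variant is shown alongside for comparison.
roi_ratio = total_revenue / total_cost
roi_net = (total_revenue - total_cost) / total_cost
print(f"revenue/cost = {roi_ratio:.2f}, net ROI = {roi_net:.1%}")
```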

    2. Program ROI
    Purpose: measure the relative performance of individual marketing programs.
    Inputs: incremental marketing cost, incremental revenue
    Metric: incremental ROI
    Comment: Obviously the key word here is “incremental”. Marketing programs exist in the context of other activities that influence buyer behavior. The only thing you can really measure is the incremental change that occurs when a particular program is added or removed from the mix. Combined with incremental costs, this gives an incremental ROI for the program. Spending more on high ROI programs and less on low ROI programs is how marketers optimize their results. Remember, though, that ROI is just one part of the equation. In practice, marketers must balance it against considerations such as revenue goals and marketing budgets.

    Incremental measurement requires formal tests that compare performance of two similar groups which differ only in whether they received a particular program. These tests can cover any type of program, including nurture programs that don’t acquire new names. Proper measurement must track through the end of the buying cycle, since a program’s impact on early stages might vanish or even be reversed at later stages. One common example: a free introductory offer that yields higher initial response but doesn't add to the final number of paying customers.
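A small sketch of the incremental math, again with made-up numbers: compare revenue per person in a test group that received the program against a control group that didn't, then divide the incremental revenue by the incremental program cost.

```python
# Sketch of incremental program ROI from a test/control split (hypothetical numbers).
test_size, control_size = 5000, 5000
test_revenue, control_revenue = 260000.0, 220000.0   # total revenue per group
program_cost_per_person = 4.0                        # incremental cost of the program

# Normalize to revenue per person so unequal group sizes wouldn't distort the result.
incremental_revenue = (test_revenue / test_size - control_revenue / control_size) * test_size
incremental_cost = program_cost_per_person * test_size

incremental_roi = (incremental_revenue - incremental_cost) / incremental_cost
print(f"incremental revenue: ${incremental_revenue:,.0f}, incremental ROI: {incremental_roi:.0%}")
```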

    3. Stage Results
    Purpose: understand movement of leads through the buying stages
    Inputs: marketing costs per stage, conversions (= number of leads that move to the next stage), conversion time (= time in stage before conversion to next stage; a.k.a. velocity), lead inventory (=number of leads in each stage)
    Metrics: conversion rate, cost per conversion, average conversion time
    Comment: These statistics describe how leads are moving from one stage to the next. The information is used to project future behaviors, to identify problem stages, to track changes in stage performance, and to compare the effects of marketing programs. Where leads in different cohorts (based on original source, acquisition date, marketing treatments, etc.) behave differently, statistics should be gathered separately for each cohort.
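For illustration, here is a quick sketch of these statistics computed from hypothetical stage data; the stage names, counts, and costs are invented.

```python
# Sketch of the per-stage statistics described above (hypothetical stage data).
stages = [
    # name, leads entering the stage, leads converting to the next stage,
    # marketing cost in the stage, total days spent in stage by converters
    {"stage": "inquiry",     "entered": 10000, "converted": 2500, "cost": 50000.0, "days": 37500},
    {"stage": "qualified",   "entered": 2500,  "converted": 800,  "cost": 40000.0, "days": 36000},
    {"stage": "opportunity", "entered": 800,   "converted": 200,  "cost": 30000.0, "days": 24000},
]

for s in stages:
    conversion_rate = s["converted"] / s["entered"]
    cost_per_conversion = s["cost"] / s["converted"]
    avg_conversion_time = s["days"] / s["converted"]   # a.k.a. velocity
    print(f'{s["stage"]:<12} rate {conversion_rate:.0%}  '
          f'cost/conv ${cost_per_conversion:,.0f}  avg days {avg_conversion_time:.0f}')
```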

    One statistic you can't calculate is the ROI for stage investments. This is counter-intuitive: stage ROI should be possible because you're making investments at each stage and the investments produce leads with higher values. But in fact the aggregate value of a cohort of leads remains the same as they move through the stages; all that happens is that unproductive (i.e., valueless) leads drop out. That is, even though the value per lead increases, there is no increase in the value of all leads combined. Without a value change, you can’t calculate a return on investment.

    (Actually, there is a bit of value change as leads move through the stages because leads in later stages will need less additional investment to reach the final sale. But the expected revenue for the cohort stays constant. Of course, to the extent that a particular marketing program creates an incremental change in total value, this can be measured like any other program ROI.)
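A quick numeric illustration of why there is no stage ROI to compute (numbers invented for the example): value per lead rises at each stage, but the cohort's expected value never moves.

```python
# Toy illustration: expected cohort value is constant across stages (made-up numbers).
deal_value = 10000.0
stages = [
    ("inquiry",     1000, 0.02),   # leads remaining, probability of eventually buying
    ("qualified",    200, 0.10),
    ("opportunity",   50, 0.40),
]
for name, leads, p_buy in stages:
    per_lead = p_buy * deal_value
    total = leads * per_lead
    print(f"{name:<12} value/lead ${per_lead:,.0f}  cohort value ${total:,.0f}")
# value per lead climbs ($200 -> $1,000 -> $4,000) but cohort value stays $200,000
```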

    4. Revenue Forecast
    Purpose: estimate future period revenues (by week, month, quarter, etc.) from the current lead inventory.
    Inputs: lead inventory per stage, conversion rate per stage, conversion time per stage
    Metric: revenue forecast by period
    Comment: Revenue projections are among the most critical of corporate statistics. The stage-based approach allows more accurate projections of revenue over time, starting with the current lead inventory and known stage statistics. If the projections can distinguish marketing-generated leads from other leads, they can also give a concrete measure of the value that marketing has provided to the organization. If leads from different cohorts behave differently, the projections need to use separate assumptions for each group.
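Here's a simple sketch of such a projection, using invented inventories, conversion rates, and stage durations; a real forecast would use cohort-specific assumptions as noted above.

```python
# Sketch of a stage-based revenue forecast (hypothetical rates and values).
# Each stage's current lead inventory is pushed through the remaining stages
# using known conversion rates and times, producing expected revenue by month.
from collections import defaultdict

# Ordered stages: (name, inventory now, conversion rate to next stage, months in stage)
stages = [
    ("inquiry",     5000, 0.25, 1),
    ("qualified",   1200, 0.30, 2),
    ("opportunity",  300, 0.25, 1),   # final rate is conversion to a closed sale
]
revenue_per_sale = 8000.0

forecast = defaultdict(float)   # month offset -> expected revenue
for start in range(len(stages)):
    leads = stages[start][1]
    months_out = 0
    for name, _, rate, months_in_stage in stages[start:]:
        leads *= rate
        months_out += months_in_stage
    forecast[months_out] += leads * revenue_per_sale

for month in sorted(forecast):
    print(f"month {month}: ${forecast[month]:,.0f}")
```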

    _____________________________________________________
    * Platonists and creationists, with their respective theories of absolute Forms and divinely-created immutable species, might argue that species actually do have an independent existence. They're wrong.

    Wednesday, June 09, 2010

    Using a Purchase Funnel to Measure Marketing Effectiveness: Better than Last-Click Attribution But Far From Perfect

    Summary: Many vendors are now proposing to move beyond "last click" attribution to measure the impact of advertising on movement of customers through a sequence of buying stages. This is a definite improvement but not a complete solution.

    Marketers have long struggled to measure the impact of individual promotions. Even online marketing, where every click can be captured, and often tracked back to a specific person, doesn’t automatically solve the problem. Merely tracking clicks doesn’t answer the deeper question of the causal relationships among different marketing contacts.

    Current shorthand for the issue is “last click attribution” – as in, “why last click attribution isn’t enough”. Of course, vendors only start pointing out a problem when they’re ready to sell you a solution. So it won’t come as a surprise that a new consensus seems to be emerging on how to measure the value of multiple marketing contacts.

    The solution boils down to this: classify different contacts as related to the different stages in the buying process and then measure their effectiveness at moving customers from one stage to the next. This is no different from the “sales funnel” that sales managers have long measured, nor from the AIDA model (awareness, interest, desire, action) that structures traditional brand marketing. All that’s new, if anything, is the claim to assign a precise value to individual messages.

    Examples of vendors taking this approach include:

    - Marketo recently announced new "Revenue Cycle Analytics" marketing measurement features with its customary hoopla. The conceptual foundation of Marketo’s approach is that it tracks the movement of customers through the buying stages. Although this itself isn’t particularly novel, Marketo has added some significant technology in the form of a reporting database that can reconstruct the status of a given customer at various points in time. Although such databases are pretty standard among business intelligence systems, few if any of Marketo's competitors offer anything similar.

    - Clear Saleing bills itself as an “advertising analytics platform”. Its secret sauce is defining a set of advertising goals (introducer, influencer, or closer) and then specifying which goal each promotion supports. Marketers can then calculate their spending against the different goals and estimate the impact of changes in the allocation. Credit within each goal can be distributed equally among promotions or allocated according to user-defined weights. While such allocation is a major advance for most marketers, it’s still far from perfect because the weights are not based on directly measuring each ad's actual impact.

    - Leadforce1 offers a range of typical B2B marketing automation features, but its main distinction is to infer each buyer's position in a four-stage funnel (discovery, evaluation, use, and affinity) based on Web behaviors. The specific approach is to link keywords within Web content to the stages and then track which content each person views. The details are worth their own blog post, but the key point, again, is that the contents are assigned to sales stages and the system tracks each buyer’s progress through those stages. Although the primary focus of LeadForce1 is managing relationships with individuals, the vendor also describes using the data to assess campaign ROI.

    Compared with last click attribution, use of sales stages is a major improvement. But it’s far from the ultimate solution. So far as I know, none of the current products does any statistical analysis, such as a regression model, to estimate the true impact of messages at either the individual or campaign level. They either rely on user-specified weights or simply treat all messages within each stage as a group. This lack of detail makes campaign optimization impossible: at best, it allows stage optimization.
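To show the kind of analysis I mean, here is a toy regression sketch. It is not any vendor's actual method; the message names, exposure data, and effect sizes are all simulated for illustration.

```python
# Toy sketch of regression-based impact estimation (not any vendor's actual method).
# Each row is one person: binary flags for which messages they saw, and whether
# they eventually converted. The fitted coefficients estimate each message's
# contribution to conversion, controlling for the other messages.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
saw_email = rng.integers(0, 2, n)
saw_webinar = rng.integers(0, 2, n)
saw_display = rng.integers(0, 2, n)

# Simulated "truth": webinars help a lot, email a little, display not at all.
logit = -2.0 + 0.3 * saw_email + 1.0 * saw_webinar + 0.0 * saw_display
converted = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([saw_email, saw_webinar, saw_display])
model = LogisticRegression().fit(X, converted)
for name, coef in zip(["email", "webinar", "display"], model.coef_[0]):
    print(f"{name}: estimated effect {coef:+.2f}")
```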

    Even more fundamentally, stage analysis assumes that each message applies to a single marketing stage. This is surely untrue. As brand marketers constantly remind us, a well-designed message can increase lifetime purchases among all recipients, whether or not they are current customers. It’s equally true that some messages affect certain stages more than others. But to ignore the impact on all stages except one is an oversimplification that can easily lead to false conclusions and poor marketing decisions.

    Stage-based attribution has its merits. It gives marketers a rough sense of how spending is balanced across the purchase stages and lets them measure movement and attrition from one stage to the next. Combined with careful testing, it could give insight into the impact of individual marketing programs. But marketers should recognize its limits and keep pressing for solutions that measure the full impact of each program on all their customers.

    Tuesday, January 20, 2009

    Salespeople: One Question Matters Most

    Back in December, the Sales Lead Management Association and LEADTRACK published a survey on lead management practices that I haven’t previously had time to write about. (The survey is still available on the SLMA Web site.) It contained 10 questions, which is about as many as I can easily grasp.

    The two clearest answers came from questions about the information salespeople want and why they don’t follow up on inquiries. By far the most desired piece of information about a lead was purchasing time frame: this was cited by 41% of respondents, compared with budget (17%), application (15%), lead score (15%) and authority (12%). I guess it’s a safe bet that salespeople jump quickly on leads who are about to purchase and pretty much ignore the others, so this finding strongly reinforces the need for nurturing campaigns that allow marketers to keep in contact with leads who are not yet ready to buy.

    Note that none of the listed categories included behavioral information such as email clickthroughs or Web page visits, which demand generation vendors make so much of. I doubt they would have ranked highly had they been included. Although behavioral data provides some insights into a lead’s state of mind, it's useful to be reminded that wholly pragmatic facts about time frame are a salesperson's paramount concern.

    The other clear message from the survey was that the main reason leads are not followed up is “not enough info”. This was cited by 55% of respondents, compared with 14% for “inquired before, never bought”, 12% for “no system to organize leads”, 10% for “no phone number”, 7% for "geo undesirable" and 2% because of "no quota on product". This is an unsurprising result, since (a) good information is often missing and (b) salespeople don’t like to waste time on unqualified leads. Based on the previous question, we can probably assume that the critical piece of necessary information is time frame. So this answer reinforces the importance of gathering that information and passing it on.

    One set of answers that surprised me a bit was that 77% or 80% of salespeople were working with an automated lead management system, either “CRM/lead management” or “Software as a Service”. I’ve given two figures because the question was purposely asked two different ways to check for consistency. The categories don’t make much sense to me because they overlap: products like Salesforce.com are both CRM systems and SaaS. Still, this doesn't affect the main finding that nearly everyone has some type of automated system to “update lead status” and “manage your inquiries” (the two different questions that were asked). This is higher market penetration than I expected, although I do recognize that those questions deal more with lead management (a traditional sales automation function) than lead generation (the province of demand generation systems). Still, to the extent that CRM systems can offer demand generation functions, there may be a more limited market for demand generation than the vendors expect.

    One final interesting set of figures had to do with marketing measurement. The survey found that 23% of companies measure ROI for all lead generation tactics, 30% measure it for some tactics, and 47% don’t measure it at all. The authors of the survey report seem to find these numbers distressingly low, particularly in comparison with the 80% of companies that have a system in place and, at least in theory, are capturing the data needed for measurement. I suppose I come at this from a different perspective, having seen so many surveys over the years showing that most companies don’t do much measurement. To me, 23% measuring everything seems unbelievably high. (For example, Jim Lenskold's 2008 Marketing ROI and Measurements Study found 26% of respondents measured ROI on some or all campaigns; the combination of "some" and "all" in the SLMA study is 53%.) Either way, of course, there is plenty of room for improvement, and that's what really counts.

    Tuesday, October 02, 2007

    Marketing Performance Measurement: No Answers to the Really Tough Questions

    I recently ran a pair of two-day workshops on marketing performance measurement. My students had a variety of goals, but the two major ones they mentioned were the toughest issues in marketing: how to allocate resources across different channels and how to measure the impact of marketing on brand value.

    Both questions have standard answers. Channel allocation is handled by marketing mix models, which analyze historical data to determine the relative impact of different types of spending. Brand value is measured by assessing the important customer attitudes in a given market and how a particular brand matches those attitudes.

    Yet, despite my typically eloquent and detailed explanations, my students found these answers unsatisfactory. Cost was one obstacle for most of them; lack of data was another. They really wanted something simpler.

    I’d love to report I gave it to them, but I couldn't. I had researched these topics thoroughly as preparation for the workshops and hadn’t found any alternatives to the standard approaches; further research since then still hasn’t turned up anything else of substance. Channel allocation and brand value are inherently complex and there just are no simple ways to measure them.

    The best I could suggest was to use proxy data when a thorough analysis is not possible due to cost or data constraints. For channel allocation, the proxy might be incremental return on investment by channel: switching funds from low ROI to high ROI channels doesn’t really measure the impact of the change in marketing mix, but it should lead to an improvement in the average level of performance. Similarly, surveys to measure changes in customer attitudes toward a brand don’t yield a financial measure of brand value, but do show whether it is improving or getting worse. Some compromise is unavoidable here: companies not willing or able to invest in a rigorous solution must accept that their answers will be imprecise.

    This round of answers was little better received than the first. Even ROI and customer attitudes are not always available, and they are particularly hard to measure in multi-channel environments where the result of a particular marketing effort cannot easily be isolated. You can try still simpler measures, such as spending or responses for channel performance or market share for brand value. But these are so far removed from the original question that it’s difficult to present them as meaningful answers.

    The other approach I suggested was testing. The goal here is to manufacture data where none exists, thereby creating something to measure. This turned out to be a key concept throughout the performance measurement discussions. Testing also shows that marketers are at least doing something rigorous, thereby helping satisfy critics who feel marketing investments are totally arbitrary. Of course, this is a political rather than analytical approach, but politics are important. The final benefit of testing is it gives a platform for continuous improvement: even though you may not know the absolute value of any particular marketing effort, a test tells whether one option or another is relatively superior. Over time, this allows a measurable gain in results compared with the original levels. Eventually it may provide benchmarks to compare different marketing efforts against each other, helping with both channel allocation and brand value as well.

    Even testing isn’t always possible, as my students were quick to point out. My answer at that point was simply that you have to seek situations where you can test: for example, Web efforts are often more measurable than conventional channels. Web results may not mirror results in other channels, because Web customers may themselves be very different from the rest of the world. But this again gets back to the issue of doing the best with the resources at hand: some information is better than none, so long as you keep in mind the limits of what you’re working with.

    I also suggested that testing is more possible than marketers sometimes think, if they really make testing a priority. This means selecting channels in part on the basis of whether testing is possible; designing programs so testing is built in; and investing more heavily in test activities themselves (such as incentives for survey participants). This approach may ultimately lead to a bias in favor of testable channels—something that seems excessive at first: you wouldn’t want to discard an effective channel simply because you couldn’t test it. But it makes some sense if you realize that testable channels can be improved continuously, while results in untestable channels are likely to stagnate. Given this dynamic, testable channels will sooner or later become more productive than untestable channels. This holds even if the testable channels are less efficient at the start.

    I offered all these considerations to my students, and may have seen a few lightbulbs switch on. It was hard to tell: by the time we had gotten this far into the discussion, everyone was fairly tired. But I think it’s ultimately the best advice I could have given them: focus on testing and measuring what you can, and make the best use possible of the resulting knowledge. It may not directly answer your immediate questions, but you will learn how to make the most effective use of your marketing resources, and that’s the goal you are ultimately pursuing.

    Thursday, August 30, 2007

    Marketing Performance Involves More than Ad Placement

    I received a thoughtful e-mail the other day suggesting that my discussion of marketing performance measurement had been limited to advertising effectiveness, thereby ignoring the other important marketing functions of pricing, distribution and product development. For once, I’m not guilty as charged. At a minimum, a balanced scorecard would include measures related to those areas when they were highlighted as strategic. I’d further suggest that many standard marketing measures, such as margin analysis, cross-sell ratios, and retail coverage, address those areas directly.

    Perhaps the problem is that so many marketing projects are embedded in advertising campaigns. For example, the way you test pricing strategies is to offer different prices in the marketplace and see how customers react. Same for product testing and cross-sales promotions. Even efforts to improve distribution are likely to boil down to campaigns to sign up new dealers, train existing ones, distribute point of sale materials, and so on. The results will nearly always be measured in terms of sales results, exactly as you measure advertising effectiveness.

    In fact, since everything is measured by running it through an advertising campaign and recording the results, the real problem may be how to distinguish “advertising” from the other components of the marketing mix. In classic marketing mix statistical models, the advertising component is represented by ad spend, or some proxy such as gross rating points or market coverage. At a more tactical level, the question is the most cost-effective way to reach the target audience, independent of the message content (which includes price, product and perhaps distribution elements, in addition to classic positioning). So it does make sense to measure advertising effectiveness (or, more precisely, advertising placement effectiveness) as a distinct topic.

    Of course, marketing does participate in activities that are not embodied directly in advertising or cannot be tested directly in the market. Early-stage product development is driven by market research, for example. Marketing performance measurement systems do need to indicate performance in these sorts of tasks. The challenge here isn’t finding measures—things like percentage of sales from new products and number of research studies completed (lagging and leading indicators, respectively) are easily available. Rather, the difficulty is isolating the contribution of “marketing” from the contribution of other departments that also participate in these projects. I’m not sure this has a solution or even needs one: maybe you just recognize that these are interdisciplinary teams and evaluate them as such. Ultimately we all work for the same company, eh? Now let’s sing Kumbaya.

    In any event, I don’t see a problem using standard MPM techniques to measure more than advertising effectiveness. But it’s still worth considering the non-advertising elements explicitly to ensure they are not overlooked.

    Thursday, July 05, 2007

    Is Marketing ROI Important?

    You may have noticed that my discussions of marketing performance measurement have not stressed Return on Marketing Investment as an important metric. Frankly, this surprises even me: ROMI appears every time I jot down a list of such measures, but it never quite fits into the final schemes. To use the categories I proposed yesterday, ROMI isn’t a measure of business value, of strategic alignment, or of marketing efficiency. I guess it comes closest to the efficiency category, but the efficiency measures tend to be more simple and specific, such as a cost per unit or time per activity. Although ROMI could be considered the ultimate measure of marketing efficiency, it is too abstract to fit easily into this group.

    Still, my silence doesn’t mean I haven’t been giving ROMI much thought. (I am, after all, a man of many secrets.) In fact, I spent some time earlier this week revisiting what I assume is the standard work on the topic, James Lenskold’s excellent Marketing ROI. Lenskold takes a rigorous and honest view of the subject, which means he discusses the challenges as well as the advantages. I came away feeling ROMI faces two major issues: the practical one of identifying exactly which results are caused by a particular marketing investment, and the more conceptual one of how to deal with benefits that depend in part on future marketing activities.

    The practical issue of linking results to investments has no simple solution: there’s no getting around the fact that life is complex. But any measure of marketing performance faces the same challenge, so I don’t see this as a flaw in ROMI itself. The only thing I would say is that ROMI may give an illusion of precision that persists no matter how many caveats are presented along with the numbers.

    How to treat future, contingent benefits is also a problem any methodology must face. Lenskold offers several options, from combining several investments into a single investment for analytical purposes, to reporting the future benefits separately from the immediate ROMI, to treating investments with long-term results (e.g. brand building) as overhead rather than marketing. Since he covers pretty much all the possibilities, one of them must be the right answer (or, more likely, different answers will be right in different situations). My own attitude is this isn’t something to agonize over: all marketing decisions (indeed, all business decisions) require assumptions about the future, so it’s not necessary to isolate future marketing programs as something to treat separately from, say, future product costs. Both will result in part from future business decisions. When I calculate lifetime value, I certainly include the results of future marketing efforts in the value stream. Were I to calculate ROMI, I’d do the same.

    So here's what it comes down to. Even though I'm attracted to the idea of ROMI, I find it isn't concrete enough to replace specific marketing efficiency measures like cost per order, but is still too narrow to provide the strategic insight gained from lifetime value. (This applies unless you define ROMI to include the results of future marketing decisions, but then it's really the same as incremental LTV.)

    Now you know why ROMI never makes my list of marketing performance measures.