Friday, October 15, 2010

Fractional Response Attribution is Worse Than Nothing

Summary: Should companies apply fractional revenue attribution when more sophisticated methods are impractical? I think not: it gives inaccurate results that could lead to bad decisions. Better to avoid financial measures altogether if you can't do them properly.

I spent most of the past week in San Francisco at overlapping conferences for the Direct Marketing Association and Marketo. My Marketo presentation was based on the marketing measurement white paper I recently wrote for them, which argues that measurement should be based on tracking buyers through stages in the purchase process. One corollary is that you shouldn’t attribute fractions of revenue to different marketing touches. The analogy I’m currently using is baking a cake – it doesn’t make sense to assign partial credit for the final flavor to different ingredients: the recipe as a whole either works or doesn’t. Only testing can determine the impact of making changes.

Given this mindset, I was more than a little surprised to attend a DMA panel discussion where two of the more sophisticated marketing measurement vendors described their systems as providing fractional attribution. Both vendors also offer more advanced methods and both made clear that they used such methods in appropriate situations. But they seemed to feel that when adequate data is not available, fractional attribution is better than nothing.

I certainly understand their attitude. Many of the business-to-business marketers at the Marketo conference have exactly this problem: their data volumes are too small to accurately measure the incremental impact of most marketing programs. The best suggestion I can make is that they run whatever tests their volumes make practical. I’d further suggest that testing may actually be more practical than they realize if they actively and creatively look for opportunities to do it.
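
To make that concrete, here's the arithmetic behind a simple holdout test, sketched in Python with made-up numbers: withhold the program from a random control group and compare conversion rates.

```python
# Minimal holdout test: compare conversion rates between a randomly assigned
# test group (received the program) and a control group (did not).
def incremental_lift(test_conversions, test_size, control_conversions, control_size):
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    return test_rate - control_rate

# Made-up numbers: 60 of 1,000 tested prospects converted versus 45 of 1,000
# held out, so the program added about 1.5 percentage points of lift.
print(incremental_lift(60, 1000, 45, 1000))  # about 0.015
```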

But, again, the vendors on my panel knew this. The examples they gave were situations where companies had previously attributed all marketing revenue to the “last touch” before an actual purchase or other conversion event. They used fractional attribution to help people (marketers and those who fund them) see that other contacts also contribute to those final results. The practical goal was to justify funding for early-stage programs, such as search engine optimization and display advertising, that precede the “last touch” itself.
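
For readers who haven't seen these systems side by side, here's a minimal sketch of the difference, using an invented $1,000 deal and the equal-weights flavor of fractional attribution (real systems may weight touches differently):

```python
# One closed deal worth $1,000, preceded by three marketing touches in order.
touches = ["display ad", "search click", "webinar"]
revenue = 1000.0

# Last-touch attribution: the final touch gets all the credit.
last_touch = {t: 0.0 for t in touches}
last_touch[touches[-1]] = revenue

# Fractional attribution with equal weights: every touch gets the same share.
fractional = {t: revenue / len(touches) for t in touches}

print(last_touch)  # webinar gets the full $1,000; earlier touches get nothing
print(fractional)  # each touch gets about $333
```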

I’m all in favor of recognizing that early-stage contacts have value. But I still feel that assigning a fundamentally arbitrary financial value to those contacts is a mistake. The main danger is that people who don’t know any better may use these numbers to allocate marketing funds to the supposedly more “productive” uses. These figures simply aren’t accurate enough to support such decisions.

I’d rather use non-monetary measures such as correlations between different kinds of touches and ultimate results. These can highlight the connections between early and later touches without providing financial values that are easily misapplied. Maybe this is just wishful thinking, but perhaps refusing to provide unreliable financial metrics will even highlight the need for tests that can provide truly meaningful ones, thus helping marketers to make the necessary investments.
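
As a minimal sketch of the kind of non-monetary measure I mean, with invented data: report the correlation between receiving a given early-stage touch and eventually converting, rather than a dollar figure.

```python
# Invented data: 1 = buyer received an early-stage touch / eventually converted.
from statistics import correlation  # available in Python 3.10+

early_touch = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
converted   = [1, 1, 0, 0, 0, 0, 1, 0, 1, 0]

# A positive correlation says the early touch tends to go with conversion --
# a useful directional signal, without pretending it's a dollar value.
print(correlation(early_touch, converted))
```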

So what do you think: is fractional revenue attribution a reasonable compromise or a harmful distraction? Let me know your thoughts.

4 comments:

  1. David, I certainly tend to agree with your conclusion that fractional attribution is undesirable, but on that basis I'm not clear how early-stage responses can be properly credited. Given that last-touch attribution is obviously a discredited approach, what's the best alternative? Interested to know what you think.

  2. The only way to get a meaningful estimate of the impact of an early-stage program (or any other program) is to test what happens when you eliminate it. Many people hate to hear that because they feel they don't have the volume of data necessary for a test or their systems can't measure tests properly. Fair enough -- but it's better to at least recognize the problem and do nothing than to apply fractional attribution and get a clearly wrong result.

    I'll go further: some might argue that fractional attribution provides an approximately correct answer which is better than nothing. I disagree, at least if you're going to use the results to allocate investment among programs. Fractional attribution based on fixed (and arbitrary) weights will not adjust when actual results change, so it won't tell you which programs are working better or worse than expected. This makes it a very poor guide to action.

    Case in point: I was speaking yesterday with a vendor who described a sophisticated fractional attribution method that recalculated credits every night, based on all touches and all responses for each individual. Sounds good, but think about it: if you add a totally useless (or actively harmful) program, it still gets a fraction of the credit and REDUCES the credits earned by other programs whose actual value hasn't changed at all. Thus, you get exactly the wrong signal. The same objection applies if you add a fabulously effective program, which doesn't get anywhere near the credit it deserves but does increase the apparent value of other programs. Explain to me again why we're doing this?
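
    Here's that effect as a minimal sketch, assuming equal weights and a made-up $1,000 deal:

    ```python
    # Equal-weight fractional attribution over one $1,000 deal.
    def fractional_credit(programs, revenue=1000.0):
        return {p: revenue / len(programs) for p in programs}

    print(fractional_credit(["SEO", "email", "webinar"]))
    # each of the three programs gets about $333

    # Now add a touch from a completely useless program: the deal closes exactly
    # as before, but every existing program's credit drops to $250 -- the model
    # sends the wrong signal about programs whose real value hasn't changed.
    print(fractional_credit(["SEO", "email", "webinar", "useless banner"]))
    ```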

  3. David-
    I agree that it is challenging to manage campaigns based on fractional attribution, given the lack of integration across data sources and channels and the absence of real-time reporting. However, isn't it still valuable to identify the upstream touch points and assign value to them? Especially in this economic climate, where advertisers want to understand the return on each dollar and reduce spend where there is no return? At least from a budget allocation standpoint one can invest appropriately to drive additional conversions and improve overall performance. Couldn't you use fractional attribution as a method to pull levers for testing to define optimal budget allocation? Thoughts?

  4. I think it's important to track upstream touchpoints. Some vendors report on this in terms of "influenced" as opposed to "attributed" deals. But if you can't measure the actual impact of the touchpoints -- which requires testing or advanced statistical modeling -- then reporting an arbitrary figure is just making things worse. The client won't like to hear that, but it's another argument to convince them to do some testing.

    At best (worst?), you could do a static analysis that estimates ROI based on your arbitrary assumptions, but it's almost guaranteed that they'll misinterpret what you show them.
