Tuesday, July 10, 2012

The Marketing Funnel Is Dead. Let's Have Dessert.

Last week’s post on lead scoring attracted more positive attention than I expected. This was doubly surprising because first, I didn’t think lead scoring was such a hot topic and second, I don’t really agree with the approaches I described.

To clarify that second point, I’m not saying what I wrote was wrong or insincere. Rather, I consider it an accurate description of an approach I find problematic. The approach was using lead scoring as a way to define lead stages. My problem is the concept of lead stages themselves.

This verges on heresy, but I’m having an increasingly hard time with lead stages as a way to organize a marketing program. Of course, stages make perfect intuitive sense, and they’re ultimately based on the AIDA (Awareness, Interest, Desire, Action) model of the sales process that has been around for more than 100 years.*

But we all know in our heart of hearts that real buyers don’t follow such an orderly sequence. Indeed, there has been a fair amount of research questioning the validity of AIDA and similar “hierarchy of effects” models. The fundamental criticism is that decision making isn’t as rational as AIDA suggests because emotions play a much stronger part than AIDA allows. I’d also add – without a shred of empirical proof, thanks for asking – that B2B decision processes flit among stages in no particular sequence, depending on who asks what questions at any given moment. This randomness is abetted by the Internet, which makes information appropriate to all stages equally accessible on demand. But I suspect the process was always more chaotic than marketers cared to admit.

I’d further argue that buyers’ interests are especially fluid early in the purchase process, which is where marketers are involved. It may be more structured towards the end where salespeople can shepherd buyers through a defined set of stages. No, I don’t have any evidence for this either.

The point is this: if buyers don’t move through a fixed set of stages, then it doesn’t make sense to use lead scoring to determine which stage a buyer is at. Nor, for that matter, does it make sense to structure lead nurturing programs to lead (or follow) buyers from one stage to the next. As I said, heresy.

But any jackass can kick down a barn.** I wouldn't discard the funnel model without offering a better alternative – and by better, I specifically mean more effective at producing productive leads. Here’s my two-part modest proposal:

- within nurture programs, leads should be offered whatever materials they are most likely to select next, based on their recent behavior. This is exactly the same as offering customers the products they are most likely to buy (think Amazon’s book recommendations or Netflix’s movie suggestions) and it can be based on similar advanced predictive modeling technology. And, just as Amazon and Netflix offer more than one option, nurture programs should also offer several items – within limits, since too many choices can depress response. There’s an important humility in offering choices: it recognizes how poor we are at predicting what people want.

- for lead scoring, the goal is to predict which leads the sales force will like. I chose that word carefully – it’s not a question of whether sales will accept a lead, but whether they’ll decide it’s worth sustained effort. Yes, there could be a “like” button that lets sales rate the leads, but don’t be so literal-minded. It would be simpler and more effective to check how much activity sales has invested in the lead within, say, thirty days after receiving it. Leads that sales is working are, by definition, leads that sales thinks are worthwhile. Leads they don’t work should never have been sent to them. This approach doesn’t magically solve the problem of connecting marketing leads to sales results, but it’s easier than tying leads to actual revenue.
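The two proposals can be illustrated concretely. Here is a minimal Python sketch – the data, thresholds, and function names are all hypothetical, not any vendor’s API. The first function is a crude “people who chose X chose Y next” recommender; the second is the activity-within-thirty-days test of whether sales liked a lead.

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical data: each lead's content-selection history, oldest first.
HISTORIES = {
    "lead_a": ["intro_whitepaper", "pricing_guide"],
    "lead_b": ["intro_whitepaper", "case_study"],
    "lead_c": ["intro_whitepaper", "pricing_guide", "demo_video"],
}

def next_offers(recent_item, histories, k=3):
    """Recommend the k items most often chosen right after `recent_item`
    by other leads -- a bare-bones next-best-offer model."""
    followers = Counter()
    for items in histories.values():
        for prev, nxt in zip(items, items[1:]):
            if prev == recent_item:
                followers[nxt] += 1
    return [item for item, _ in followers.most_common(k)]

def sales_likes(activity_dates, sent_date, window_days=30, min_activities=2):
    """Treat a lead as 'liked' if sales logged at least `min_activities`
    touches within `window_days` of receiving it."""
    cutoff = sent_date + timedelta(days=window_days)
    worked = [d for d in activity_dates if sent_date <= d <= cutoff]
    return len(worked) >= min_activities
```

In a real system the recommender would use proper predictive modeling and the activity check would read CRM records; the point is only that both measures come from observed behavior, not from assigned stages.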

Of these two proposals, the first one is the more radical since it implies a change in the structure of nurture campaigns. Today, sequential campaigns are the gold standard and complex branching structures are the mark of sophistication. A campaign that just presented the most relevant materials would have a vastly simpler structure – essentially a big loop that kept coming back with more messages, which would only differ in which offers they included. The sophistication would lie in the offer selection, not the campaign logic. Lead scoring’s only role would be to run in the background and continuously assess whether a lead is ready to send to sales.
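To sketch that simpler structure – with hypothetical stub functions standing in for offer selection, message delivery, and the background score – the whole campaign reduces to one loop:

```python
def run_nurture(lead, select_offers, send, score,
                ready_threshold=0.5, max_cycles=12):
    """Loop-structured nurture campaign: no branching paths, just one cycle
    that re-selects offers each pass and exits when the background score
    says the lead is sales ready."""
    for _ in range(max_cycles):
        if score(lead) >= ready_threshold:
            return "send_to_sales"
        offers = select_offers(lead)  # the sophistication lives here
        send(lead, offers)
    return "recycle"
```

The threshold and cycle cap are arbitrary placeholders; the structural point is that the branching logic of today’s campaigns collapses into the `select_offers` call.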

Even this choice-based approach doesn’t fully discard a sequential model. You need something to help decide what kinds of content to create, and the most logical tool is the content matrix that marketers already use to ensure they have content for all personas at all buying stages. But while you’re still cooking a full range of dishes, you’re offering them as a buffet rather than a fixed-course dinner. If a customer wants to eat dessert first, why argue?


* Usually attributed to Elias St. Elmo Lewis in 1898, although there is some controversy.

** Sam Rayburn, although I bet he didn't originate it.

Sunday, July 01, 2012

3 Ways to Use Lead Scoring Within Your Marketing Automation Programs

I wrote last week about the difficulty of linking marketing leads to sales results. One reason the topic was on my mind is I’m also thinking a lot these days about lead scoring. The practical use of lead scoring is to decide which leads to pass from marketing automation to sales, or, even more pragmatically, to predict which leads will be accepted by sales.* But the ultimate goal is to identify the leads most likely to generate revenue. Building an accurate scoring model therefore requires an accurate view of how leads and revenue are connected.

For all the reasons I discussed last week, that lead-to-revenue connection is hard to make. This is one reason that most lead scoring projects focus instead on the criteria that salespeople use in judging which leads to accept. The other reason is that salespeople can decide which leads they’ll work on – so giving them what they want, regardless of whether it’s what they really need, is the key to lead scoring being considered a success.

Many companies today have inserted a phone call between marketing automation and the sales department, screening plausible leads before sending them to actual salespeople. This reduces the need for scoring accuracy because the phone call will clarify whether the lead is sales ready. Since the cost of a missed opportunity is much higher than the cost of a wasted phone call, scoring in this situation must simply find all leads with a reasonable chance of success.

In short, scoring programs face two scenarios:

- for scores that directly determine which leads are sent to sales, accuracy is needed but data on past results (necessary to build a good model) is scarce.

- for scores that determine which leads get a screening call, accuracy isn’t very important.

Perhaps this is why so few companies use lead scoring (just 19% in a recent MarketingSherpa study) and why the scoring models tend to be simplistic. Investment in more sophisticated techniques, such as statistically-based predictive models, is rarely worth the cost.

There is, however, another use for lead scoring: assigning leads to stages as they move through the marketing funnel.**

Conceptually, assigning leads to funnel stages is quite different from calculating their probability of making a purchase. A funnel stage is defined by meeting specific criteria such as BANT (budget, authority, need and timing) and engagement (downloading a paper or providing contact information). This is more like a checklist than a numeric score, although items like the number of specified behaviors may be calculated. Still, it's sometimes convenient to use score ranges as stage definitions.

In this context, scoring can be used in three ways.

- assign points directly to stage criteria. For example, imagine a three-stage funnel of Respondent (replied to an email), Qualified Respondent (meets BANT conditions) and Sales Ready Lead (demonstrates engagement). If the scoring rules give 100 points for a response, 100 points for meeting BANT criteria, and 100 points for demonstrating sufficient engagement, then people with 100 points are Respondents, people with 200 points are Qualified Respondents, and people with 300 points are Sales Ready Leads. This is a common approach, although it’s not much different from applying the same rules to classify leads directly.

- treat the score as a probability estimate of reaching the final goal (sales readiness, sales acceptance, or revenue). Under this approach, a Respondent might be someone with a goal probability of under 10%; a Qualified Respondent might have a goal probability of 10% to 50%; and a Sales Ready Lead might have a goal probability above 50%. This method avoids the need to define specific lead stage criteria, replacing them with objective predictive modeling methods that are likely to be more accurate.

- treat the score as a probability estimate of reaching the next stage (Respondent, Qualified Respondent, etc.). This retains the explicit stage criteria, which may help marketers visualize who is in each stage and how best to treat them. The predictive model provides additional segmentation within each stage, so marketers can focus their efforts on the most promising leads. Since linking leads to stage movement is easier than linking them to revenue, these predictive models are easier to build.
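To make the contrast concrete, here is a minimal Python sketch of the first two options, using the hypothetical three-stage funnel and thresholds from the examples above:

```python
# Option 1: additive points per stage criterion (100 points each).
CRITERIA_POINTS = {"responded": 100, "meets_bant": 100, "engaged": 100}
STAGE_BY_POINTS = {
    100: "Respondent",
    200: "Qualified Respondent",
    300: "Sales Ready Lead",
}

def stage_from_points(lead):
    """Map a lead's met criteria to a funnel stage via its point total."""
    points = sum(pts for crit, pts in CRITERIA_POINTS.items() if lead.get(crit))
    return STAGE_BY_POINTS.get(points, "Unstaged")

# Option 2: stage from a modeled probability of reaching the final goal.
def stage_from_probability(p_goal):
    """Bucket a goal probability into the same three stages."""
    if p_goal > 0.5:
        return "Sales Ready Lead"
    if p_goal >= 0.1:
        return "Qualified Respondent"
    return "Respondent"
```

The point values, thresholds, and stage names are illustrative only; in option 2 the probability itself would come from a predictive model rather than hand-assigned points.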

Today, most companies probably do a hybrid of the first and second options. That is, they assign points based on specified criteria (first option) but assign stages based on point ranges (second option). This combines the familiarity of criteria-based scoring rules with the convenience of numerical stage definitions, making it the easiest method available. But it is also doubly arbitrary, since neither the point values nor the range boundaries can be measured against an objective standard.

I’d suggest that marketers move towards a purer version of the second method, building statistical models that predict the final goal (revenue if available; sales acceptance or sales-ready lead criteria if not). Stage definitions can still be score ranges, but those ranges should be calibrated against existing stage criteria. Eventually, marketers may want to move toward the third method, with separate models for each stage. This makes it easier to focus on advancing leads from one stage to the next while retaining the rigor of a statistically based approach.


* For example, Marketo’s Definitive Guide to Lead Scoring defines lead scoring as “a shared sales and marketing methodology for ranking leads in order to determine their sales-readiness.”

**Eloqua’s Grande Guide to Lead Scoring puts it nicely: lead scoring “helps marketing and sales professionals identify where each prospect is in the buying process.”