Wednesday, June 25, 2008

More Blathering About Demand Generation Software

When I was researching last week’s piece on Market2Lead, one of the points that vendor stressed was their ability to create a full-scale marketing database with information from external sources to analyze campaign results. My understanding of competitive products was that they had similar capabilities, at least hypothetically, so I chose not to list that as a Market2Lead specialty.

But I recently spoke with on-demand business intelligence vendor LucidEra, who also said they had found that demand generation systems could not integrate such information. They even cited one demand generation vendor that had turned to them for help. (In fact, LucidEra is releasing a new module for lead conversion analysis today to address this very need. I plan to write more about LucidEra next week.)

Yet another source, Aberdeen Group’s recent study on Lead Prioritization and Scoring: The Path to Higher Conversion (free with registration, for a limited time), showed that linking marketing campaigns to closed deals was the least commonly available of all the key capabilities required for effective lead management. Just 64% of the best-in-class companies had this capability, even though 92% had a lead management or demand generation system.

Quite frankly, these results baffle me, because every demand generation vendor I’ve spoken with has the ability to import data from sales automation systems. Perhaps I’ve missed some limit on exactly what kind of data they can bring back. I’ll be researching this in more detail in the near future, so I’ll get to the bottom of it fairly soon.

In the meantime, the Aberdeen report provided some other interesting information. Certainly the overriding point was that technology can’t do the job by itself: business processes and organizational structures must be in place to ensure that marketing and sales work together. Of course, this is true about pretty much any technology, but it’s especially important with lead management because it crosses departmental boundaries. Unfortunately, this is also a rather boring, nagging, floss-your-teeth kind of point that isn’t much fun to discuss once you’ve made it. So, having tipped our hat to process, let’s talk about technology instead.

I was particularly intrigued at what Aberdeen found about the relative deployment rates for different capabilities. The study suggests—at least to me; Aberdeen doesn’t quite put it this way—that companies tend to start by deploying a basic lead management platform, followed by lead nurturing programs, and then adding lead prioritization and scoring. These could all be done by the same system, so it’s less a matter of swapping software as you move through the stages than of making fuller use of the system.

If you accept this progression, then prioritization and scoring is at the leading edge of lead management sophistication. Indeed, it is the least common of the key technologies that Aberdeen lists, in place at just 77% of the best-in-class companies. (Although the 64% figure for linking campaigns to closed deals is lower, Aberdeen lists that under performance measurement, not technology.) Within lead scoring itself, Aberdeen reports that customer-provided information such as answers to survey questions is used more widely than inferred information such as Web site behavior. Aberdeen suggests that companies will add inferred data, and in general make their scoring models increasingly complex, as they grow in sophistication.
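To make the explicit-versus-inferred distinction concrete, here is a minimal scoring sketch. The attribute names and point weights are purely my own illustrative assumptions, not any vendor’s actual model; real systems layer on decay, thresholds, and campaign-specific rules.

```python
# Hypothetical lead-scoring sketch: combines customer-provided ("explicit")
# answers with inferred behavioral signals. All field names and weights
# are illustrative assumptions, not any vendor's actual model.

EXPLICIT_WEIGHTS = {            # from survey/form answers
    "budget_confirmed": 30,
    "decision_maker": 20,
}
INFERRED_WEIGHTS = {            # from observed web behavior
    "pricing_page_visit": 15,
    "whitepaper_download": 10,
    "email_click": 5,
}

def score_lead(lead: dict) -> int:
    """Sum the weights of every attribute the lead exhibits."""
    score = 0
    for attr, weight in {**EXPLICIT_WEIGHTS, **INFERRED_WEIGHTS}.items():
        if lead.get(attr):
            score += weight
    return score

lead = {"budget_confirmed": True, "pricing_page_visit": True, "email_click": True}
print(score_lead(lead))  # 30 + 15 + 5 = 50
```

The point Aberdeen makes maps directly onto the two weight tables: a company starting out might populate only the first, then add the second as it matures.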

This view of inferred data in particular and scoring models in general as leading edge functions is important. Many of the demand generation vendors I’ve spoken with are putting particular stress on these areas, both in terms of promoting their existing capabilities and of adding to them through enhancements. In doing this, they are probably responding to the demands of their most advanced customers—a natural enough reaction, and one that is laudably customer-driven. But there could also be a squeaky wheel problem here: vendors may be reacting to a vocal minority of existing customers, rather than a silent majority of prospects and less-advanced clients who have other needs. Weaknesses in campaign results reporting, external data integration and other analytics are one area of possible concern. General ease of use and customization could be another.

In a market that is still in its very early stages, the great majority of potential buyers are still quite unsophisticated. It would be a big mistake for vendors to engage in a typical features war, adding capabilities to please a few clients at the cost of adding complexity that makes the system harder for everyone else. Assuming that buyers can accurately assess their true needs—a big if; who isn’t impressed by bells and whistles?—adding too many features would harm the vendors’ own sales as well.

The Aberdeen report provides some tantalizing bits of data on this issue. It compares what buyers said was important during the technology assessment with what they decided was important after using the technology. But I’m not sure what is being reported: there are five entries in each group (the top five, perhaps?), of which only “customizable solution” appears in both. The other four listed for pre-purchase were: marketing maintained and operated; easy to use interface; integration with CRM; and reminders and event triggers. The other four for post-purchase were: Web analytics; lead scoring flexibility; list segmentation and targeting; and ability to automate complex models.

The question is how you interpret this. Did buyers change their minds about what mattered, or did their focus simply switch once they had a solution in place? I’d guess the latter. From a vendor perspective, of course, you want to emphasize features that will make the sale. Since ease of use ranks in the pre-purchase group, that would seem to favor simplicity. But you want happy customers too, which means providing the features they’ll need. So do you add the features and try to educate buyers about why they’re important? Or do you add them and hide them during the sales process? Or do you just not add them at all?

Would your answer change if I told you, Monty Hall style, that there is little difference between best-in-class companies and everyone else on the pre-sales considerations, but that customization and list segmentation were much less important to less sophisticated customers in the post-sales ranking?

In a way, this is a Hobson’s choice: you can’t not provide the features customers need to do their jobs, and you don’t want them to start with you and switch to someone else. So the only question is whether you try to hide the complexity or expose it in all its glory. The latter would work for advanced buyers, but, at this stage in the market, those are few in number. So it seems to me that clever interface design, exposing just as many features as the customer needs at the moment, is the way to go.

4 comments:

Landon Ray said...

David,

Again, you're spot on with this analysis.. and the questions you pose.

Our answer? Build the functionality but skim over it in demos and marketing materials, unless the client digs for answers. Our sales people even have a quote from JustSell on their wall that says 'Details are for enthusiasts'.

This stuff is complicated. Once you start to 'get' the basics, all the 'next' features that you want and need become obvious. To not have them is not an option if you, as a company, aim to actually produce the promised benefits.

The choice about whether to talk about them at length in demos, blogs, etc.. seems to me to be a matter of who your target is. If you're Eloqua and are going after the biggest companies, isn't it safe to assume that the marketers at those companies know what they're getting into and can handle the truth?

Our experience with our larger clients suggests so. These guys walked in the door talking about the most sophisticated possible stuff, they knew what they needed to see, and once they saw it they came aboard.

But the biggest companies aren't our target. We're targeting small to mid-sized companies, and the level of understanding of 'what this stuff is all about' is just not at the same level.

It's incumbent on companies like ours to have easy to use interfaces, ability to launch quickly, and the ability to show results fast. Then, some percent of our client base gets the 'automation' bug, and want to go deep.

Your/Aberdeen's finding that companies can't do some of the basics, like track campaign results with external data, is sort of baffling. That lead scoring ranks so high on the post-purchase list is also surprising to me.

Frankly, (although we have fairly robust lead scoring that does consider both form responses AND behavioral data like web page visits, link clicks, downloads, email opens and clicks, etc.) I'm surprised to find most vendors promoting these features as THE killer app.

In my experience, it's simply not what makes the difference. Where the MAIN benefits come from, in my experience, are:

1. The ability to CREATE an INTEGRATED sales and marketing SYSTEM that's repeatable, automate-able, measurable, and bullet-proof.

The fact is that companies drop leads like crazy - either because they don't know any better, or just as often, by accident. There's BIG unclaimed money in most companies lead lists. Systems like ours stop them from dropping the ball.

2. Tracking what's working and what's not. Again, BIG money is flushed down the drain in most companies because marketers aren't clear what's working and where to focus their optimization efforts.

Is this ad pulling better than that one? Is the lifetime value of my Adwords leads better or worse than my tradeshow or print leads? Are my landing pages converting as well as they could? Are my emails and direct mail pieces pulling?

What we know, because we see it over and over, is that little tweaks make BIG differences in results. If you can stop doing the stuff that isn't working, and do more of what is, it matters... right away.

Lead scoring, the ability to show your sales folk who's on your site and what their behavioral history is, etc... those are nice to have, take-it-up-a-notch kind of features (that we DO offer!).. but I don't understand why a company that offers so much more important stuff would spend their time talking about lead scoring.

That's my two (or so) cents.

Landon Ray
CEO
OfficeAutopilot.com

David Raab said...

Thanks Landon. I think you're quite right about automated follow-up and promotion evaluation being the two main benefits. Especially the latter--it's always amazing to me how little companies know about the performance of their marketing efforts. This applies to large firms as well as small ones.

Incidentally, I'm not yet convinced that most demand generation systems cannot track campaign results with external data. I'm continuing to research the topic.

Landon Ray said...

Well, since we agree that ROI tracking (or promotion evaluation) is a top priority, let me ask you this:

You mentioned somewhere that one of our competitors offers the ability to give 'credit' to several touch points that may lead to a sale, presumably in some kind of algorithmic calculation that takes into consideration recency of the touch as well as the order. That is, if someone visits your tradeshow, then goes and looks you up on google and clicks your adwords ad... the tradeshow should derive most of the benefit for the visit/sale/action/whatever... but the adwords ad probably mattered too.

Anyway, we currently DON'T get so sophisticated with OfficeAutopilot, though one or two of our largest customers have mentioned that they'd use features like this if we offered them.

We've done quite a bit of thinking on the issue and have looked at various ways to get this right.. but frankly, they're all pretty scary from a usability point of view which, as I mentioned, is one of our chief concerns since we target small to mid-sized businesses.

So, what's your take on this issue? Have you seen it solved in a satisfactory way at all? How about in a way that's not totally mind-boggling to an average user?

And, how important do you think this part of the puzzle actually is? Is counting 'last action' as the source good enough?

So far, we think so. For two reasons: most of our clients aren't advertising EVERYWHERE. So, it's less likely that a prospect will bump into more than one ad. And, two, we don't think the extra work and management of developing and maintaining such a sophisticated system is worth it for most smaller companies.

Simplicity, I think, really is key.

Interested to hear your thoughts..

Landon

David Raab said...

This is a topic I deal with more often on my MPM Toolkit blog. The bottom line is, it's very difficult and expensive to build a remotely accurate picture of how much each element is contributing to marketing results. So a rigorous approach is simply not an option for most businesses.

This leaves you with two practical choices. One is to develop algorithms that represent your best guess at how credit should be distributed. I do think this can be done with a reasonably simple user interface.
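One way such a best-guess algorithm could look, purely as an illustration: weight each touch point by its position in the sequence, with earlier touches (the tradeshow in Landon’s example) weighted more heavily, then normalize the shares. The decay factor and touch names are hypothetical assumptions, not anyone’s shipping algorithm.

```python
# Illustrative sketch (not any vendor's actual algorithm) of distributing
# sale credit across multiple touch points. Earlier touches get larger
# shares via a geometric decay; shares are normalized to sum to 1.0.

def split_credit(touches: list, decay: float = 0.5) -> dict:
    """Assign each touch a raw weight of decay**position (position 0 =
    first touch), then normalize so all shares sum to 1.0. Assumes
    touch names in the list are distinct."""
    raw = [decay ** i for i in range(len(touches))]
    total = sum(raw)
    return {t: w / total for t, w in zip(touches, raw)}

shares = split_credit(["tradeshow", "adwords"])
print(shares)  # tradeshow gets 2/3 of the credit, adwords 1/3
```

A user interface for this could be as simple as one slider for the decay factor, which is why I think a reasonably simple presentation is achievable.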

The other is to continue crediting the first source, but to recognize that this is very inaccurate. This is what most people do, consciously or not. Otherwise, they would not spend on mass advertising (which is basically untrackable) or lead nurturing (which is never the first contact). The problem with this approach is that you cannot even pretend to measure the value of different efforts, which makes any kind of spending optimization impossible.

What I think makes the most sense is a funnel approach. Look at different programs as supporting different stages in the purchase process, and track their effectiveness at moving customers from one stage to the next. This avoids the arbitrary division of credit among all programs, while still measuring each program's relative performance. Relative performance is what you need to shift resources from poorly performing programs to better ones, which is why we care about performance measurement in the first place.
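The funnel approach can be sketched in a few lines: define the purchase stages, count the leads reaching each one, and compute stage-to-stage conversion rates per program. The stage names and counts below are hypothetical, chosen only to show the mechanics.

```python
# Minimal sketch of the funnel approach described above: measure each
# program by how well it moves leads from one stage to the next.
# Stage names and counts are hypothetical examples.

FUNNEL = ["inquiry", "qualified", "opportunity", "closed"]

def stage_conversion(counts: dict) -> dict:
    """Return the conversion rate from each funnel stage to the next."""
    rates = {}
    for earlier, later in zip(FUNNEL, FUNNEL[1:]):
        rates[f"{earlier}->{later}"] = counts[later] / counts[earlier]
    return rates

counts = {"inquiry": 1000, "qualified": 250, "opportunity": 50, "closed": 10}
print(stage_conversion(counts))
# {'inquiry->qualified': 0.25, 'qualified->opportunity': 0.2, 'opportunity->closed': 0.2}
```

Comparing these rates across programs aimed at the same stage gives exactly the relative performance measure needed to shift resources, without forcing an arbitrary split of credit for the final sale.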