Tuesday, July 29, 2008

How Do You Classify Demand Generation Systems?

I’ve been pondering recently how to classify demand generation systems. Since my ultimate goal is to help potential buyers decide which product to purchase, the obvious approach is to first classify the buyers themselves and then determine which systems best fit which group. Note that while this seems obvious, it’s quite different from how analyst firms like Gartner and Forrester set up their classifications. Their ratings are based on market positions, with categories such as “leaders”, “visionaries”, and “contenders”.

This approach has always bothered me. Even though the analysts explicitly state that buyers should not simply limit their consideration to market “leaders”, that is exactly what many people do. The underlying psychology is simple: people (especially Americans, perhaps) love a contest, and everyone wants to work with a “leader”. Oh, and it’s less work than trying to understand your actual requirements and how well different systems match them.

Did you detect a note of hostility? Indeed. Anointing leaders is popular but it encourages buyers to make bad decisions. This is not quite up there with giving a toddler a gun, since the buyers are responsible adults. But it could, and should, be handled more carefully.

Now I feel better. What was I writing about? Right--classifying demand generation systems.

Clearly one way to classify buyers is based on the size of their company. Like the rich, really big firms are different from you and me. In particular, really big companies are likely to have separate marketing operations in different regions and perhaps for different product lines and customer segments. These offices must work on their own projects but still share plans and materials to coordinate across hundreds of marketing campaigns. They need fine-grained security so the groups don't accidentally change each other's work. Large firms may also demand an on-premises rather than externally hosted solution, although this is becoming less of an issue.

So far so good. But that's just one dimension, and Consultant Union rules clearly state that all topics must be analyzed in a two-dimensional matrix.

It’s tempting to make the second dimension something to do with user skills or ease of use, which are pretty much two sides of the same coin. But everyone wants their system to be as easy to use as possible, and what’s possible depends largely on the complexity of the marketing programs being built. Since the first dimension already relates to program complexity, having ease of use as a second dimension would be largely redundant. Plus, what looks hard to me may seem simple to you, so this is something that’s very hard to measure objectively.

I think a more useful second dimension is the scope of functions supported. This relates to the number of channels and business activities.

- As to channels: any demand generation system will generate outbound emails and Web landing pages, and send the resulting leads to a sales automation system. For many marketing departments, that’s plenty. But some systems also support outbound call centers, mobile (SMS) messaging, direct mail, online chat, and RSS feeds. Potential buyers vary considerably in which of these channels they want their system to support, depending on whether they use them and how happy they are with their current solution.

- Business activities can extend beyond the core demand generation functions (basically, campaign planning, content management and lead scoring) to the rest of marketing management: planning, promotion calendars, Web analytics, performance measurement, financial reporting, predictive modeling, and integration of external data. Again, needs depend on both user activities and satisfaction with existing systems.

Scope is a bit tricky as a dimension because systems will have different combinations of functions, and users will have different needs. But it’s easy enough to generate a specific checklist of items for users to consult. A simple count of the functions supported will give a nice axis for a two-dimensional chart.
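The checklist-and-count idea can be sketched in a few lines of code. This is a minimal illustration only: the checklist entries are drawn from the channels and business activities mentioned above, but the vendor names and their feature sets are entirely made up for the example.

```python
# Sketch: scoring systems on the "scope of functions" axis by counting
# how many checklist items each one supports. Checklist items come from
# the channels and activities discussed above; vendor data is hypothetical.

CHECKLIST = [
    # channels
    "email", "landing pages", "sales automation sync",
    "outbound call center", "SMS", "direct mail", "online chat", "RSS",
    # business activities
    "campaign planning", "content management", "lead scoring",
    "promotion calendar", "web analytics", "predictive modeling",
]

def scope_score(supported):
    """Count the checklist functions a system supports."""
    return sum(1 for item in CHECKLIST if item in supported)

# Hypothetical systems with hypothetical feature sets:
systems = {
    "System A": {"email", "landing pages", "sales automation sync", "SMS"},
    "System B": {"email", "landing pages", "sales automation sync",
                 "direct mail", "web analytics", "lead scoring"},
}

for name, features in systems.items():
    print(name, scope_score(features))  # one point on the "scope" axis
```

A buyer could weight the checklist items by their own priorities instead of counting them equally, which addresses the objection that users have different needs.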

So that’s my current thinking on the subject: one dimension measures the ability to coordinate distributed marketing programs, and the other measures the scope of functions provided. Let me know if you agree or what you'd propose as alternatives.


Landon Ray said...

Yeah, this is a bit better than the magic quadrant or whatever... but it also seems like a sort of glorified features table. And I think you're right that features don't get to the heart of the matter for most companies.

Although this'll never get done, I'd suggest a Pepsi challenge. Er, rather, an automation-off.

Three events: small-biz, midsized, and enterprise. The difference being the complexity of the marketing system to be created. That is, small-biz includes email and direct mail, midsized adds landing pages, voice broadcast and ROI tracking, enterprise adds campaign planning, SMS, predictive modeling, MRM.

All players with experienced jockeys, a multi-step multi-media campaign for all to create, a stop watch, and judges.

Because, really, a good chunk of what we do is supposed to make life easier for marketers. The other chunk is that we're supposed to make previously impossible lead management systems both possible and doable by your average marketing person - not a tech guy.

So, one factor would be 'speed to complete'. Then the judges would weigh in on the completeness of the system built, the depth of contingency planning, and any other goodies that were thrown in. Maybe a report or two.

Just like Olympic gymnastics... the difficulty multiplies the score, and a longer time reduces it somehow.

That would be the ranking system I'd like to see.

While we're at it, I wouldn't mind seeing the same thing for CRM systems...



David Raab said...

Thanks Landon. You're right that it's a features list. The trick is to find features that determine whether a system is suitable for a particular set of marketers.

The bake-off approach is very hard to pull off. But I've seen it done, and it's less helpful than you might think. Finding that one system lets you work, say, 20% faster than another is just one factor in a decision. And trying to come up with a single number that combines speed with program quality is even worse, since it assumes everyone has the same needs to begin with.

No matter how hard people try to avoid it, they really must identify their own requirements, weight those by their own priorities, and judge individual systems against them.

Incidentally, I'm about to add a longer and somewhat more formal version of this post to my article archive at www.archive.raabassociatesinc.com.