Call me a cock-eyed optimist, but marketers may actually be getting better at buying software. Our research has long shown that the most satisfied buyers base their selection on features, not cost or ease of use. But feature lists alone are never enough: even if buyers had the knowledge and patience to precisely define their actual requirements, no set of checkboxes could capture the nuance of what it’s actually like to use a piece of software for a specific task. This is why experts like Tony Byrne at Real Story Group argue instead for defining key use cases (a.k.a. user stories) and having vendors demonstrate those. (If you really want to be trendy, you can call this a Clayton Christensen-style “job to be done”.)
In fact, use cases have become something of an obsession in their own right. This is partly because they are a way of getting concrete answers about the value of a system: when someone asks, “What’s the use case for system X?”, they’re really asking, “How will I benefit from buying it?” That’s quite different from the classic definition of a use case as a series of steps to achieve a task. It’s this traditional definition that matters when you apply use cases to system selection, since you want the use case to specify the features to be demonstrated. You can download the CDP Institute’s use case template here.
But I suspect the real reason use cases have become so popular is that they offer a shortcut past the swamp of defining comprehensive system requirements. Buyers in general, and marketers in particular, lack the time and resources to create complete requirements lists based on their actual needs (although they're perfectly capable of copying huge, generic lists that apply to no one). Many buyers are convinced it’s not necessary and perhaps not even possible to build meaningful requirements lists: they point to the old-school “waterfall” approach used in systems design, which routinely takes too long and produces unsatisfactory results. Instead, buyers correctly see use cases as part of an agile methodology that evolves a solution by solving a sequence of concrete, near-term objectives.
Of course, any agile expert will freely admit that chasing random enhancements is not enough. There also needs to be an underlying framework to ensure the product can mature without extensive rework. The same applies to software selection: a collection of use cases will not necessarily test all the features you’ll ultimately need. There’s an unstated but, I think, widely held assumption that use cases are a type of sampling technique: that is, that a system which meets the requirements of the selected use cases will also meet other, untested requirements. It’s a dangerous assumption. (To be clear: a system that can’t support the selected use cases is proven inadequate. So sample use cases do provide a valuable screening function.)
Consciously or subconsciously, smart buyers know that sample use cases are not enough. This may be why I’ve recently noticed a sharp rise in the use of proof of concept (POC) tests. Those go beyond watching a demonstration of selected use cases to actually installing a trial version of a system and seeing how it runs. This is more work than use case demonstrations but gives much more complete information.
Proof of concept engagements used to be fairly rare. Only big companies could afford to run them because they cost quite a bit in both cash (most vendors required some payment) and staff time (to set up and evaluate the results). Even big companies would deploy POCs only to resolve specific uncertainties that couldn’t be settled without a live deployment.
The barriers to POCs have fallen dramatically with cloud systems and Software-as-a-Service. Today, buyers can often set up a test system with just a few mouse clicks (although it may take several days of preparation before those clicks will work). As a result, POCs are now so common that they can almost be considered a standard part of the buying process.
Like the broader application of use cases, having more POCs is generally a good thing. But, also like use cases, POCs can be applied incorrectly.
In particular, I’ve recently seen several situations where POCs were used as an alternative to basic information gathering. The most frightening was a company that told me they had selected half a dozen wildly different systems and were going to do a POC with each of them to figure out what kind of system they really needed.
The grimace they didn’t see when I heard this is why I keep my camera off during Zoom meetings. Even if the vendors do the POCs for free, this is still a major commitment of staff time that won’t actually answer the question. At best, they’ll learn about the scope of the different products. But that won’t tell them what scope is right for them.
Another company told me they ran five different POCs, taking more than six months to complete the process, only to later discover that they couldn’t load the data sources they expected (but hadn’t included in their POCs). Yet another company let their technical staff manage a POC and declare it successful, only later to learn the system had been configured in a way that didn’t meet actual user needs.
You’re probably noticing a dreary theme here: there’s no shortcut for defining your requirements. You’re right about that, and you’re also right that I’m not much fun at parties. As to POCs, they do have an important role but it’s the same one they played when they were harder to do: they resolve uncertainties that can’t be resolved any other way.
For Customer Data Platforms, the most common uncertainty is probably the ability to integrate different data sources. Technical nuances and data quality are almost impossible to assess without actually trying to load each system. Since these issues have more to do with the data source than the CDP, this type of POC is more about CDP feasibility in general than CDP system selection. That means you can probably defer your POC until you’ve narrowed your selection to one or two options – something that will reduce the total effort, encourage the vendor to learn more about your situation, and help you to learn about the system you’re most likely to use.
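If you want an early read on data-source feasibility before committing to a full POC, even a quick profiling pass over a sample extract will surface many of the issues that derail loading later. Here’s a minimal sketch of that idea, assuming pandas and purely hypothetical file and column names; it’s an illustration, not anyone’s official process.

```python
# Hypothetical pre-POC profiling of one source extract (names are illustrative).
# Flags the issues that usually only show up when you try to load the data:
# missing values, duplicate keys, and dates that won't parse.
import pandas as pd

def profile_source(path, key_column="email", date_columns=("signup_date",)):
    df = pd.read_csv(path, dtype=str)
    report = {
        "rows": len(df),
        "null_rate_per_column": df.isna().mean().round(3).to_dict(),
        "duplicate_keys": int(df[key_column].duplicated().sum()),
    }
    for col in date_columns:
        parsed = pd.to_datetime(df[col], errors="coerce")
        # values that were present but could not be parsed as dates
        report[f"unparseable_{col}"] = int(parsed.isna().sum() - df[col].isna().sum())
    return report

if __name__ == "__main__":
    print(profile_source("crm_contacts_sample.csv"))
```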
The situation may be different with other types of software. For example, you might want to test a wide variety of predictive modeling systems if the key uncertainty is how well their models will perform. That’s closer to the classic multi-vendor “bake-off”. But beware of such situations: the more products you test, the less likely your staff is to learn each product well.
With a predictive modeling tool, it’s obvious that user skill can have a major impact on results. With other tools, the impact of user training on outcomes may not be obvious. But users who are assessing system power or usability may still misjudge a product if they haven’t invested enough time in learning it. Training wheels are good for beginners but get in the way of an expert. Remember that your users will soon be experts, so don’t judge a system by the quality of its training wheels.
This brings us back to my original claim. Are marketers really getting better at buying software? I’ll stand by that and point to broader use of tools like use cases and POCs as evidence. But I’ll repeat my caution that use cases and POCs must be used to develop and supplement requirements, not to replace them. Otherwise they become an alternate route to poor decisions rather than guideposts on the road to success.
Sunday, May 06, 2012
What Brain Research Teaches about Selecting Marketing Automation Software
I’m spending more time on airplanes these days, which means more time browsing airport bookshops. Since neither spy stories nor soft-core porn are to my taste, the pickings are pretty slim. But I did recently stumble across Jonah Lehrer’s How We Decide, one of several recent books that explain the latest scientific research into human decision-making.
Lehrer’s book shuttles between commonly-known irrationalities in human behavior – things like assigning a higher value to avoiding loss than achieving gain – and the less known (to me, at least) brain mechanisms that drive them. He makes a few key points, including the importance of non-conscious learning to drive everyday decisions (it turns out that people who can only make conscious, rational decisions are pretty much incapable of functioning), the powerful influence of irrelevant facts (for example, being exposed to a random number influences the price you’re willing to pay for an unrelated object), and the need to suppress emotion when faced with a truly unprecedented problem (because your previous experience is irrelevant).
These are all relevant to marketing, since they give powerful insights into ways to get people to do things. Indeed, it’s frightening to recognize how much this research can help people manipulate others to act against their interests. But good marketers, politicians, and poker players have always used these methods intuitively, so exposing them may not really make the world a more dangerous place.
In any event, my own dopamine receptors were most excited by research related to formal decision making, such as picking a new car, new house, or strawberry jam. Selecting software (or marketing approaches) falls into the same category. Apparently the research shows that carefully analyzing such choices actually leads to worse decisions than making a less considered judgment. The mechanism seems to be that people consider every factor they list, even the ones that are unimportant or totally irrelevant.
It's not that snap judgments are inherently better. The most effective approach is to gather all the data but then let your mind work on it subconsciously – what we normal folks call “mulling things over” – since the emotional parts of the brain are better at balancing the different factors than the rational brain. (I’m being horribly imprecise with terms like “emotional” and “rational”, which are shorthand for different processes in different brain regions. Apologies to Lehrer.)
As someone who has spent many years preparing detailed vendor analyses, I found this intriguing if unwelcome news. Since one main point of the book is that people rationalize opinions they’ve formed in advance, I’m quite aware that “deciding” whether to accept this view is not an objective process. But I also know that first impressions, at least where software is concerned, can’t possibly uncover all the important facts about a product. So the lesson I’m taking is the need to defer judgment until all factors have been identified and then to carefully and formally weight them so the irrelevant ones don’t distort the final choice.
As it happens, that sort of weighting is exactly what I’ve always insisted is important in making a sound selection. My process has been to have clients first list the items to consider and then assign them weights that add to 100%. This forces trade-offs to decide what’s most important. The next step is to score each vendor on each item. I always score one item at a time across all vendors, since the scores are inherently relative. Finally, I use the weights to build a single composite score for vendor ranking.
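For concreteness, here is a minimal sketch of that arithmetic, with made-up criteria, weights, and scores; the real value lies in the discussions that produce the numbers, not in the calculation itself.

```python
# Weighted composite scoring as described above (all names and numbers invented).
weights = {"data integration": 0.30, "campaign design": 0.25, "reporting": 0.20,
           "ease of use": 0.15, "vendor viability": 0.10}
assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must total 100%

scores = {  # score one item at a time across all vendors (1-10, relative)
    "Vendor A": {"data integration": 8, "campaign design": 6, "reporting": 7,
                 "ease of use": 5, "vendor viability": 9},
    "Vendor B": {"data integration": 6, "campaign design": 9, "reporting": 8,
                 "ease of use": 8, "vendor viability": 6},
}

# Composite = weighted average of item scores, used to rank vendors.
composite = {vendor: sum(weights[item] * item_scores[item] for item in weights)
             for vendor, item_scores in scores.items()}
for vendor, total in sorted(composite.items(), key=lambda kv: -kv[1]):
    print(f"{vendor}: {total:.2f}")
```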
In theory, the weighting reduces the impact of unimportant factors, setting the weights separately from the scoring avoids weights that favor a particular vendor, and calculating composite scores prevents undue influence by the first or last item reviewed. Whether things work as well as I’d like to believe, I can’t really say. But I can report three common patterns that seem relevant.
- the final winner often differs from the one I originally expected. This is the “horse race” aspect of the process and I think it means we’re successfully avoiding being stuck with premature conclusions.
- when the composite scores don’t match intuitive expectations, there’s usually a problem with the weights. I interpret this to mean that we’re listening to the emotional part of the brain and taking advantage of its insights.
- as scoring proceeds, one vendor often emerges as the consistent winner, essentially “building momentum” as we move towards a conclusion. I’ve always enjoyed this, since it makes for an easy final decision. But now I’m wondering whether we're making the common error of seeing patterns that don’t exist. Oh well, two out of three isn’t bad.
Perhaps I could reduce the momentum effect by hiding the previous scores when each new item is assessed. In any event, I’ve always felt the real value of this process was in the discussions surrounding the scoring rather than the scores themselves. As I said, the scores are usually irrelevant because the winner is apparent before we finish.
Still, having a clear winner doesn’t mean we made the right choice. The best I can say is that clients have rarely reported unpleasant surprises after deployment. We may not have made the best choice, but at least we understood what we were getting into.
I guess it’s no surprise that I’d conclude my process is a good one. Indeed, research warns that people see what they want to see (the technical term is “confirmation bias”; the colloquial term is “pride”). But I honestly don’t see much of an alternative. Making quick judgments on incomplete information is surely less effective, and gathering data without any formal integration seems hopelessly subjective. Perhaps the latter approach is what Lehrer’s research points to, but I’d (self-servingly) argue that software choices fall into the category of unfamiliar problems, which the brain hasn’t trained itself to solve through intuition alone.
Wednesday, January 25, 2012
Nimble Adds Social Data to CRM
I had an intriguing demonstration yesterday from social CRM vendor Nimble. Since “social CRM” could mean just about anything, it’s important to explain what Nimble actually does: it combines traditional contact management with automated access to social media information about those contacts.
That might not sound like much, but in practice it’s pretty darn slick.
Here’s how it works. Say you’re selling a product related to, oh, circuit boards. You can do a Twitter search for messages on that keyword, scan the Twitter profiles and Klout scores of people sending those messages, and push a button to add the interesting people to your contact list. Once you’ve added a contact, Nimble will automatically display their most recent Twitter, Facebook and LinkedIn activity every time you call up their record and let you send them messages through any of those products or by email. This is all done in the same system as traditional contact management activities: tagging contacts, assigning tasks and events, sending and receiving emails, tracking deals, building lists, searching your database, and managing your calendar.
Seamless combination of contact management with social media is a big deal: when I showed Nimble to a colleague who runs a public relations agency, her eyes lit up. From her perspective, having an immediate view of each contact’s social activity saved time, made it easier to tailor conversations to their interests, and let her reach them through their preferred medium. From a corporate perspective, it means the system contains data that salespeople didn’t enter manually – helping to overcome salespeople’s perpetual complaint that they enter data into corporate CRM systems without getting anything in return.
For the users themselves, Nimble has one more advantage: it lets them stick with familiar communication tools. Integrations are currently available for Outlook, Gmail, Google Calendar, MailChimp email, Wufoo web forms, and HubSpot marketing automation. An open API to import contacts from other sources is due by the end of February.
The email integrations copy messages from the external email systems into the Nimble activity history, where they're available for searches and list selection. The social media and HubSpot integrations also import contacts from those systems, but display messages and other data without storing them. Users do have the option to manually save individual social comments or assign tasks based on them.
Nimble plans to add more functions, including automated processes that could support multi-step nurture campaigns. But the company is focused on combining information from other systems, not replacing them. The relationship with HubSpot is especially intriguing, since HubSpot itself lacks a CRM component, making the two products highly complementary, and both companies target small-to-mid-size businesses. It’s also worth noting that Nimble recently announced a $1 million investment whose participants included Google Ventures, which is also a HubSpot investor, and HubSpot Co-Founder and CTO Dharmesh Shah.
A beta version of Nimble was released early last year. The system is currently available in a free personal edition limited to 3,000 contacts and a $15 per user per month multi-user edition allowing up to 30,000 contacts and some other advanced features. Nimble already has more than 25,000 users across all versions and has a network of more than 250 solution partners.
Friday, November 18, 2011
Marketing Vendor Selection: Trends You'll Need to Support
As I wrote yesterday, no one knows exactly what we’ll want from our marketing automation systems in the future. But it's still worth taking a guess at what looks likely. Here are some trends I expect will be important.
Social Media. The first wave of marketing automation features for social media is now several years old. These included making it easier to share emails and Web pages, tracking shares through embedded URLs, and monitoring social media conversations. The second wave is just starting. It includes more sophisticated features for working within social media platforms, such as delivering forms and personalized ads within Facebook, using social sign-on to capture more data, and building more detailed profiles based on activities, consumption, connections and influence. Beyond the execution technology itself, these features will require a substantial increase in analytical horsepower to make sense of the results.
Mobile. Many marketing automation vendors have added mobile interfaces for the marketers and salespeople who work with them. But the focus is now shifting to marketing campaigns that are delivered by mobile. The first change is to create standard materials in mobile-friendly formats. But this will soon be followed by more profound adjustments for touch screens, shorter view times, QR codes, special-purpose apps, gamification, social interactions, location awareness, and other mobile-specific possibilities. Third party developers will probably pioneer these capabilities, so look for marketing automation vendors who are good at integrating with outsiders and, eventually, have the money to acquire their technology.
Video. Plenty of video is used already in marketing promotions. It's particularly useful as a way to generate lots of content at relatively low cost. But marketing automation vendors haven’t built many special features to make video easier to use. One big need is better tagging to make video more search-friendly; others are better upload and content analysis to support user-generated content. This may be another area where marketing automation vendors rely on external developers rather than pioneering for themselves.
Benchmarks. This is a hot topic among vendors, both because clients love benchmarks and because there are now enough clients to supply sufficient data. Benchmarking requires standard definitions to allow comparisons across program types, funnel stages, responses, and industry groups. It also needs ways to present the information so marketers can easily understand it. Eventually, benchmark systems will start making recommendations on what to try next – although I've yet to see that happen.
Testing. Too few marketers have a rigorous testing program, and, perhaps for that reason, most marketing automation vendors have focused their energies elsewhere. This may be changing, as marketers see simple and effective testing in other areas like Web landing pages and paid search. Speaking as someone who trained in traditional direct marketing, where testing is an absolute religion, I can only hope so.
Automation. Let’s face it: most marketing automation today is still pretty darn manual. The automation I'm talking about here is having the system make choices so marketers don’t have to. Think about lead scoring, where the traditional approach is for a team of people to sit around a table and negotiate a set of scoring rules. An automated approach would eliminate that by using techniques like regression analysis to derive the formulas directly from the data. Other automated applications could be matching contents to user behavior and choosing the optimal timing for campaign messages. This type of automation is a way to overcome the skill shortage that has slowed the growth of the automation industry. In that sense, it’s an alternative, or at least a supplement, to better training (creating more skilled people) and easier interfaces (making the few skilled people more productive). Delivering this automation requires major investments in statistical technology, standardized definitions, and process monitoring to avoid the “sorcerer’s apprentice” problem of uncontrolled execution.
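As a hedged illustration of what “deriving the formulas directly from the data” might look like, the sketch below fits a logistic regression to historical lead outcomes and turns the predicted probabilities into scores. It assumes scikit-learn and invented behavioral features; a real implementation would need far more data, plus care about leakage, calibration, and monitoring.

```python
# Sketch: derive a lead score from history instead of negotiating rules by hand.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [email opens, web visits, form fills]; label 1 = lead became an opportunity.
X = np.array([[1, 2, 0], [5, 8, 1], [0, 1, 0], [7, 12, 2], [3, 4, 1], [0, 0, 0]])
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression().fit(X, y)

# Score new leads on a 0-100 scale from the predicted probability of conversion.
new_leads = np.array([[4, 6, 1], [1, 0, 0]])
scores = model.predict_proba(new_leads)[:, 1] * 100
print(scores.round(1))
```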
External data. Marketing automation systems are increasingly gathering data from external sources like social media, list compilers, and online behavior tracking. They’re also moving past CRM to tap other internal systems like accounting, manufacturing and order processing. This poses a major challenge for some marketing automation vendors, who didn't design their system for sources outside of CRM. It requires more flexible data models, APIs for smooth data exchange, and often a substantial increase in total data volume. More complex data also implies much higher implementation and maintenance costs, making marketing automation tougher to sell.
Pay per Result. This is the ultimate extension of external data: instead of buying information, marketers can just buy qualified leads directly. It's also another way to compensate for the skills shortage. Of course, some pay-per-lead programs have been around for years. But as marketers use them more aggressively, the marketing automation systems will need to get better at merging their inputs, identifying duplicates, estimating the value of new names, and analyzing long-term results.
Analytics. Many marketers claim they want better analysis but few have made the investment. Perhaps this will finally change as data becomes more widely available, CEOs press for clearer return on marketing investments, and the exploding complexity requires better measurement to keep marketing under control. We’re seeing two specific applications: revenue analytics that look beyond marketing to track the entire customer life cycle, and optimization to allocate resources across the many different marketing opportunities. Both require substantial investments in new data structures, reporting tools, visualization, dashboards, information distribution, and user management. Marketers who are serious about analytics need to look closely at which vendors have created the necessary foundations and will continue to build on them.
No one vendor will be on top of all these trends, and neither will any one marketer. My advice is to pick the areas you feel are most important and study what each prospective vendor can do in them today and has planned for the future. Beyond that, take a look at yesterday’s suggestions on finding a future-safe vendor, and pick one you feel reasonably comfortable will adapt to whatever tomorrow may bring.
Wednesday, April 13, 2011
Step-by-Step Guide to Selecting the Right Marketing Automation System - Part 2
Yesterday's post described the first three steps in Raab Associates' vendor selection process: defining requirements, researching options, and testing vendors against scenarios. This post lists the four steps needed to complete the task. As before, there's a worksheet for each step that can be a model for your own, more detailed version. And remember, the complete set is available for free in our Vendor Selection Workbook in the Resource Library at the Raab Guide Web site.
4. Talk To References
This is an often-overlooked source of insight. The question isn’t whether the references are happy, but whether your situations are similar enough that you’re likely to be happy as well. Find out whether the reference is using the system functions you care about, how long they took to get started, the amount of training and process change required, what problems they had, and how the vendor responded.
| Issue | Questions to ask |
|---|---|
| System fit vs. my needs | What kinds of programs do you run with the system? |
| | How many programs do you run each month? |
| | How many people at your company use the system? |
| System reliability | How often has the system been unavailable? |
| | What kinds of bugs have you run into? |
| Ease of use | How much training did you need to use the system? |
| | What kinds of tasks need outside help to accomplish? |
| | How long does it take to set up different kinds of programs? |
| Vendor support | How well does the vendor respond when you ask for help? |
| | How quickly do problems get solved? |
| | Does the vendor ever offer assistance before you ask? |
| | What help does the vendor provide with email deliverability? |
| Cost | Did you negotiate any special pricing? |
| | Did you pay extra for implementation and on-going support? |
| | Were there any unexpected costs after you started? |
5. Consider A Trial
Nearly all marketing automation vendors will let you try their system for a limited period. Trials are a great way to learn what it’s really like to use a system, but only if they are managed effectively. This means you need to invest in training and then set up and execute actual projects. As with scenario demonstrations, you may still rely on the vendor to handle some of the more demanding aspects of the project, but, again, make sure you see how hard it will eventually be to do them for yourself.
What you can learn from a trial:
- How hard it is to install the system
- How hard it is to set up a campaign
- How hard it is to make changes and reuse materials
- What features are available or missing (if you test them)
- Quality of training classes and materials (if you try them)

What you can’t learn from a trial:
- How the system handles large volumes of data, users, etc.
- Results from complex or long-running campaigns
- Accuracy of scoring and reports
- Quality of customer service and support
- Quality of vendor partners (agencies, integrators, etc.)
6. Make A Decision
Don’t let the selection process drag on. Selection is a means to an end, not a goal in itself. Unless you have very specialized needs, there are probably several marketing automation systems that will meet your requirements. Look at your key criteria and assess how well each vendor matches them – bearing in mind that a system can be too powerful as well as too simple. Once you’ve found one that you are confident will be sufficient, go ahead and buy it. Then you can start on what’s really important: better marketing results.
| Selection criteria | Key factors | Vendor fit: Too Little | Appropriate | Too Much |
|---|---|---|---|---|
| Functions | Outbound email | | | |
| | Landing page and forms | | | |
| | Web behavior tracking | | | |
| | Lead scoring | | | |
| | Multi-step campaigns | | | |
| | Sales integration | | | |
| | Reporting and analysis | | | |
| Usability | Easy to learn | | | |
| | Efficient to use | | | |
| Technology | Easy installation | | | |
| | Flexibility | | | |
| Cost | Direct (software and support) | | | |
| | Indirect (staff, training, services) | | | |
| | Predictable | | | |
| | Expansion costs | | | |
| Vendor | Staff resources | | | |
| | Product plans | | | |
| | Financial stability | | | |
7. Invest In Deployment
Marketing automation systems allow major improvements in marketing results. But those improvements require more than just a new system. If you don’t already have a formal description of the stages that prospects move through to become buyers, build one and instrument your systems to measure it. Use the stages as a framework to plan, design and develop a balanced set of marketing programs. Invest in the staff training and content to execute those programs successfully. Document and improve internal marketing processes. Work closely with sales to define lead scoring rules, hand-off mechanisms and service levels, and ways to capture results. Build measurement systems and use them to hold marketers at every level of the department responsible for results they control. Bring in outside resources, such as agencies and consultants, when you lack the internal expertise or time to do the work in-house.
| Goal | Tasks |
|---|---|
| Balanced set of marketing programs | Define lead lifecycle (buying process and buyer roles) |
| | Map existing programs to process stages and identify gaps |
| | Prioritize new programs to close gaps |
| | Execute programs and measure results |
| | Refine programs with versions for different segments |
| Measurement | Track leads through stages in the buying process |
| | Import revenue from sales systems |
| | Link revenue to lead source (acquisition programs) |
| | Measure incremental impact (nurture programs) |
| | Project future revenue from current lead inventory |
| Process management | Define processes to execute marketing programs |
| | Identify tasks and responsibilities within each process |
| | Define measures to capture task performance |
| | Assess existing processes and possible improvements |
| | Monitor execution, test improvements, check results, repeat |
| Sales alignment | Identify key contacts between sales and marketing |
| | Agree on process for lead qualification, transfer to sales |
| | Agree on measures for lead quality, revenue attribution |
| | Deploy agreed processes, monitor results, review regularly |
| Staff training | Define skills needed to deploy new system |
| | Assess existing staff skills and identify gaps |
| | Plan initial training to close gaps |
| | Plan on-going training to maintain and expand skills |
Tuesday, April 12, 2011
Step-by-Step Guide to Selecting the Right Marketing Automation System - Part 1
Choosing a marketing automation system is a major decision. A disciplined selection process is essential to make a sound selection. This series of posts presents the seven-step methodology we use at Raab Associates, along with related worksheets. The first three are below.
For a complete list of the steps, worksheets, and background materials, visit the Raab Guide Website and download the Vendor Selection Workbook from the Resource Library (registration required).
1. Define Requirements
Create a list of your goals in buying the system. Relate these to financial values when possible. Then define how you’ll use the system to meet these goals, being as specific as you can about the actual processes involved. Be sure to include processes beyond what you do already: one of the reasons you’re looking at marketing automation is to expand what your department can accomplish. Your requirements are based on the tasks you must perform to meet your goals.
| Goals | Related Requirements |
|---|---|
| Generate more leads | Manage online and offline advertising campaigns |
| | Import email address lists and send personalized emails |
| | Monitor and publish to social media |
| | Build and deploy landing pages to capture responses |
| | Use IP address to identify the company of Web site visitors |
| More effective nurturing | Capture the source and Web site activities of each visitor |
| | Create Web forms to gather information about visitors |
| | Score visitors based on form responses and Web behaviors |
| | Execute multi-step campaigns tailored to different groups |
| | Use visitor behavior to trigger campaigns and other actions |
| Better sales integration | Synchronize data between sales and marketing systems |
| | Send leads to sales based on lead score and actions |
| | Send alerts to sales based on Web site behaviors |
| | Report on revenue generated by leads from marketing |
| More efficient marketing operations | Store marketing materials and share across programs |
| | Track planned and actual costs of marketing programs |
| | Manage tasks and approvals during program development |
2. Research Your Options
Raab Associates’ B2B Marketing Automation Vendor Selection Tool (VEST) provides a good starting point for matching possible vendors to your requirements. In particular, match the scale and sophistication of your marketing operations to the different buyer segments used in the report. Bear in mind that company size alone doesn’t necessarily predict the depth of your requirements: small businesses can run complex marketing programs, and large business programs may be simple.
| Company Type | Key System Features |
|---|---|
| Micro-business | Outbound email and multi-step nurture campaigns |
| | Landing pages and forms |
| | Built-in sales and service features |
| | Built-in or integrate with third party ecommerce and shopping cart |
| Small to mid-size business | Outbound email and multi-step nurture campaigns |
| | Landing pages and forms |
| | Web site visitor tracking |
| | Lead scoring (one score per lead) |
| | Integrate with external sales automation system |
| Large business | Outbound email and multi-step nurture campaigns |
| | Landing pages and forms |
| | Web site visitor tracking |
| | Lead scoring (multiple scores per lead) |
| | Integrate with external sales automation system |
| | Manage marketing budgets, program tasks and approvals |
| | Add custom tables with data from many sources |
| | Limit different users to different tasks and programs |
3. Test Vendors Against Scenarios
Develop scenarios that describe actual marketing projects you expect to run through the system, and have the most promising vendors demonstrate how they would execute them. Scenarios based on your own needs are critical for understanding how well each system would function in your own environment. Be sure that some scenarios describe your more complicated processes, since these are most likely to highlight differences among systems. If vendor staff executes the scenarios for you, be sure to understand how much the vendor built in advance. This ensures that you get an accurate sense of the total work effort involved.
| Scenario | Steps |
|---|---|
| Outbound email campaign | Import list from CSV file, from Excel |
| | Compose personalized emails with embedded graphics |
| | Create landing page with data entry form |
| | Set automated email response to form submissions |
| | Set rules to score leads and send qualified leads to sales |
| | Report on results: sent, opened, clicked, completed form |
| Nurture campaign | Set start and end date for campaign |
| | Set rules to select leads, based on attributes and behaviors |
| | Set priority of campaign vs. other campaigns |
| | Define multi-step flow with wait periods between steps |
| | Set rules for different treatments for segments within steps |
| | Set rules to score leads and send qualified leads to sales |
| | Create emails, landing pages, and forms |
| | Report on results including leads to sales and revenue |
| Revenue reporting | Define stages in lead lifecycle |
| | Define rules to assign leads to lifecycle stages |
| | Report on movement of leads through lifecycle stages |
| | Set up process to import revenue from sales system |
| | Define rules to link revenue to campaigns |
| | Define rules to estimate incremental revenue per campaign |
| | Report on revenue generated per campaign |
| | Capture campaign costs |
| | Report on campaign revenue vs. campaign cost |
The next post in this series will present additional steps in our process.
Thursday, February 10, 2011
Which B2B Marketing Automation Systems Have Hard-to-Find Features? The Answers May Surprise You
Summary: A close look at which vendors have the least common features finds some are widely distributed, while others are concentrated among products for big companies. As always, you need to look at the details to see which products have what you need.
Last week’s post used data from our B2B Marketing Automation Vendor Selection Tool (VEST) to identify leading vendors in categories such as lead generation, campaign management, and technology. The main point, at least as I saw it, was that no single vendor dominates everything. Different firms are best at serving small, mid-size and large marketing organizations, and for each of these buyer types, different vendors lead different categories.
It’s like the awards ceremony at a progressive elementary school: everybody is best at something.
That may sound all warm and fuzzy, but this conclusion also has great practical significance: it means that you can’t assume a sector leader has the best product for your particular needs. You must look at the details to find the best match.
I cautioned in that post that you can’t stop at the category level. Even the vendor with the highest category score won’t necessarily have a feature you need. The chart below takes the analysis to the next level, looking at which vendors meet specific requirements. It lists the 33 least common items of the 190 we captured in our research. The green cells identify vendors who fully match an item (score=2); the orange column at the right shows how many of the 18 vendors had this score. (Yellow cells indicate a score of 1, meaning an item was partly fulfilled. I’ve ignored those cells in the following discussion but show them to make the point that many vendors do have some capabilities in these areas.)
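For anyone who wants to run the same kind of tally on their own score matrix, here is a made-up miniature (pandas assumed); it is not the actual VEST data, just the counting logic: how many vendors fully support each item, and which ones they are.

```python
# Toy vendor-by-item score matrix: 0 = no support, 1 = partial, 2 = full.
import pandas as pd

scores = pd.DataFrame(
    {"Vendor A": [2, 0, 1], "Vendor B": [2, 2, 0], "Vendor C": [1, 0, 2]},
    index=["multi-step campaigns", "on-premise deployment", "approval workflows"],
)

full_support = (scores == 2).sum(axis=1)     # vendors fully supporting each item
rarest = full_support.sort_values().head(2)  # the least common items
print(rarest)
for item in rarest.index:
    print(item, "->", list(scores.columns[scores.loc[item] == 2]))
```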
There is an obvious over-all pattern: the green cells cluster heavily towards the right. Since the chart is organized with small business systems to the left and big business systems to the right, this means the rarer features are most often found in products for big marketing departments. That makes intuitive sense – you’d expect bigger organizations to need more special features. (You'd also expect them to need more features in general, which the data confirms, although I haven't illustrated it here.) One caveat is that the item list itself leaned heavily towards the needs of mid-size and large businesses; a list of features tailored to small businesses might show a different pattern.
But while most green cells are on the right, there are plenty of exceptions. This is important: it means that buyers who need an unusual feature might find it in any type of system. Indeed, nearly half of these items (14 of the 33) can be found in the small business columns (the first five on the left). Nor is it simply that these items are small business specialties: twelve of the 14 also appear among the five big-business products on the right.
The other pattern clearly visible is the two heavily-populated columns in the center, representing mid-market leaders Eloqua and Marketo. The high number of green cells (seven for Marketo and nine for Eloqua) shows that both products are feature-rich. But they're far from twins: their combined green cells cover 13 different items, and there are just three items which both vendors satisfy fully. Once more, the moral of the story is that even direct competitors are quite different when you start looking at details. The good news is this means that buyers who know exactly what they want should be able to distinguish strong from weak candidates quite easily.
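For readers who keep their own vendor-by-item matrix, here is a rough sketch of how a tally like this could be produced. The vendor names, item names, and scores below are invented placeholders; the point is simply counting, for each item, how many vendors earn a 2 and then sorting to find the rarest.

```python
# Hypothetical vendor-by-item score matrix: 2 = fully supported,
# 1 = partly supported, 0 = not supported. Names and values are invented.
scores = {
    "Vendor A": {"online chat": 0, "marketing calendar": 0, "fax output": 2},
    "Vendor B": {"online chat": 2, "marketing calendar": 0, "fax output": 1},
    "Vendor C": {"online chat": 0, "marketing calendar": 2, "fax output": 2},
}

def rarest_items(scores, n=2):
    """Return the n items fully supported (score == 2) by the fewest vendors."""
    items = next(iter(scores.values())).keys()
    counts = {
        item: sum(1 for vendor in scores.values() if vendor[item] == 2)
        for item in items
    }
    return sorted(counts.items(), key=lambda pair: pair[1])[:n]

for item, count in rarest_items(scores):
    print(f"{item}: fully supported by {count} vendor(s)")
```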
Although the items I've listed are shared across all kinds of systems, the distribution isn't simply random. The chart below illustrates which kinds of features are found where. It summarizes the results for the three groups I’ve been discussing: small-business vendors (the five left-hand columns), big-business vendors (the five right-hand columns), and the Marketo/Eloqua combination. Green cells indicate at least one vendor in the group supports an item. Numbers indicate how many vendors support the item.

I’ve given yellow labels to items that are supported by big-business systems only. You'll see that most relate to advanced marketing planning and administration, which is something only big companies really need. These include detailed cost calculations, expiration dates on marketing content, project schedules, project task detail, results forecasts, approval workflows, marketing calendars, and plan vs. actual reporting. Several of the remaining items relate to advanced offer selection, another requirement for big programs because they have too many potential offers to manage manually or through simple rules. The rest, including multi-language user interface and on-premise deployment, also deal with needs unique to large enterprises. The only real odd-ball here is online chat. What can I say?
The blue labels mark items found in at least one small-business system. All of these are also available in at least one other category, which basically confirms that all kinds of marketers need them. Half relate to specific output channels: fax, RSS, social media, direct mail, email, external Web sites, and Webinars. My interpretation is that vendors of all sizes see the need to simplify multi-channel integration for their clients. The balance are advanced capabilities used by sophisticated marketers in all sizes of organizations.
The five remaining items, with white labels, are shared by mid-tier and big-business systems. Two relate to channel integration (social media and events) and three are generally big-business concerns (fractional revenue attribution, offer coordination, and user-defined matching rules). It’s interesting that the second group are the items shared by Marketo and Eloqua. Without reading too much into this, it suggests that both vendors are looking to the needs of larger rather than smaller companies.
As you’ve surely gathered by now, I find this data inherently intriguing. It's a way to understand the contours of the marketing automation industry. But most buyers just want to pick the right system. For them, this data simply reinforces that same central lesson: you must look at the details to find your best match. Still, that’s a lesson too few people have learned, so I’m perfectly happy to keep repeating it.
Tuesday, February 01, 2011
Picking Your Best Marketing Automation Vendor: One Size Won't Fit All
Summary: Vendor scores from our new B2B Marketing Automation Vendor Selection Tool offer new proof of an old truth: there's no one best system for everyone.
The one point I make every time I discuss software selection is that you have to find a vendor that matches your own business needs. No one ever denies this, of course, but the typical next question is still, “Who are the industry leaders?” – with the unstated but obvious intention of limiting consideration to whoever gets named.
It’s not that these people didn’t listen: they certainly want a system to meet their needs. But I think they’re assuming that most systems are pretty much the same, and therefore the industry leaders are the most likely to meet their requirements. The assumption is wrong but it’s hard to shake. My reluctance to contribute to this error is the main reason I’ve carefully avoided any formal ranking of vendors over the years.
But of course you know that I’ve now abandoned that position with the new B2B Marketing Automation Vendor Selection Tool (VEST) – which I’ll remind you is both totally awesome and available for sale on this very nice Web page. I’ll admit my change is partly about giving the market what it wants. But I also believe the new VEST can help to educate people about product differences, leading them to look more deeply than they would otherwise. Certainly the VEST gives them fingertip access to vastly more information about more products than they are likely to gather on their own. So, in that sense at least, it will surely help them to consider more options.
Back to the education part. Even someone as wise as you, a Regular Reader Of This Blog, may wonder whether those Important Differences really exist. After all, wouldn’t it be safe to assume that the industry leaders are in fact the strongest products across the board?
Nope.
In fact, the best thing about the new VEST may be that I finally have hard data to prove this point. The graphic below may not be very legible, but it’s really intended to illustrate patterns rather than show a lot of detailed information.
Before you squint too hard, here’s what you’re looking at:
- left to right, I’ve listed the 18 VEST vendors (nice alliteration) in order of their percentage of small business clients. So vendors with mostly small clients are at the left, and vendors with mostly large clients are at the right.
- reading down, there are three big blocks relating to vendor scores for small, mid-size and large businesses. (In case you missed a class, the VEST has different scoring schemes for those three client groups because their needs are different.)
- within the three big blocks, there are blocks for product categories (lead generation, campaign management, scoring and distribution, reporting, technology, usability and pricing) and for vendor categories (company strength and sector presence [sectors are another term for the small, mid-size and large businesses]). Each category has its own row.
- the bright green cells represent the highest-ranked vendors for each category. Specifically, I took the vendor scores (based on the weighted sum of vendor scores on the individual items—as many as 60 items in some categories) and normalized them on a scale of 0 to 1. In the product categories, green cells represent a normalized score of .9 or higher (that is, the vendor’s score was within 10% of the highest score). In the vendor categories, where the top vendor sometimes scores much higher than the rest, green cells represent a normalized score of .75 or better. (A minimal sketch of this calculation appears after this list.)
- the dark green cells show the highest combined scores across all product and vendor categories. The combined scores reflect the weights applied to the individual categories, as I explained in my earlier posts. Again, the scores are normalized and the green indicates scores higher than .9 for product fit and .75 for vendor fit.
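To make the arithmetic concrete, here is a minimal sketch of that calculation with invented numbers: a weighted sum of item scores per vendor, normalization against the top score, and the highlight threshold. It illustrates the approach described above; it is not the actual VEST formula, weights, or data.

```python
# Invented item scores (0-2) and weights for one category, three vendors.
item_weights = {"email builder": 0.5, "landing pages": 0.3, "lead scoring": 0.2}
vendor_items = {
    "Vendor A": {"email builder": 2, "landing pages": 1, "lead scoring": 2},
    "Vendor B": {"email builder": 1, "landing pages": 2, "lead scoring": 1},
    "Vendor C": {"email builder": 2, "landing pages": 2, "lead scoring": 0},
}

# Weighted sum of item scores per vendor for the category.
raw = {
    vendor: sum(item_weights[item] * score for item, score in items.items())
    for vendor, items in vendor_items.items()
}

# Normalize against the highest score so the leader is 1.0.
top = max(raw.values())
normalized = {vendor: score / top for vendor, score in raw.items()}

# Highlight vendors within 10% of the leader (the ".9 or higher" rule).
leaders = [vendor for vendor, score in normalized.items() if score >= 0.9]
print(normalized, leaders)
```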
Ok then. Now that you know what you’re looking at, here are a few observations:
- colored cells are concentrated at the left in the upper blocks, spread pretty widely in the middle, and to the right in the lower blocks. In concrete terms, this means that vendors with the most small business clients are rated most highly on small business features, vendors with a mix of clients dominate the middle, and vendors with large clients have the strongest big-client features. Not at all surprising but good validation that the scores are realistic.
- there are no solid columns of cells. That is, no single vendor is best at everything, even within a single buyer type. The nearest exception is at the bottom right, where Neolane has five green product cells out of seven for large clients. Good for them, of course, but note there are five dark green cells on the large-company product fit row: that is, several other vendors have combined product scores within 10% of Neolane’s.
- light green cells are spread widely across the rows. This means that most vendors are among the best at something. In fact, only Genius lacks at least one green cell somewhere on the diagram. (And this isn’t fair to Genius, which has some unique features that are very important to certain users.)
- dark green cells aren’t necessarily below the columns with the most light green cells. The most glaring example is in the center row, where True Influence has a dark green cell (among the best over-all) without any light green cells (not the best in any category). This reflects the range of scores within each vendor: that is, vendors are often very good at some things and not so good at others.
All these observations lead back to the same central point: different vendors are good at different things and no one vendor is best at everything. This is exactly what buyers need to recognize to understand why it isn’t safe to look only at the market leaders. Nor can they simply decide based on the category rankings: there’s plenty of variation among individual items within those rankings too. In other words, there’s truly no substitute for understanding your requirements and researching the vendors in detail. The new VEST will help, but whether you buy it or not, you still have to do the work to make a good choice.
Wednesday, January 26, 2011
B2B Marketing Automation Report Is Ready...My Web Site, Not So Much
The good news is, my new B2B Marketing Automation report (more formally: the Vendor Selection Tool, or VEST) is now available. The bad news is I can't actually sell it online, despite the best efforts of Web masters on two continents. But the good news is I'm more than happy to take credit card orders directly if you send me an email or give me a call. Email is info@raabguide.com.
To recap a bit, the new report is based on a survey of 18 vendors, who answered nearly 200 questions about their products and companies. Most answers were scored as 0, 1 or 2, indicating whether a particular feature was not available, partly available, or fully available. I translated other answers, such as starting price or number of employees, into similar 0-2 ranges so I could combine everything in a scoring formula. See my posts over the past few weeks for details on that.
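As an illustration of that translation step, here is a small sketch that bins a continuous answer, such as starting price, into the same 0-2 range so it can sit alongside the feature scores. The breakpoints and prices are invented for the example; the report's actual conversions are not reproduced here.

```python
def price_to_score(starting_price, low=500, high=2000):
    """Map a monthly starting price to a 0-2 score (lower price scores higher).

    The breakpoints are hypothetical: at or below `low` earns 2,
    at or below `high` earns 1, anything above earns 0.
    """
    if starting_price <= low:
        return 2
    if starting_price <= high:
        return 1
    return 0

# Invented starting prices for three hypothetical vendors.
for vendor, price in {"Vendor A": 300, "Vendor B": 1500, "Vendor C": 4000}.items():
    print(vendor, price_to_score(price))
```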
The final result was three sets of scores for each vendor. The sets represent fitness for small, mid-size and large businesses, and each set contains a product fit score and vendor fit score. The idea was to simulate the type of scoring that a typical business in each category might do in its own vendor evaluation. Of course, no one's business is truly typical, so the interactive version of the tool also lets you create your own custom scoring weights.
The core of the new report, therefore, contains two sections: scatter diagrams plotting all the vendors in a typical "industry matrix" style and individual vendor profiles.
The industry matrix puts leaders at the top right, where God and Gartner evidently intended them to be, with cleverly named groups everywhere else. The clever part is giving names that are descriptive without being insulting. I settled on:
- "alternatives" (strong product fit but weak vendor fit)
- "anomalies" (weak product but strong vendor fit)
- "long shots" (weak product fit and weak vendor fit)
The vendor profiles give more detail about each vendor, including showing the scores for components within the product fit (7 components) and vendor fit (2 components). This gives some good insight into where the rankings came from.
So far so good. As I hinted before, there's both an interactive and a non-interactive version of the report. This is partly because I don't think everyone will want to pay the full price for the interactive version and partly because some people have had problems running the interactive version, which uses Adobe Flash within a PDF. The non-interactive version, which I'm tactfully referring to as "basic", has an introductory section with industry explanations, recommendations on a selection process, etc., plus the three industry matrix charts (for small, mid-size and large businesses) and individual profiles of each vendor. The profiles offer some narrative and scores for the components within the larger scores: seven components within the product fit (lead generation, campaign management, scoring, etc.) and two within the vendor fit (company strength and sector expertise). These give some insight into where the scores came from. This version is priced at $295.
The interactive version has all those elements, which are made interactive by the fact that users can change the weights assigned to the different components within the product and vendor fit scores. You've seen some of this in the PDFs I posted over the past few weeks. It's great fun: there are little sliders for the weights, and the vendors zoom around on the chart as you move them. A wonderful feeling of power.
The interactive edition also contains three more sections:
- Item Detail, which lets you see the 200-ish individual items used in the scoring, including their definitions and the weights assigned in each of the three scoring schemes.
- Custom Weights, which lets you set your own scoring weights for the individual items. You can start with the existing small, mid-size, or large weights as a base.
- Compare (my personal favorite), which lets you pick any three vendors and see how their scores compare in any of the weighting sets (small, mid-size, large, or custom). You can see bar charts with overviews and then drill into the item-by-item details for each category. This is where you see the specific differences between vendors.
Price for the interactive edition is $795.
I'll be presenting some additional analysis based on what's in the reports over the next few weeks, and of course will make a formal announcement once the e-commerce bugs are worked out. Again, though, you're welcome to send me a note to get your copy at once.
Monday, January 17, 2011
B2B Marketing Automation Vendor Comparison -- Here's a Sample
I’ve been having way too much fun working on my new industry report. I decided to make it an interactive document that lets users (viewers? readers? The Chosen?) set their own weights for the different scoring categories and do detailed, side-by-side comparisons of vendors they select. This gives the document vastly more play value than a simple report. Much more important, it reinforces the point that I keep stressing, which is that every evaluation must be based on the buyer’s unique needs. Having three different sets of scores was a step in that direction, and making things interactive goes still further.
Click here to download a sample of the format. (Beware that it's a big file and can take several minutes to load.) Vendor names have been replaced with football teams and the specific details are excluded. But you can see the results of the different scoring schemes and also get a list of the specific items with their weights and definitions. The sample also contains draft versions of the introductory materials, which need some reformatting. (Note to Mac users: this is an Adobe Flash document; you'll have to use Adobe Reader to view it.)
In case it’s not obvious, you can move the little sliders on the “Sector Chart” tab to see how the different vendors move around depending on how you weight different categories of attributes. The final report will show directly how much each category contributes to each vendor’s score. You can also adjust the category weights on the grid within the “Scoring Weights” tab, which shows the detailed items. You have to mouse over the numbers in the grids – not the most convenient method, but the one allowed by the software I’m using (SAP Business Objects’ Xcelsius).
I actually did work up a version of the report that lets users set their own weights for the individual items. Unfortunately, that seems to overtax the software, so I’ll have to leave that out of the final product. I might put it out as a separate product or upgrade to the base report.
Please take a look and let me know what you think. The report itself should be ready for distribution within a few days, and of course I'll announce it here first.
Monday, December 27, 2010
Ranking B2B Marketing Automation Vendors: How I Built My Scores (part 1)
Summary: The first of three posts describing my new scoring system for B2B marketing automation vendors.
I’ve finally had time to work up the vendor scores based on the 150+ RFP questions I distributed back in September. The result will be one of those industry landscape charts that analysts seem pretty much obliged to produce. I have never liked those charts because so many buyers consider only the handful of anointed “leaders”, even though one of the less popular vendors might actually be a better fit. This happens no matter how loudly analysts warn buyers not to make that mistake.
On the other hand, such charts are immensely popular. Recognizing that buyers will use the chart to select products no matter what I tell them, I settled on dimensions that are directly related to the purchase process:
- product fit, which assesses how well a product matches buyer needs. This is a combination of features, usability, technology, and price.
- vendor strength, which assesses a vendor’s current and future business position. This is a combination of company size, client base, and financial resources.
These are conceptually quite different from the dimensions used in the Gartner and Forrester reports* , which are designed to illustrate competitive position. But I’m perfectly aware that only readers of this blog will recognize the distinction. So I've also decided to create three versions of the chart, each tailored to the needs of different types of buyers.
In the interest of simplicity, my three charts will address marketers at small, medium and big companies. The labels are really short-hand for the relative sophistication and complexity of user requirements. But if I explicitly used a scale from simple to sophisticated, no one would ever admit that their needs were simple -- even to themselves. I'm hoping the relatively neutral labels will encourage people to be more realistic. In practice, we all know that some small companies are very sophisticated marketers and some big companies are not. I can only hope that buyers will judge for themselves which category is most appropriate.
The trick to producing three different rankings from the same set of data is to produce three sets of weights for the different elements. Raab Associates’ primary business for the past two decades has been selecting systems, so we have a well-defined methodology for vendor scoring.
Our approach is to first set the weights for major categories and then allocate weights within those categories. The key is that the weights must add to 100%. This forces trade-offs first among the major categories and then among factors within each category. Without the 100% limit, two things happen:
- everything is listed as high priority. We consistently found that if you ask people to rate features as "must have", "desirable" and "not needed", 95% of requirements are rated as “must have”. From a prioritization standpoint, that's effectively useless.
- categories with many factors are overweighted. What happens is that each factor gets at least one point, giving the category a high aggregate total. For example, a category with five factors has a weight of at least five, while a category with 20 factors has a weight of 20 or more. (A small sketch of the two-level allocation appears after this list.)
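Here is a small sketch of that two-level allocation with made-up categories and numbers: the category weights are forced to sum to 100%, and each category's share is then divided among its factors, so adding a factor dilutes its siblings instead of inflating the category.

```python
# Hypothetical category weights (must sum to 1.0, i.e. 100%).
category_weights = {"campaigns": 0.40, "reporting": 0.25, "technology": 0.20, "pricing": 0.15}
assert abs(sum(category_weights.values()) - 1.0) < 1e-9

# Hypothetical relative factor weights within each category.
factor_weights = {
    "campaigns": {"email": 3, "nurture flows": 2, "landing pages": 1},
    "reporting": {"campaign results": 2, "revenue attribution": 1},
    "technology": {"API": 1, "uptime": 1},
    "pricing": {"starting price": 1},
}

def allocate(category_weights, factor_weights):
    """Spread each category's weight across its factors in proportion to their relative weights."""
    allocation = {}
    for category, cat_weight in category_weights.items():
        factors = factor_weights[category]
        total = sum(factors.values())
        for factor, rel in factors.items():
            allocation[(category, factor)] = cat_weight * rel / total
    return allocation

weights = allocate(category_weights, factor_weights)
print(round(sum(weights.values()), 6))  # still 1.0: extra factors never inflate a category
```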
The following table shows the major weights I assigned. The heaviest weight goes to lead generation and nurturing campaigns – a combined 40% across all buyer types. I weighted pricing much more heavily for small firms, and gave lead scoring and technology heavier weights at larger firms. You’ll notice that Vendor is weighted at zero in all cases: remember that these are weights for product fitness scores. Vendor strength will be scored on a separate dimension.

I think these weights are reasonable representations of how buyers think in the different categories. But they’re ultimately just my opinion. So I also created a reality check by looking at vendors who target the different buyer types.
This was possible because the matrix asked vendors to describe their percentage of clients in small, medium and large businesses. (The ranges were under $20 million, $20 million to $500 million, and over $500 million annual revenue.) Grouping vendors with similar percentages of small clients yielded the following sets:
- small business (60% or more small business clients): Infusionsoft, OfficeAutoPilot, TrueInfluence
- mixed (33-66% small business clients): Pardot, Marketo, Eloqua, Manticore Technology, Silverpop, Genius
- specialists (15%-33% small business): LeadFormix, TreeHouse Interactive, SalesFUSION
- big clients (fewer than 15% small business): Marketbright, Neolane, Aprimo On Demand
(I also have data from LoopFuse, Net Results, and HubSpot, but didn’t have the client distribution for the first two. I excluded HubSpot because it is a fundamentally different product.)
If my weights were reasonable, two things should happen (a quick cross-check of both conditions is sketched after this list):
- vendors specializing in each client type should have the highest scores for that client type (that is, small business vendors have higher scores than big business vendors using the small business weights.)
- vendors should have their highest scores for their primary client type (that is, small business vendors should have higher scores with small business weights than with big business weights).
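Here is a compact sketch of that cross-check with invented group averages: rows are vendor groups, columns are weighting schemes, and the code simply verifies both conditions. The numbers are illustrative, not the actual study results.

```python
# Invented average scores for three vendor groups under three weighting schemes.
group_scores = {
    "small-business vendors": {"small": 0.82, "mid": 0.61, "large": 0.40},
    "mixed vendors":          {"small": 0.70, "mid": 0.78, "large": 0.66},
    "big-business vendors":   {"small": 0.55, "mid": 0.72, "large": 0.85},
}
expected = {"small": "small-business vendors", "mid": "mixed vendors", "large": "big-business vendors"}

# Condition 1: under each weighting scheme, the matching vendor group scores highest.
for scheme, target_group in expected.items():
    best_group = max(group_scores, key=lambda g: group_scores[g][scheme])
    print(scheme, "weights favor", best_group, "- ok" if best_group == target_group else "- mismatch")

# Condition 2: each vendor group scores highest under its own weighting scheme.
for group, scores in group_scores.items():
    best_scheme = max(scores, key=scores.get)
    print(group, "peaks under", best_scheme, "weights")
```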
As the table below shows, that is pretty much what happened:

So far so good. But how did I know I’d assigned the right weights to the right features?
I was particularly worried about the small business weights. These showed a relatively small difference in scores across the different vendor groups. In addition, I knew I had weighted price heavily. In fact, it turned out that if I took price out of consideration, the other vendor groups would actually have higher scores than the small business specialists. This couldn't be right: the other systems are really too complicated for small business users, regardless of price.

Clearly some adjustments were necessary. I'll describe how I handled this in tomorrow's post.
_______________________________________________________
* “ability to execute” and “completeness of vision” for Gartner, “current offering”, “market presence” and “strategy” for Forrester.
Wednesday, December 08, 2010
Case Study: Using a Scenario to Select Business Intelligence Software
Summary: Testing products against a scenario is critical to making a sound selection. But the scenario has to reflect your own requirements. While this post shows results from one test, rankings could be very different for someone else.
I’m forever telling people that the only reliable way to select software is to devise scenarios and test the candidate products against them. I recently went through that process for a client and thought I’d share the results.
1. Define Requirements. In this particular case, the requirements were quite clear: the client had a number of workers who needed a data visualization tool to improve their presentations. These were smart but not particularly technical people, and they only did a couple of presentations each month. This meant the tool had to be extremely easy to use, because the workers wouldn’t find time for extensive training and, being just occasional users, would quickly forget most of what they had learned. They also wanted to do some light ad hoc analysis within the tool, but just on small, summary data sets, since the serious analytics are done by other users earlier in the process. And, oh, by the way, if the same tool could provide live, updatable dashboards for clients to access directly, that would be nice too. (In a classic case of scope creep, the client later added mapping capabilities to the list, merging this with a project that had been running separately.)
During our initial discussions, I also mentioned that Crystal Xcelsius (now SAP Crystal Dashboard Design) has the very neat ability to embed live charts within Powerpoint documents. This became a requirement too. (Unfortunately, I couldn’t find a way to embed one of those images directly within this post, but you can click here to see a sample embedded in a pdf. Click on the radio buttons to see the different variables. How fun is that?)
2. Identify Options. Based on my own knowledge and a little background research, I built a list of candidate systems. Again, the main criteria were visualization, ease of use and – it nearly goes without saying – low cost. A few were eliminated immediately due to complexity or other reasons. This left: Xcelsius (SAP Crystal Dashboard Design), Advizor, Spotfire, QlikView, Lyzasoft, and Tableau.
3. Define the Scenario. I defined a typical analysis for the client: a bar chart comparing index values for four variables across seven customer segments. The simplest bar chart showed all segment values for one variable. Another showed all variables for all segments, sorted first by variable and then by segment, with the segments ranked according to response rate (one of the four variables). This would show how the different variables related to response rate. It looked like this:

The tasks to execute the scenario were (a rough code sketch of the target chart follows the list):
- connect to a simple Excel spreadsheet (seven segments x four variables.)*
- create a bar chart showing data for all segments for a single variable.
- create a bar chart showing data for all segments for all variables, clustered by variable and sorted by the value of one variable (response index).
- provide users with an option to select or highlight individual variables and segments.
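To show what the scenario actually demands of a tool, here is a rough sketch of the target chart built in Python with pandas and matplotlib, using invented index values and placeholder segment names. The interesting part is the clustering by variable and the sort by response index, which is exactly where several of the tested tools fell short.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Invented index values: seven segments x four variables (100 = average).
data = pd.DataFrame(
    {
        "response index": [140, 125, 110, 100, 95, 80, 60],
        "revenue index":  [120, 130, 105, 100, 90, 85, 70],
        "visits index":   [100, 115, 120, 100, 105, 90, 80],
        "tenure index":   [ 90, 100, 110, 100, 120, 95, 85],
    },
    index=[f"Segment {i}" for i in range(1, 8)],
)

# Order the segments by their response index, then transpose so each variable
# gets its own cluster of bars, with one bar per segment inside the cluster.
order = data["response index"].sort_values(ascending=False).index
data.loc[order].T.plot(kind="bar")

plt.ylabel("Index (100 = average)")
plt.title("Segments by variable, sorted by response index")
plt.tight_layout()
plt.show()
```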
4. Results. I was able to download free or trial versions of each system. I installed these and then timed how long it took to complete the scenario, or at least to get as far as I could before reaching the frustration level where a typical end-user would stop.
I did my best to approach each system as if I’d never seen it before, although in fact, I’ve done at least some testing on every product except SpotFire, and have worked extensively with Xcelsius and QlikView. As a bit of a double-check, I dragooned one of my kids into testing one system when he was innocently visiting home over Thanksgiving: his time was actually quicker than mine. I took that as a proof I'd tested fairly.
Notes from the tests are below.
- Xcelsius (SAP Crystal Dashboard Design): 3 hours to set up bar chart with one variable and allowing selection of individual variables. Did not attempt to create chart showing multiple variables. (Note: most of the time was spent figuring out how Xcelsius did the variable selection, which is highly unintuitive. I finally had to cheat and use the help functions, and even then it took at least another half hour. Remember that Xcelsius is a system I’d used extensively in the past, so I already had some idea of what I was looking for. On the other hand, I reproduced that chart in just a few minutes when I was creating the pdf for this post. Xcelsius would work very well for a frequent user, but it’s not for people who use it only occasionally.)
- Advizor: 3/4 hour to set up bar chart. Able to show multiple variables on same chart but not to group or sort by variable. Not obvious how to make changes (must click on a pull down menu to expose row of icons).
- Spotfire: 1/2 hour to set up bar chart. Needed to read Help to put multiple lines or bars on same chart. Could not find way to sort or group by variable.
- QlikView: 1/4 hour to set up bar chart (using default wizard). Able to add multiple variables and sort segments by response index, but could not cluster by variable or expose menu to add/remove variables. Not obvious how to make changes (must right-click to open properties box – I wouldn’t have known this without my prior QlikView experience).
- Lyzasoft: 1/4 hour to set up bar chart with multiple variables. Able to select individual variables, cluster by variable and sort by response index, but couldn’t easily assign different colors to different variables (required for legibility). Annoying lag each time chart is redrawn.
- Tableau: 1/4 hour to set up bar chart with multiple variables. Able to select individual variables, cluster by variable, and sort by variable. Only system to complete the full scenario.
5. Final Assessment. Although the scenario nicely addressed ease of use, other considerations played into the final decision. These required a bit more research and some trade-offs, particularly regarding the Xcelsius-style ability to embed interactive charts within a PowerPoint slide. No other system on the list could do this without either loading additional software (often a problem when end-user PCs are locked down by corporate IT) or accessing an external server (a problem for mobile users and for license costs).
The following table shows my results:

6. Next Steps. The result of this project wasn’t a final selection, but a recommendation of a couple of products to explore in depth. There were still plenty of details to research and confirm. However, starting with a scenario greatly sped up the work, narrowed the field, and ensured that the final choice would meet operational requirements. That was well worth the effort. I strongly suggest you do the same.
____________________________________
* The actual data looked like this. Here's a link if you want to download it:

Thursday, September 16, 2010
150+ Questions for Your Marketing Automation RFP
Summary: I've posted a list of nearly 200 RFP questions that I hope many people will adapt to their own needs. If it's used widely, buyers and vendors both benefit.
Death, taxes and RFPs. For business software vendors, all three are equally inevitable – and it's not clear which they dislike most. In my on-going humble efforts to serve the industry, I’ve posted nearly 200 detailed questions that could serve as the backbone for many RFPs. The list is available in the Resources section at www.raabguide.com; it’s free once you register.
The thought here is that everyone would benefit if many buyers worked from a standard list. Vendors could prepare one set of answers and buyers would get faster and more reliable responses to a thorough set of questions.
I do have a minor ulterior motive in posting this list. Those of you familiar with the Raab Guide to Demand Generation Systems know it already contains very detailed information on major vendors (Aprimo, Eloqua, Genius.com, Manticore Technology, Marketbright, Marketo, Neolane and Silverpop). But preparing each entry takes a tremendous amount of work and, frankly, it’s hard to make sense of the results. So I’ve come up with a list of mostly yes/no questions that highlight key differences among vendors. This is much easier to prepare and probably easier for buyers to use. I’ve sent this list to two dozen vendors and will publish the results in a new report as soon as the replies come back. Posting the list will encourage the vendors to participate, since they can expect other people to ask the same questions.
Obviously I wouldn't have planned the new report if I didn't think it was worthwhile. Still, the approach has several drawbacks. Here's how I'm dealing with them.
- It relies on the vendors to answer accurately. Outright puffery aside, written questions are open to interpretation and you can bet the vendors will give themselves the benefit of any doubt. The best I could do was to make the questions as specific as possible. Here’s a typical example:
share assets across campaigns: marketing materials such as templates, emails, Web pages and forms, content blocks and links can be shared across campaigns. “Sharing” means the component is stored outside of a specific campaign in a central repository which is accessed during campaign development. The system may either create an independent copy of the item for each campaign, meaning changes to the local copy or the master do not affect each other, or it can establish a link between the campaign and the master copy, meaning any change to the master will be reflected in all campaigns using that item.
Hopefully this is precise enough that a “yes” actually means something. I’ve also described a couple of alternative ways of solving the problem, in the hope that this will encourage buyers to dig deeper on their own.
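To make the copy-versus-link distinction concrete, here is a small sketch of the two sharing models in Python. It is purely my own illustration – no vendor implements it this way, and all class and method names are invented: an independent copy freezes the asset at the moment it is added, while a linked reference picks up any later change to the master.

```python
# Hypothetical illustration of "independent copy" vs. "linked master" sharing.
from copy import deepcopy

class Asset:
    def __init__(self, name, content):
        self.name = name
        self.content = content

class Campaign:
    def __init__(self, name):
        self.name = name
        self.assets = []

    def add_copy(self, master):
        # Independent copy: later edits to the master do not affect this campaign.
        self.assets.append(deepcopy(master))

    def add_link(self, master):
        # Linked reference: any change to the master is reflected here.
        self.assets.append(master)

master = Asset("welcome_email", "Hello {first_name}!")
spring = Campaign("spring_launch")
spring.add_copy(master)
fall = Campaign("fall_nurture")
fall.add_link(master)

master.content = "Hi {first_name}, welcome aboard!"
print(spring.assets[0].content)  # still "Hello {first_name}!" -- the copy is frozen
print(fall.assets[0].content)    # "Hi {first_name}, welcome aboard!" -- follows the master
```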
- The list is generic. Buyers have different needs. Each will care about only some of the questions on the list and about other questions I’ve left out. Of course, I can (and just did) warn buyers to select the items that matter to them. Beyond that, I’m creating separate weights for how important each answer is to small, medium and large marketing departments. That will let my final report include summary scores that help identify which vendors are best suited for each type of buyer (a small sketch of the scoring arithmetic appears below).
Naturally, people will disagree with some of my weights. But that’s a healthy debate. In fact, prioritizing requirements is the most important discussion buyers can have when selecting a product. So bring it on.
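For what it’s worth, the arithmetic behind those summary scores is simple. Here is a sketch in Python with made-up vendors, questions and weights – none of these numbers come from the actual report: each yes/no answer is multiplied by the weight assigned to that question for a given department size, and the products are summed.

```python
# Hypothetical answers and weights; 1 = yes, 0 = no.
answers = {
    "Vendor A": {"share_assets": 1, "dynamic_content": 1, "api_access": 0},
    "Vendor B": {"share_assets": 0, "dynamic_content": 1, "api_access": 1},
}

weights = {  # importance of each question by marketing-department size
    "small":  {"share_assets": 1, "dynamic_content": 2, "api_access": 0},
    "medium": {"share_assets": 2, "dynamic_content": 2, "api_access": 1},
    "large":  {"share_assets": 3, "dynamic_content": 1, "api_access": 3},
}

for size, question_weights in weights.items():
    for vendor, vendor_answers in answers.items():
        score = sum(vendor_answers[q] * w for q, w in question_weights.items())
        print(f"{size:>6} | {vendor}: {score}")
```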
- Not everything can be scored. Usability, vendor support and reliability are just some items that are hard to capture in yes/no questions. They also can change pretty rapidly. I can’t offer a solution other than to stress the importance of buyers doing their own research through demonstrations (based on their own scenarios), reference checking and conversations with other users.
In theory it should be possible for social media to provide a public forum for such issues. But I don’t see a way to do this without having self-interested parties distort the results. Suggestions, anyone?
* * *
Speaking of suggestions, I’m sure people will think of questions that should be added. I actually have a few myself. Changes will have to wait because the current set has already gone out to vendors. But if this concept generally works, we can expect future iterations of both the report and the master list. So there will be time for updates. If this really takes off, perhaps the list can be maintained in a communal form such as a Wiki. Raab Associates does not need to own this.
Indeed, a truly ideal solution would be for vendors to post their answers on their own Web sites. That would give buyers clear, consistent information without issuing an RFP at all. I’m not holding my breath for that one, however.
In any case, please download the list, use it as you see fit, and let me know what happens. As near as I can tell, everybody wins.