Summary: Vendor scores from our new B2B Marketing Automation Vendor Selection Tool offer new proof of an old truth: there's no one best system for everyone.
The one point I make every time I discuss software selection is that you have to find a vendor that matches your own business needs. No one ever denies this, of course, but the typical next question is still, "Who are the industry leaders?" – with the unstated but obvious intention of limiting consideration to whoever gets named.
It’s not that these people didn’t listen: they certainly want a system to meet their needs. But I think they’re assuming that most systems are pretty much the same, and therefore that the industry leaders are the most likely to meet their requirements. The assumption is wrong, but it’s hard to shake. My reluctance to contribute to this error is the main reason I’ve carefully avoided any formal ranking of vendors over the years.
But of course you know that I’ve now abandoned that position with the new B2B Marketing Automation Vendor Selection Tool (VEST) – which I’ll remind you is both totally awesome and available for sale on this very nice Web page. I’ll admit my change is partly about giving the market what it wants. But I also believe the new VEST can help to educate people about product differences, leading them to look more deeply than they would otherwise. Certainly the VEST gives them fingertip access to vastly more information about more products than they are likely to gather on their own. So, in that sense at least, it will surely help them to consider more options.
Back to the education part. Even someone as wise as you, a Regular Reader Of This Blog, may wonder whether those Important Differences really exist. After all, wouldn’t it be safe to assume that the industry leaders are in fact the strongest products across the board?
Nope.
In fact, the best thing about the new VEST may be that I finally have hard data to prove this point. The graphic below may not be very legible, but it’s really intended to illustrate patterns rather than show a lot of detailed information.
Before you squint too hard, here’s what you’re looking at:
- left to right, I’ve listed the 18 VEST vendors (nice alliteration) in order of their percentage of small business clients. So vendors with mostly small clients are at the left, and vendors with mostly large clients are at the right.
- reading down, there are three big blocks relating to vendor scores for small, mid-size and large businesses. (In case you missed a class, the VEST has different scoring schemes for those three client groups because their needs are different.)
- within the three big blocks, there are blocks for product categories (lead generation, campaign management, scoring and distribution, reporting, technology, usability and pricing) and for vendor categories (company strength and sector presence [sectors are another term for the small, mid-size and large businesses]). Each category has its own row.
- the bright green cells represent the highest-ranked vendors for each category. Specifically, I took the vendor scores (based on the weighted sum of vendor scores on the individual items – as many as 60 items in some categories) and normalized them on a scale of 0 to 1. In the product categories, green cells represent a normalized score of .9 or higher (that is, the vendor’s score was within 10% of the highest score). In the vendor categories, where the top vendor sometimes scores much higher than the rest, green cells represent a normalized score of .75 or better.
- the dark green cells show the highest combined scores across all product and vendor categories. The combined scores reflect the weights applied to the individual categories, as I explained in my earlier posts. Again, the scores are normalized, and green indicates scores of .9 or higher for product fit and .75 or higher for vendor fit. (There’s a rough code sketch of this calculation right after this list.)
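For the technically inclined, here’s a rough sketch in Python of how those green cells get assigned: a weighted sum of item scores for each category, normalization against the top vendor’s score, and the .9 or .75 thresholds. The vendor names, item scores, and weights below are purely illustrative, not actual VEST data, and the normalization shown is a simplified version of the calculation.

```python
# Rough sketch of the green-cell logic. The vendors, item scores, and
# weights below are made up for illustration only; they are not VEST data.

def category_score(item_scores, item_weights):
    """Weighted sum of one vendor's item scores within a single category."""
    return sum(s * w for s, w in zip(item_scores, item_weights))

def normalize(scores):
    """Scale each vendor's category score so the top vendor equals 1.0."""
    top = max(scores.values())
    return {vendor: score / top for vendor, score in scores.items()}

def green_cells(normalized, threshold):
    """Vendors whose normalized score meets the threshold get a green cell."""
    return {vendor for vendor, score in normalized.items() if score >= threshold}

# Hypothetical item-level scores for one product category.
weights = [2.0, 1.0, 1.0]
raw = {
    "Vendor A": category_score([4, 3, 5], weights),
    "Vendor B": category_score([5, 4, 4], weights),
    "Vendor C": category_score([2, 3, 3], weights),
}

scores = normalize(raw)
print(green_cells(scores, 0.90))  # product categories: within 10% of the top score
print(green_cells(scores, 0.75))  # vendor categories: within 25% of the top score
```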
Ok then. Now that you know what you’re looking at, here are a few observations:
- colored cells are concentrated at the left in the upper blocks, spread pretty widely in the middle, and to the right in the lower blocks. In concrete terms, this means that vendors with the most small business clients are rated most highly on small business features, vendors with a mix of clients dominate the middle, and vendors with large clients have the strongest big-client features. Not at all surprising but good validation that the scores are realistic.
- there are no solid columns of cells. That is, no single vendor is best at everything, even within a single buyer type. The closest thing to an exception is at the bottom right, where Neolane has five green product cells out of seven for large clients. Good for them, of course, but note that there are five dark green cells on the large-company product fit row: that is, several other vendors have combined product scores within 10% of Neolane’s.
- light green cells are spread widely across the rows. This means that most vendors are among the best at something. In fact, Genius is the only vendor without at least one green cell somewhere on the diagram. (And this isn’t fair to Genius, which has some unique features that are very important to certain users.)
- dark green cells aren’t necessarily beneath the columns with the most light green cells. The most glaring example is in the center row, where True Influence has a dark green cell (among the best overall) without any light green cells (not the best in any single category). This reflects the range of scores within each vendor: that is, vendors are often very good at some things and not so good at others.
All these observations lead back to the same central point: different vendors are good at different things and no one vendor is best at everything. This is exactly what buyers need to recognize to understand why it isn’t safe to look only at the market leaders. Nor can they simply decide based on the category rankings: there’s plenty of variation among individual items within those rankings too. In other words, there’s truly no substitute for understanding your requirements and researching the vendors in detail. The new VEST will help, but whether you buy it or not, you still have to do the work to make a good choice.
Tuesday, February 01, 2011