I mentioned 6Sense briefly in a recent post about vendors who help companies find prospects on the Web. Since then, I’ve had a more detailed briefing, which clarified that their scope extends well beyond prospect lists to predictive models applied across all stages of the purchase cycle. We also clarified that users can extract company-level profiles, including attributes (industry, revenue, etc.) and key activities (Web site visits, topics researched), as well as scores at both the company and individual levels.
The extraction features are important – at least to me – because they determine whether 6Sense qualifies as a “customer data platform” (CDP), a type of system I see as fundamental for future marketing. As a quick refresher, CDP is defined as “a marketer-controlled system that supports external marketing execution based on persistent, cross-channel customer data.” The part about “supports external marketing execution” is where data extraction comes in: it means that external systems can access data within the CDP for their own use. 6Sense wouldn't be a CDP if it merely displayed its data on a CRM screen without letting the CRM system import it. If 6Sense exposed model scores but no other data, it would qualify as a CDP by the thinnest margin possible.
Of course, there are more important things about 6Sense than whether I consider it a CDP. Starting at the beginning, the system imports a list of each client’s customers and sales opportunities from CRM and marketing automation systems. Standard integrations are available for Salesforce.com, Oracle Eloqua and Marketo. APIs can load data from other sources, potentially including other CRM and marketing automation products, Web logs and tags, order processing, bookings, call centers, media impressions, and pretty much anything else.
The system standardizes and deduplicates this data at the individual and company levels. It then matches against company profiles that 6Sense itself has gathered from the usual Web sources – public social media, Web sites, job boards, directories, etc. – and from a network of third-party Web sites. The Web site network is unusual if not unique among B2B data providers; the most similar offerings I can think of are audience profiles from B2C site networks, from owners of large B2B sites, and profiles based on other B2B activity such as email response. The advantage of Web site activity is that it finds companies early in the buying cycle, when they are most open to considering new vendors. The system can map known individuals to individuals on partner Web sites, using hashing techniques to avoid passing personally identifiable information.
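As a rough illustration of that kind of privacy-safe matching, here is a minimal sketch (my own, not 6Sense's actual implementation, and assuming both parties agree on the same normalization): each side shares only a hash of the identifier, so records can be joined without ever exchanging the address itself.

```python
import hashlib

def hashed_id(email: str) -> str:
    """Normalize an email address and return its SHA-256 hash."""
    # If the marketer and the partner site apply the same normalization and
    # hash, matching hashes imply matching addresses, but neither side has
    # to reveal the address itself. Real systems typically add a shared salt.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The same person seen in the client's CRM and on a partner site:
print(hashed_id("Jane.Doe@example.com ") == hashed_id("jane.doe@example.com"))  # True
```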
The result of all this is a database with deep company and individual profiles including both attributes and activities. 6Sense uses this to build company and individual-level predictive models. Company models score each company’s likelihood to buy from the client. Individual models predict the individual’s likelihood to be the best sales contact. Models are built by 6Sense staff using automated techniques and take about three weeks to complete.
The system can also estimate what product each company is most likely to purchase, when it will buy, and what stage it has reached in the buying process. Stages are defined in consultation with the client. Assignment rules might use purchase likelihood or a predictive model trained against a sample of companies in each buying stage.
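To make the modeling step a bit more concrete, here is a hedged sketch of the general technique using a simple logistic regression; the feature names and data are invented for illustration, and 6Sense's actual models, built by its own staff, are certainly more elaborate.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative training data: one row per company, with made-up activity
# features; the label is whether the company ultimately bought.
# Columns: [product_page_visits, topic_searches, known_contacts]
X_train = np.array([
    [12, 3, 1],
    [0,  0, 0],
    [5,  8, 2],
    [1,  0, 1],
])
y_train = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new company: the output is a purchase likelihood between 0 and 1.
new_company = np.array([[7, 4, 1]])
print(round(model.predict_proba(new_company)[0, 1], 2))
```

The same kind of score, thresholded or modeled against labeled examples from each stage, is what would drive the stage assignments described above.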
Outputs from 6Sense can include lists of likely new prospect companies (not in the client’s existing database), contacts at those companies, current prospects organized by purchase stage and ranked by purchase likelihood, current contacts within each company, and key indicators that drive each company’s score. The key indicators can be very specific, such as searches for competitors’ names, visits to product detail pages, or activity by known leads.
Users can define segments based on these or other attributes and export their related data to CRM, marketing automation, ad targeting, or Web personalization systems via file transfers or API calls. 6Sense can also display the information on screen to help guide sales conversations and is now testing an extension to recommend specific talking points.
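For readers who want a feel for what such an export might look like, here is a purely hypothetical sketch; the endpoint, field names, and payload structure are invented for illustration and are not 6Sense's actual API.

```python
import json
import urllib.request

# Hypothetical payload: companies in a segment, with scores and key indicators.
segment_export = {
    "segment": "high-intent-manufacturing",
    "companies": [
        {
            "domain": "example.com",
            "purchase_likelihood": 0.82,
            "buying_stage": "evaluation",
            "key_indicators": ["competitor searches", "pricing page visits"],
        }
    ],
}

# Push the export to a (hypothetical) CRM import endpoint.
request = urllib.request.Request(
    "https://crm.example.com/api/import",  # illustrative URL only
    data=json.dumps(segment_export).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(request)  # left commented out: the endpoint is fictional
```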
Pricing for 6Sense starts at more than $100,000 and is based on factors including the number of models created and the volume of net new contacts provided. The company was founded in 2013 and released early versions of its product that same year. Formal release was in May 2014. It has ten current customers and more in the pipeline.
Tuesday, August 26, 2014
HubSpot Files for IPO: Solid Financials for a Young Company
HubSpot filed its much-anticipated S-1 for a public stock offering yesterday. Since the company has been admirably transparent all along about its finances, there were no big surprises: they lose considerable money, as expected, but their expenses seem about in line. A comparison with the S-1 figures of Eloqua and Marketo, plus Marketo’s most recent six months, shows:
- loss equal to 35% of revenue, compared with 11% for Eloqua (which was run very conservatively) and 55% for Marketo (which was highly aggressive).
- high absolute revenue level (well on track to exceed $100 million for the year, about 20% higher than Eloqua and 60% higher than Marketo at the time of S-1).
- subscription and support costs are 25% of subscription revenue, in between Eloqua and Marketo. (This ratio is important because it hints at the profitability of on-going operations regardless of sales costs. Marketo has made great strides in bringing it down since their S-1. Marketo’s R&D costs are also now more in line, at 21% of revenue. In fact, Marketo today looks a lot like the HubSpot S-1.)
- sales and marketing costs at 64% of revenue, well above Eloqua (which was growing much more slowly) and similar to Marketo (which was growing slightly faster).
The basic picture, then, is a disciplined company that has grown quickly while keeping costs in line. As I say, pretty much what we suspected.
The S-1 does provide some other insights – in particular, highlighting HubSpot’s shift in focus from very small businesses to mid-size businesses. The following table, taken directly from the S-1, shows this clearly: revenue per customer has climbed steadily from $5,395 in 2011 to $8,823 in the first half of 2014 – a 64% increase. Still, the average revenue per client is nowhere near Marketo, which is in the $30,000 to $40,000 range.
The table also shows that customer acquisition cost has increased by 76%, which is even more than average revenue. This makes sense: it’s harder to sell bigger accounts. The increased cost might be worrisome – it means HubSpot is losing more money on each new account – but the higher lifetime revenue means the difference will be made up fairly quickly. Even more important, the "subscription dollar retention rate" has improved sharply, from 71.6% to 90.3%, probably reflecting a combination of better customer retention, higher revenue from existing customers, and increased average revenue from new customers.
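For the arithmetic-minded, here is a quick back-of-the-envelope check of those figures; the payback portion uses assumed numbers of my own, not HubSpot's disclosures.

```python
# Revenue per customer growth from the S-1 table
rev_2011, rev_2014h1 = 5395, 8823
print(f"revenue per customer growth: {(rev_2014h1 - rev_2011) / rev_2011:.0%}")  # ~64%

# Illustrative payback: if acquisition cost rises faster than revenue per
# customer, the gap is recovered over time as long as retention holds up.
# The cost and margin below are assumptions, not figures from the S-1.
acquisition_cost = 11000        # assumed cost to land one new customer
annual_revenue = rev_2014h1     # average subscription revenue per customer
gross_margin = 0.75             # assumed subscription gross margin
payback_years = acquisition_cost / (annual_revenue * gross_margin)
print(f"implied payback: {payback_years:.1f} years")
```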
The S-1 also reveals that agencies and agency referrals accounted for 42% of customers and 33% of revenue for the six months ended June 30, 2014 – meaning that agency clients tend to be smaller than average. I’d expect HubSpot to have more direct sales as it engages larger clients, but that doesn’t seem to be the case, at least yet: compensation to agency partners grew by $1.2 million for all of 2013 and by $0.9 million for the first half of 2014, suggesting a 50% or better increase for the year as a whole. This is roughly in line with over-all revenue growth.
In fact, the only thing that struck me as a bit odd in the prospectus was HubSpot’s frequent description of itself as an “all in one” marketing and sales solution – a term more usually applied to micro-business specialists like Infusionsoft and Ontraport, which combine marketing automation with CRM. HubSpot does make several references to supporting sales departments in its document, which a casual reader might interpret to mean it also provides CRM features. But this is something the company has adamantly refused to do for years, despite pressure from its original small business clients and the partners who serve them. It makes even less sense when selling to the mid-market. The prospectus does eventually state that its sales features are designed to integrate with CRM, but the ambiguity is atypically off-message for a firm that is usually so clear about its position.
Thursday, August 21, 2014
MarTech Conference: Chief Marketing Technology Officers Come Out and Play
I got home late last night from the inaugural two-day MarTech conference in Boston, which was simply terrific. Conference chair Scott Brinker assembled an all-star cast of presenters and, more important, a finely balanced mix of topics from industry trends to practical issues of planning, hiring, organization, and technology choices. (I guess I should note that I also presented and contributed a few thoughts on the agenda, but credit for the success goes elsewhere.)
While the formal topic of the conference was “marketing technology”, its real theme was “marketing technology leadership”, and in particular the emergence of the “chief marketing technology officer” as a critical role. Many attendees held that job, either in title or through de facto responsibilities. Most were clearly delighted to find so many other people sharing the same opportunities and challenges. They had probably developed a secret handshake by the time the conference was over, although as a mere consultant I wasn’t told what it was.
The presentations were consistently excellent, which in itself is close to amazing: I guess Scott had checked everyone out carefully before extending his invitations. Different attendees probably had their own favorites depending on their own interests. That being the case, I think it doesn't insult anyone to say that the two that most resonated with me personally were by Laura McLellan of Gartner – source of the famous “marketing will spend more than IT by 2017” forecast, which she reported has already come true -- and Clorox Director of Marketing Technology Shawn Goodin.
The factoid I recall from McLellan’s presentation was that 81% of large companies already have someone in the chief marketing technologist role – so that particular future had already arrived. My favorite part of Goodin’s talk was a marketing technology capability heat map that displayed all of a company’s tech strengths and weaknesses on one page.
I was also immensely impressed with SapientNitro CTO Sheldon Monteiro’s description of their in-house training program to grow their own chief marketing technology officers – and in particular his response to the objection that people they train might then leave: “What if we don’t train them and they stay?”
The next edition of MarTech is already planned for March 31 – April 1 in San Francisco, presumably to be followed by another Boston edition next year. I’m sure they’ll make the obvious extensions like more tracks and pre/post-conference intensive trainings. But why stop there? This is basically summer camp for marketing tech geeks, so I’ve already suggested to Scott that he add audience participation including:
- role-playing: if marketers ran tech and techies ran marketing; if buyers acted like vendors and vendors acted like buyers
- TV show knockoffs: CMTO Shark Tank business plans for marketing technology investments; The Vendor Selection Dating Game; Martech Recruiting Bachelorette; CSI MarTech Unit analyzing project failures; and of course Survivor: CMTO
- board games: CMTO versions of Monopoly, Snakes and Ladders, and Dungeons and Dragons
- scavenger hunt: find the best short list of products via Web research in a fixed period of time, without ever talking to a salesperson
- camp fire stories: vendors share their scariest client experiences, while wearing paper bag masks to protect their jobs
- tall tale telling contest: who can make the most ludicrous claim with a straight face (note: separate divisions for buyers and vendors).
- not to mention a hackathon, talent show, and karaoke.
Seriously, Scott – who would not pay good money to attend?
Thursday, August 14, 2014
Lots of Vendors Can Help You Find Leads on the Web
Few people would suggest you learn salesmanship from the play Glengarry Glen Ross,* but its central message rings true: good leads are the lifeblood of a sales organization.** That’s why scanning the Internet to find new prospects is such an exciting opportunity. At least a dozen firms are now following that path.
These firms scan company Web sites, social media, news sites, directories, and other sources to identify companies, extract attributes like revenue, growth rates, and technologies used, and flag events that might indicate a sales opportunity, such as opening a new office, launching a new product, or hiring new management. Of course, there are plenty of important differences that affect which vendor might make sense for you. Some of the more important ones include:
• Specific data sources, scanning techniques, and analytical methods. Evaluating these in the abstract is interesting, but what works well for one purpose in one industry might work poorly for something else. So buyers really need to run their own tests to see what works for them.
• Types of predictive models available. Some vendors only rank leads while others build multiple models for different purposes.
• Use of the client's internal data for model scoring, and whether this extends to sources beyond CRM.
• Whether the vendor sells prospect lists or only enhances names provided by the client.
• Whether the vendor provides lists of individuals as well as companies. Since Web scanning is usually at the company level, the individual names usually come from other sources.
• Coverage outside the United States.
• Information returned beyond names and lead scores, such as recommended treatments and social profiles.
• Whether the company maintains a permanent database on all businesses or only scans when clients request information about specified businesses or segments. The permanent database costs more to maintain but stores history and trend information that is otherwise unavailable.
Here are brief profiles of the vendors I’ve identified in or near this space. There are probably others. I’ve grouped them based on how much information I have available. This correlates to some degree with market presence.
Vendors I’ve Reviewed
• Mintigo both returns new prospects and applies scores to prospect lists provided by the client. It is currently stressing uses of predictive modeling beyond traditional lead scoring and making it easier for clients to set up new models on their own. I last reviewed them in June 2013.
• Lattice Engines runs different types of models against names provided by the client. It provides recommendations for customer treatments in addition to scores. I wrote about them in April 2013.
• Infer runs multiple models against leads provided by the client. It originally returned only lead scores, although they are now adding multiple applications that create different scores for different purposes. I wrote about them in August 2013.
• Fliptop returns scores and some summary data on names provided by the client. It stresses quick model building. I reviewed them in June 2014.
• LeadSpace scans for data on demand, rather than maintaining its own master database. It can find new prospects in specified segments and enhance names provided by the client. It returns individual names as well as companies. I wrote about them in June 2013.
Vendors I’ve Spoken with But Not Reviewed
• Growth Intelligence is a relatively recent UK-based startup that provides lists of companies and associated contacts that are likely to become customers. It draws from Web information, government lists, and similarity to the client’s current customer base.
• Kemvi is just emerging from stealth and plans to launch formally late this year or early 2015. It expects to focus on finding trigger events and advising salespeople about the best ways to approach each prospect.
• 6Sense finds new prospects using behavioral data gathered from a network of "several thousand" Web publishers rather than scanning public sources like the others on this list. So it doesn’t quite belong here, but it’s interesting nevertheless.
• Radius finds small business prospects that resemble current customers and deploys them to Salesforce.com, along with key profile information and lead scores.
Vendors I’ve Only Seen on the Web
• Avention (formerly OneSource, now part of D&B Hoovers) scans an eclectic collection of data sources to find prospect companies based on attributes and signals. It can rank companies with scoring but the scoring formulas are built manually.
• Gagein sends alerts on trigger events in media, social, or public Web sites. It can track companies named by the client or build prospect lists for client-specified segments. It’s primarily a sales tool, with other features such as social selling and apparently without any predictive modeling.
• RealSociable is another sales-oriented product that tracks social media for trigger events related to target accounts. It appears to let users decide which events are important without using predictive models. But it seems to have some clever technology to extract the trigger events from unstructured social streams. That (presumed) semantic filtering is the only reason to include it on this list -- otherwise, its restriction to social sources and lack of predictive models would rule it out.
_______________________________________________________________________
*and the one person who admitted to it now makes his living as an arts critic.
** Also, coffee really is for closers.
Wednesday, August 06, 2014
The Biggest Gap in Marketing Software Selection Isn't Product Information
There’s a reason I’m not a professional copywriter, which is that I’m bad at it. But, as with the press release I described yesterday, each new edition of the VEST report also requires me to write a promotional email for my house list. My solution today was:
Dear [First Name],
A friend of mine who is building one of those "wisdom of the crowd" software review sites tells me her research shows that what buyers want most is "apples to apples" comparisons of product features.
Duh.
I'll spare you my rant on why crowd-sourced recommendations are a bad idea (hint: when you're sick, do you go to a doctor or ask a bunch of random strangers which treatments worked for them?). Suffice it to say that Raab Associates' B2B Marketing Automation Vendor Selection Tool (VEST) is written by a professional analyst (me) who has assembled 200 rigorously defined points of comparison on 25 marketing automation systems, allowing buyers to quickly find vendors who meet their needs. At a time when the marketing automation industry is more confusing than ever -- and when 25% of marketing automation buyers are unhappy with their results* -- it's essential to have solid, detailed information to make a sound decision.
That’s not terrible, at least by my pitifully low personal standards. But it did leave me feeling a bit uncomfortable about bashing the crowd-sourced software review sites. The problem was the doctor analogy: although it does express my fundamental objection accurately, it doesn’t tell quite the whole story. It’s true that random strangers can’t accurately diagnose you or prescribe a treatment. But random strangers can indeed provide useful information about whether a doctor is good to deal with and how well their recommendations worked out. Similarly, crowd-sourced sites can provide valid information on how easy it is to use a piece of software and how well the company does at customer support. This is much closer to the kind of information you’d get from consumer review sites like Yelp. Customers don’t need to be technical experts to tell you whether they’re happy.
You’ll note that I haven’t criticized the crowd-sourced sites for the typical review site problems of fake reviews and biased reviewers. Companies like g2crowd (my main point of reference here, although not my friend’s business) do a reasonably good job at controlling for these by requiring users to verify their identity by logging in through LinkedIn. Of course, smart vendors will still game the system by encouraging satisfied users to post reviews, so the relative rankings will reflect the vendors’ marketing skills at least as much as their actual product quality. There’s nothing unethical about that, but it does undermine the notion that the resulting rankings accurately reflect the relative quality of the products and not just the relative skills of each company’s marketers.
On the other hand, good crowd sourcing sites let users see reviews from companies similar to their own in terms of size, industry, etc., and ask specific enough questions to get meaningful answers. And even the general comments they gather are somewhat useful as indicators of what a (highly biased) sample of users think.
But, now that I’m on the subject, I’ll let you know what I really think: which is that feature comparisons, whether prepared by a not-so-wise crowd or a professional analyst like Yours Truly, are not really the problem. What stops marketers from choosing the right software isn’t a lack of information about product features. It's a lack of understanding which features each marketer needs. Figuring out their feature needs requires crossing the gap between their business objectives, which most marketers do understand, and the features needed to support those objectives, which most marketers do not. Making that translation is where industry experts really add value, even more than in familiarity with the details of individual products. I’ve recently been working on a very interesting project to close that gap…but that’s a topic for another day.
Tuesday, August 05, 2014
VEST Report: Analytics Tops List of Upgraded Marketing Automation Features
I finished the latest release of the B2B Marketing Automation Vendor Selection Tool (VEST) yesterday, which is always a great relief. But the elation lasted about two minutes, since I then had to write a press release announcing it. The challenge with that is you need a “news hook”, meaning something that gives reporters a reason to write about your story. For the January release, that’s always easy, since I have a new estimate of industry revenues and the press loves that sort of thing. But I can’t repeat that for the mid-year release. That meant I had to dive back into the VEST data and find something interesting to say about it.
Of course, that isn’t all bad, since rolling around in industry data makes me as happy as a pig in mud.* But finding clever insights on demand is still tough. Happily, I did find something intriguing, at least to my obviously-biased eyes. You can read the headline in the press release or – lucky you – get even more details below.
What I did for my analysis was look at changes in vendor scores for the 200 items that go into the VEST data. That gives an interesting view of where vendors are improving their products. I had no particular expectation of what I’d find. But when I looked at the most common items (those which had been upgraded by three or more vendors), it immediately became clear that changes related to analytics were heavily represented. In fact, if you count lead scoring and content testing as part of analytics, seven of the dozen items fell into that category. Who knew?
Looking deeper, I expanded my analysis to include items upgraded by two or more vendors, which included 43 of the 200 total. By golly, the results were similar – 19 of the items fell into analytics, compared with just four each in the next most common groups (campaign management, content marketing, and CRM integration). Houston, we have a pattern.
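For anyone curious about the mechanics, the tabulation itself is simple. Here is a hedged sketch of the kind of counting involved; the item names, categories, and counts below are invented placeholders rather than actual VEST data.

```python
from collections import Counter

# Each entry: (upgraded item, its category, number of vendors that upgraded it).
# These rows are illustrative placeholders, not the real VEST items.
upgrades = [
    ("revenue attribution reports", "analytics", 4),
    ("predictive lead scoring", "analytics", 3),
    ("content A/B testing", "analytics", 3),
    ("multi-step campaign branching", "campaign management", 2),
    ("native CRM sync", "CRM integration", 2),
]

def category_counts(min_vendors):
    """Count categories among items upgraded by at least min_vendors vendors."""
    return Counter(category for _, category, n in upgrades if n >= min_vendors)

print(category_counts(3))  # items upgraded by three or more vendors
print(category_counts(2))  # items upgraded by two or more vendors
```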
As I say, this result was totally unexpected, but it can still be explained with 20/20 hindsight. I might have expected more development of features for social, mobile, and content marketing, which are top-of-mind for many marketers today. But social and content marketing are mostly managed outside of marketing automation and mobile is mostly limited to ensuring messages are viewable on mobile devices. By contrast, analytics is something most marketers do want from their marketing automation system and an area where great improvements are still possible. So a clear-eyed understanding of how marketing automation is actually used, as opposed to what people are talking about, would have predicted analytics as the focus of vendor attention.
Needless to say, this analysis is really just a byproduct of the primary purpose of the VEST, which is to assemble apples-to-apples comparisons of B2B marketing automation vendors so that buyers have an easier time finding the right system. I’ll probably circle back and write a bit more about the latest data in another post. In the meantime, if you’re actually in the process of making a purchase, or just want to understand the industry better, you can buy your very own copy at the Raab Guide Web site.
_______________________________________________________________________________
* Does anyone know whether pigs really like to roll in mud? It’s a great cliché and all, but I am not a farm boy.