Summary: The first two posts in this series described my scoring for product fit. The third and final post describes scoring for vendor strength. And I'll give a little preview of the charts these scores produce...without product names attached.
Beyond assessing a vendor's current product, buyers also want to understand the current and future market position of the vendor itself. I had much less data to work with for vendor strength, and there were many fewer conceptual issues. From a buyer’s perspective, the big questions about vendors are whether they’ll remain in business, whether they’ll continue to support and update the product, and whether they understand the needs of customers like me.
As with product fit, I used different weights for different types of buyers. As you'll see below, the bulk of the weight was assigned to concentration within each market. This reflects the fact that buyers really do want vendors who have experience with similar companies. Specific rationales are in the table. I converted the entries to the standard 0-2 scale and originally required the weights to add to 100. This changed when I added negative scoring to sharpen distinctions among vendor groups.
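To make the mechanics concrete, here is a minimal sketch of the weighted scoring just described: each vendor attribute is rated on the 0-2 scale and multiplied by a per-buyer-type weight, with a negative weight sharpening the distinction. The attribute names and all numbers are invented for illustration, not taken from the actual weight tables.

```python
# Hypothetical weight set for small-business buyers. The bulk of the weight
# goes to concentration in the buyer's own market, as described above;
# the negative entry penalizes an enterprise-heavy client base.
weights_small_buyer = {
    "small_client_concentration": 40,
    "company_size": 10,
    "financial_resources": 10,
    "enterprise_client_concentration": -15,  # negative weight sharpens distinctions
}

# One vendor's ratings on the standard 0-2 scale (0 = weak, 1 = moderate, 2 = strong).
vendor_ratings = {
    "small_client_concentration": 2,
    "company_size": 1,
    "financial_resources": 1,
    "enterprise_client_concentration": 0,
}

# The vendor-strength score is just the weighted sum of the ratings.
score = sum(weights_small_buyer[k] * vendor_ratings[k] for k in weights_small_buyer)
print(score)  # 2*40 + 1*10 + 1*10 + 0*(-15) = 100
```

With negative weights in play, the same vendor rated 2 on enterprise concentration would drop to 70, which is exactly the kind of separation between vendor groups the adjustment was meant to produce.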
These weights produced a reasonable set of vendor group scores – small vendors scored best for small buyers, mixed and special vendors scored best for mid-size buyers, and big vendors scored best for big buyers. QED.
I should stress that all the score development I've described in these posts was done by looking at the vendor groups, not at individual vendors. (Well, maybe I peeked a little.) The acid test is when the individual vendor scores are plotted -- are different kinds of vendors pretty much where expected, without each category being so tightly clustered together that there's no meaningful differentiation?
The charts below show the results, without revealing specific vendor names. Instead, I've color-coded the points (each representing one vendor) using the same categories as before: green for small business vendors, black for mixed vendors, violet for specialists, and blue for big company vendors.
As you can see, the blue and green dots do dominate the upper right quadrants of their respective charts. The other colors are distributed in intriguing positions that will be very interesting indeed once names are attached. This should happen in early to mid January, once I finish packaging the data into a proper report. Stay tuned, and in the meantime have a Happy New Year.
This is the blog of David M. Raab, marketing technology consultant and analyst. Mr. Raab is founder and CEO of the Customer Data Platform Institute and Principal at Raab Associates Inc. All opinions here are his own. The blog is named for the Customer Experience Matrix, a tool to visualize marketing and operational interactions between a company and its customers.
Wednesday, December 29, 2010
Tuesday, December 28, 2010
Ranking B2B Marketing Automation Vendors: Part 2
Summary: Yesterday's post described the objectives of my product fit scores for B2B marketing automation vendors and how I set up the original weighting for individual elements. But the original set of scores seemed to favor more complex products, even for small business marketers. Here's how I addressed the problem.
Having decided that my weights needed adjusting, I wanted an independent assessment of which features were most appropriate for each type of buyer. I decided I could base this on the features each set of vendors provided. The only necessary assumption is that vendors offer the features that their target buyers need most. That seems like a reasonable premise -- or at least, more reliable than just applying my own opinions.
For this analysis, I first calculated the average score for each feature in each vendor group. Remember that I was working with a matrix of 150+ features for each vendor, each scored from 0 to 2 (0=not provided, 1=partly provided, 2=fully provided). A higher average means that more vendors provide the feature.
I then sorted the feature list based on average scores for the small business vendors. This put the least common small business features at the top and the most common at the bottom. I divided the list into six roughly-equal sized segments, representing feature groups that ranged from rare to very common. The final two segments both contained features shared by all small business vendors. One segment had features that were also shared by all big business vendors; the other had features that big business vendors didn't share. Finally, I calculated an average score for the big business vendors for each of the six groups.
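As a rough sketch of that average-sort-segment procedure, with invented feature names and scores (and only six features instead of 150+):

```python
# Hypothetical sketch: average each feature's 0-2 scores within a vendor
# group, sort features from least to most common, then split the sorted
# list into six roughly equal segments. All names and numbers are invented.
small_vendor_scores = {  # feature -> scores across small-business vendors
    "revenue_attribution": [0, 0, 1],
    "lead_scoring":        [1, 1, 2],
    "landing_pages":       [2, 1, 2],
    "drag_drop_email":     [2, 2, 2],
    "contact_list":        [2, 2, 2],
    "email_send":          [2, 2, 2],
}

# A higher average means more vendors in the group provide the feature.
averages = {f: sum(s) / len(s) for f, s in small_vendor_scores.items()}

# Least common small-business features first, most common last.
ordered = sorted(averages, key=averages.get)

# Divide into six roughly equal segments, from rare to very common.
n_segments = 6
size = max(1, len(ordered) // n_segments)
segments = [ordered[i:i + size] for i in range(0, len(ordered), size)]
print(segments[0])  # the rarest small-business feature(s)
```

The same averages can then be computed for the big-business vendors within each segment, which is the comparison the next paragraph describes.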
What I found, not surprisingly, was that some features are more common in big-company systems, some are in all types of systems, and a few are concentrated among small-company systems. In each group, the intermediate vendors (mixed and special) had scores between the small and large vendor scores. This is additional confirmation that the groupings reflect a realistic ranking by buyer needs (or, at least, the vendors’ collective judgment of those needs).
The next step was to see whether my judgment matched the vendors’. Using the same feature groups, I calculated the aggregate weights I had already assigned to those features for each buyer type. Sure enough, the big business features had the highest weights in the big business set, and the small business weights got relatively larger as you moved towards the small business features. The mid-size weights were somewhere in between, exactly where they should have been. Hooray for me!
Self-congratulation aside, we now have firmer ground for adjusting the weights to distinguish systems for different types of buyers. Remember, the small business scores in particular weren’t very different for the different vendor groups, and actually gave higher scores to big business vendors once you removed the adjustment for price. (As you may have guessed, most features in the “more small” group are price-related – proving, as if proof were necessary, that small businesses are very price sensitive.)
From here, the technical solution is quite obvious: assign negative weights to big business features in the small business weight set. This recognizes that unnecessary features actually reduce the value of a system by making it harder to use. The caveat is that different users need different features. But that's why we have different weight sets in the first place.
(As an aside, it’s worth exploring why only assigning lower weights to the unnecessary features won’t suffice. Start with the fact that even a low weight increases rather than reduces a product score, so products with more features will always have a higher total. This is a fundamental problem with many feature-based scoring systems. In theory, assigning higher weights to other, more relevant factors might overcome this, but only if those features are more common among the simpler systems. In practice, most of the reassigned points will go to basic features which are present in all systems. This means the advanced systems get points for all the simple features plus the advanced features, while simple systems get points for the simple features only. So the advanced systems still win. That's just what happened with my original scores.)
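A toy example makes the aside concrete. The numbers and feature names below are invented; the point is only that with positive-only weights the feature-rich product always scores higher, however low the weights on the unneeded features, while a negative weight reverses the ranking.

```python
# Two hypothetical products on the 0-2 scale: "simple" provides only the
# basic features, "advanced" provides everything.
basic = {"email_send": 2, "contact_list": 2}
advanced_only = {"revenue_attribution": 2, "api_access": 2}

simple_product = {**basic, **{f: 0 for f in advanced_only}}
advanced_product = {**basic, **advanced_only}

def score(product, weights):
    """Weighted sum of 0-2 feature scores."""
    return sum(weights[f] * product[f] for f in weights)

# Positive-only weights: even weighted at a token 5, the advanced features
# can only add points, so the advanced product still wins (180 vs 160).
low_positive = {"email_send": 40, "contact_list": 40,
                "revenue_attribution": 5, "api_access": 5}
assert score(advanced_product, low_positive) > score(simple_product, low_positive)

# Negative weights on the unneeded features flip the ranking for this
# buyer type (simple 180 vs advanced 140).
with_negative = {"email_send": 45, "contact_list": 45,
                 "revenue_attribution": -10, "api_access": -10}
assert score(simple_product, with_negative) > score(advanced_product, with_negative)
```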
Fortified with this evidence, I revisited my small business scoring and applied negative weights to items I felt were important only to large businesses. I applied similar but less severe adjustments to the mid-size weight set. The mid-size weights were in some ways a harder set of choices, since some big-company features do add value for mid-size firms. Although I worked without looking at the feature groups, the negative scores were indeed concentrated among the features in the large business groups:
I used the adjusted weights to create new product fit scores. These now show much more reasonable relationships across the vendor groups: that is, each vendor group has the highest scores for its primary buyer type and there’s a big difference between small and big business vendors. Hooray for me, again.
One caveat is that negative scores mean that weights in each set no longer add to 100%. This means that scores from different weight sets (i.e., reading down the chart) are no longer directly comparable. There are technical ways to solve this, but it's not worth the trouble for this particular project.
Tomorrow I'll describe the vendor fit scores. Mercifully, they are much simpler.
Monday, December 27, 2010
Ranking B2B Marketing Automation Vendors: How I Built My Scores (part 1)
Summary: The first of three posts describing my new scoring system for B2B marketing automation vendors.
I’ve finally had time to work up the vendor scores based on the 150+ RFP questions I distributed back in September. The result will be one of those industry landscape charts that analysts seem pretty much obliged to produce. I have never liked those charts because so many buyers consider only the handful of anointed “leaders”, even though one of the less popular vendors might actually be a better fit. This happens no matter how loudly analysts warn buyers not to make that mistake.
On the other hand, such charts are immensely popular. Recognizing that buyers will use the chart to select products no matter what I tell them, I settled on dimensions that are directly related to the purchase process:
- product fit, which assesses how well a product matches buyer needs. This is a combination of features, usability, technology, and price.
- vendor strength, which assesses a vendor’s current and future business position. This is a combination of company size, client base, and financial resources.
These are conceptually quite different from the dimensions used in the Gartner and Forrester reports*, which are designed to illustrate competitive position. But I’m perfectly aware that only readers of this blog will recognize the distinction. So I've also decided to create three versions of the chart, each tailored to the needs of different types of buyers.
In the interest of simplicity, my three charts will address marketers at small, medium and big companies. The labels are really short-hand for the relative sophistication and complexity of user requirements. But if I explicitly used a scale from simple to sophisticated, no one would ever admit that their needs were simple -- even to themselves. I'm hoping the relatively neutral labels will encourage people to be more realistic. In practice, we all know that some small companies are very sophisticated marketers and some big companies are not. I can only hope that buyers will judge for themselves which category is most appropriate.
The trick to producing three different rankings from the same set of data is to produce three sets of weights for the different elements. Raab Associates’ primary business for the past two decades has been selecting systems, so we have a well-defined methodology for vendor scoring.
Our approach is to first set the weights for major categories and then allocate weights within those categories. The key is that the weights must add to 100%. This forces trade-offs first among the major categories and then among factors within each category. Without the 100% limit, two things happen:
- everything is listed as high priority. We consistently found that if you ask people to rate features as "must have", "desirable", and "not needed", 95% of requirements are rated as “must have”. From a prioritization standpoint, that's effectively useless.
- categories with many factors are overweighted. What happens is that each factor gets at least one point, giving the category a high aggregate total. For example, a category with five factors has a weight of at least five, while a category with 20 factors has a weight of 20 or more.
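The fix described above can be sketched in a few lines. The category names and numbers here are invented for illustration: the first half shows how per-factor minimum points let a big category win by sheer factor count, and the second shows category weights fixed first and then allocated within each category.

```python
# Unconstrained scoring: every factor gets at least one point, so a
# 20-factor category automatically outweighs a 5-factor one.
unconstrained = {"campaigns": [1] * 20, "pricing": [1] * 5}
totals = {cat: sum(pts) for cat, pts in unconstrained.items()}
print(totals)  # campaigns dominates, 20 to 5, purely by factor count

# The 100% approach: fix the category weights first (all categories must
# sum to 100), then divide each category's weight among its factors, so
# category importance is a deliberate trade-off rather than an accident.
category_weights = {"campaigns": 40, "pricing": 20}  # sums to 100 with other categories
factor_counts = {"campaigns": 20, "pricing": 5}
factor_weights = {
    cat: [category_weights[cat] / n] * n
    for cat, n in factor_counts.items()
}
print(round(factor_weights["pricing"][0], 1))  # each pricing factor gets 4.0
print(round(factor_weights["campaigns"][0], 1))  # each campaign factor gets 2.0
```

Note how the constraint reverses the per-factor picture: each pricing factor now carries twice the weight of each campaign factor, even though the campaigns category as a whole still matters more.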
The following table shows the major weights I assigned. The heaviest weight goes to lead generation and nurturing campaigns – a combined 40% across all buyer types. I weighted pricing much more heavily for small firms, and gave technology and lead scoring heavier weights at larger firms. You’ll notice that Vendor is weighted at zero in all cases: remember that these are weights for product fitness scores. Vendor strength will be scored on a separate dimension.
I think these weights are reasonable representations of how buyers think in the different categories. But they’re ultimately just my opinion. So I also created a reality check by looking at vendors who target the different buyer types.
This was possible because the matrix asked vendors to describe their percentage of clients in small, medium and large businesses. (The ranges were under $20 million, $20 million to $500 million, and over $500 million annual revenue.) Grouping vendors with similar percentages of small clients yielded the following sets:
- small business (60% or more small business clients): Infusionsoft, OfficeAutoPilot, TrueInfluence
- mixed (33-66% small business clients): Pardot, Marketo, Eloqua, Manticore Technology, Silverpop, Genius
- specialists (15%-33% small business): LeadFormix, TreeHouse Interactive, SalesFUSION
- big clients (fewer than 15% small business): Marketbright, Neolane, Aprimo On Demand
(I also have data from LoopFuse, Net Results, and HubSpot, but didn’t have the client distribution for the first two. I excluded HubSpot because it is a fundamentally different product.)
If my weights were reasonable, two things should happen:
- vendors specializing in each client type should have the highest scores for that client type (that is, small business vendors have higher scores than big business vendors using the small business weights.)
- vendors should have their highest scores for their primary client type (that is, small business vendors should have higher scores with small business weights than with big business weights).
As the table below shows, that is pretty much what happened:
So far so good. But how did I know I’d assigned the right weights to the right features?
I was particularly worried about the small business weights. These showed a relatively small difference in scores across the different vendor groups. In addition, I knew I had weighted price heavily. In fact, it turned out that if I took price out of consideration, the other vendor groups would actually have higher scores than the small business specialists. This couldn't be right: the other systems are really too complicated for small business users, regardless of price.
Clearly some adjustments were necessary. I'll describe how I handled this in tomorrow's post.
_______________________________________________________
* “ability to execute” and “completeness of vision” for Gartner, “current offering”, “market presence” and “strategy” for Forrester.
Wednesday, December 22, 2010
Teradata Buys Aprimo for $525 Million: More Marketing Automation Consolidation To Come
Summary: Teradata's acquisition of Aprimo takes the largest remaining independent marketing automation vendor off the market. The market will probably split between enterprise-wide suites and more limited marketing automation systems.
Teradata announced today that it is acquiring marketing automation vendor Aprimo for a very hefty $525 million – even more than the $480 million that IBM paid for somewhat larger Unica in August.
Given the previous Unica deal, other recent marketing system acquisitions, and wide knowledge that Aprimo was eager to sell, no one is particularly surprised by this transaction. Teradata is a logical buyer, having a complementary campaign management system but lacking Aprimo’s marketing resource management, cloud-based technology and strong B2B client base (although Aprimo has stressed to me more than once that 60% of their revenue is from B2C clients).
This is obviously a huge decision for Teradata, a $1.7 billion company compared with IBM’s $100 billion in revenue. It stakes a claim to a piece of the emerging market for enterprise-wide marketing systems, the same turf targeted in recent deals by IBM, Oracle, Adobe and Infor (and SAS and SAP although they haven’t made major acquisitions).
This enterprise market is probably going to evolve into something distinct from traditional “marketing automation”. The difference: marketing automation is focused on batch and interactive campaign management but just touches slightly on advertising, marketing resource management and analytics. The enterprise market involves unified systems sold at the CEO, CFO, CIO and CMO levels, whereas marketing automation has been sold largely to email and Web marketers within marketing departments.
The existence of C-level buyers for marketing systems is not yet proven, and I remain a bit of a skeptic. But many smart people are betting a lot of money that it will appear, and will spend more money to make it happen. Aprimo is probably the vendor best positioned to benefit because its MRM systems inherently work across an entire marketing department (although I’m sure many Aprimo deployments are more limited). So, in that sense at least, Teradata has positioned itself particularly well to take advantage of the new trend. And if IBM and Oracle want to invest in developing that market so that Teradata can benefit, so much the better for Teradata.
That said, there's still some question whether Teradata can really benefit if this market takes off. Aprimo adds a great deal of capability, but the combined company still lacks the strong Web analytics and BI applications of its main competitors. A closer alliance with SAS might fill that gap nicely...and acquisition or merger between the two firms is perfectly conceivable, at least superficially. Lack of professional services is perhaps less an issue since it makes Teradata a more attractive partner to the large consulting firms (Accenture, CapGemini, etc.) who already use its tools and must be increasingly nervous about competition from IBM’s services group.
The other group closely watching these deals are the remaining marketing automation vendors themselves. Many would no doubt be delighted to sell at such prices. But, as Eloqua’s Joe Payne points out in his own comment on the Aprimo deal, the remaining vendors are all much smaller: while Unica and Aprimo each had around $100 million revenue, Eloqua and Alterian are around $50 million, Neolane and SmartFocus are $20-$30 million, and Marketo said recently it expects nearly $15 million in 2010. I doubt any of the others reach $10 million. (This excludes email companies like ExactTarget, Responsys and Silverpop [which does have a marketing automation component].) Moreover, the existing firms skew heavily to B2B clients and smaller companies, which are not the primary clients targeted by big enterprise systems vendors.
That said, I do expect continued acquisitions within this space. I’d be surprised to see the 4-5x revenue price levels of the Unica and Aprimo deals, but even lower valuations would be attractive to owners and investors facing increasingly cut-throat competition. As I’ve written many times before, the long-term trend will be for larger CRM and Web marketing suites to incorporate marketing automation functions, making stand-alone marketing automation less competitive. Survivors will offer features for particular industries or specialized functions that justify purchase outside of the corporate standard. And the real money will be made by service vendors who can help marketers fully benefit from these systems.
Teradata announced today that is acquiring marketing automation vendor Aprimo for a very hefty $525 million – even more than the $480 million that IBM paid for somewhat larger Unica in August.
Given the previous Unica deal. other recent marketing system acquisitions, and wide knowledge that Aprimo was eager to sell, no one is particularly surprised by this transaction. Teradata is a logical buyer, having a complementary campaign management system but lacking Aprimo’s marketing resource management, cloud-based technology and strong B2B client base (although Aprimo has stressed to me more than once that 60% of their revenue is from B2C clients).
This is obviously a huge decision for Teradata, a $1.7 billion company compared with IBM’s $100 billion in revenue. It stakes a claim to a piece of the emerging market for enterprise-wide marketing systems, the same turf targeted in recent deals by IBM, Oracle, Adobe and Infor (and SAS and SAP although they haven’t made major acquisitions).
This enterprise market is probably going to evolve into something distinct from traditional “marketing automation”. The difference: marketing automation is focused on batch and interactive campaign management but just touches slightly on advertising, marketing resource management and analytics. The enterprise market involves unified systems sold at the CEO, CFO, CIO and CMO levels, whereas marketing automation has been sold largely to email and Web marketers within marketing departments.
The existence of C-level buyers for marketing systems is not yet proven, and I remain a bit of a skeptic. But many smart people are betting a lot of money that it will appear, and will spend more money to make it happen. Aprimo is probably the vendor best positioned to benefit because its MRM systems inherently work across an entire marketing department (although I’m sure many Aprimo deployments are more limited). So, in that sense at least, Teradata has positioned itself particularly well to take advantage of the new trend. And if IBM and Oracle want to invest in developing that market so that Teradata can benefit, so much the better for Teradata.
That said, there's still some question whether Teradata can really benefit if this market takes off. Aprimo adds a great deal of capability, but the combined company still lacks the strong Web analytics and BI applications of its main competitors. A closer alliance with SAS might fill that gap nicely...and acquisition or merger between the two firms is perfectly conceivable, at least superficially. Lack of professional services is perhaps less an issue since it makes Teradata a more attractive partner to the large consulting firms (Accenture, CapGemini, etc.) who already use its tools and must be increasingly nervous about competition from IBM’s services group.
The other group closely watching these deals is the remaining marketing automation vendors themselves. Many would no doubt be delighted to sell at such prices. But, as Eloqua’s Joe Payne points out in his own comment on the Aprimo deal, the remaining vendors are all much smaller: while Unica and Aprimo each had around $100 million revenue, Eloqua and Alterian are around $50 million, Neolane and SmartFocus are $20-$30 million, and Marketo said recently it expects nearly $15 million in 2010. I doubt any of the others reach $10 million. (This excludes email companies like ExactTarget, Responsys and Silverpop [which does have a marketing automation component].) Moreover, the existing firms skew heavily to B2B clients and smaller companies, which are not the primary clients targeted by big enterprise systems vendors.
That said, I do expect continued acquisitions within this space. I’d be surprised to see the 4-5x revenue price levels of the Unica and Aprimo deals, but even lower valuations would be attractive to owners and investors facing increasingly cut-throat competition. As I’ve written many times before, the long-term trend will be for larger CRM and Web marketing suites to incorporate marketing automation functions, making stand-alone marketing automation less competitive. Survivors will offer features for particular industries or specialized functions that justify purchase outside of the corporate standard. And the real money will be made by service vendors who can help marketers fully benefit from these systems.
Sunday, December 12, 2010
Predictions for B2B Marketing in 2011
I don't usually bother with the traditional "predictions for next year" piece at this time of year. But I happened to write one in response to a question at the Focus online community last week. So I figured I'd share it here as well.
Summary: 2011 will see continued adjustment as B2B lead generators experiment with the opportunities provided by new media.
1. Marketing automation hits an inflection point, or maybe two. Mainstream B2B marketers will purchase marketing automation systems in large numbers, having finally heard about it often enough to believe it's worthwhile. But many buyers will be following the herd without understanding why, and as a result will not invest in the training, program development and process change necessary for success. This will eventually lead to a backlash against marketing automation, although that might not happen until after 2011.
2. Training and support will be critical success factors. Whether or not they use marketing automation systems, marketers will increasingly rely on external training, consultants and agencies to help them take advantage of the new possibilities opened by changes in media and buying patterns. Companies that aggressively seek help in improving their skills will succeed; those who try to learn everything for themselves by trial-and-error will increasingly fall behind the industry. Marketing automation vendors will move beyond current efforts at generic industry education to provide one-on-one assistance to their clients via their own staff, partners, and built-in system features that automatically review client work, recommend changes and sometimes implement them automatically. (Current examples: Hubspot's Web site grader for SEO, Omniture Test & Target for landing page optimization, Google AdWords for keyword and copy testing.)
3. Integration will be the new mantra. Marketers will struggle to incorporate an ever-expanding array of online marketing options: not just Web sites and email, but social, mobile, location-based, game-based, app-based, video-based, and perhaps even base-based. Growing complexity will lead them to seek integrated solutions that provide a unified dashboard to view and manage all these media. Vendors will scramble to fill this need. Competitors will include existing marketing automation and CRM systems seeking to use their existing functions as a base, and entirely new systems that provide a consistent interface to access many different products transparently via their APIs.
4. SMB systems will lead the way. Systems built for small businesses will set the standard for ease of use, integration, automation and feedback. Lessons learned from these systems will be applied by their developers and observant competitors to help marketers at larger companies as well. But enterprise marketers have additional needs related to scalability, content sharing and user rights management, which SMB systems are not designed to address. Selling to enterprises is also very different from selling to SMBs. So the SMB vendors themselves won't necessarily succeed at moving upwards to larger clients.
5. Social marketing inches forward. Did you really think I'd talk about trends without mentioning social media? Marketers in 2011 will still be confused about how to make best use of the many opportunities presented by social media. Better tools will emerge to simplify and integrate social monitoring, response and value measurement. Like most new channels, social will at first be treated as a separate specialty. But advanced firms will increasingly see it as one of many channels to be managed, measured and eventually integrated with the rest of their marketing programs. Social extensions to traditional marketing automation systems will make this easier.
6. The content explosion implodes: marketers will rein in runaway content generation by adopting a more systematic approach to understanding the types of content needed for different customer personas at different stages in the buying cycle. Content management and delivery systems will be mapped against these persona/stage models to simplify delivery of the right content in the right situation. Marketers will develop small, reusable content "bites" that can be assembled into custom messages, thereby both reducing the need for new content and enabling more appropriate customer treatments. Marketers will also be increasingly insistent on measuring the impact of their messages, so they can use the results to improve the quality of their messages and targeting. Since this measurement will draw on data from multiple systems, including sales and Web behaviors, it will occur in measurement systems that are outside the delivery systems themselves.
7. Last call for last click attribution: marketers will seriously address the need to show the relationship between their efforts and revenue. This will force them to abandon last-click attribution in favor of methods that address the impact of all treatments delivered to each lead. Different vendors and analysts will propose different techniques to do this, but no single standard will emerge before the end of 2011.
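To make that last contrast concrete, here's a minimal sketch in Python (with invented touch data, not any vendor's actual method) comparing last-click attribution with the simplest alternative, splitting credit equally across every touch on a lead:

```python
# Hypothetical touch history: each lead's ordered list of channels,
# plus the revenue eventually attributed to that lead.
leads = [
    (["email", "webinar", "search"], 1000.0),
    (["search", "email"], 500.0),
    (["webinar"], 250.0),
]

def last_click(leads):
    """All revenue goes to the final touch before conversion."""
    credit = {}
    for touches, revenue in leads:
        credit[touches[-1]] = credit.get(touches[-1], 0.0) + revenue
    return credit

def equal_weight(leads):
    """Revenue is split evenly across every touch on the lead."""
    credit = {}
    for touches, revenue in leads:
        share = revenue / len(touches)
        for channel in touches:
            credit[channel] = credit.get(channel, 0.0) + share
    return credit

print(last_click(leads))    # only the final touch earns credit
print(equal_weight(leads))  # earlier touches share the revenue
```

Even this toy example shows why the choice matters: the two methods rank the same channels very differently, which is exactly why no single standard will settle the argument quickly.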
Wednesday, December 08, 2010
Case Study: Using a Scenario to Select Business Intelligence Software
Summary: Testing products against a scenario is critical to making a sound selection. But the scenario has to reflect your own requirements. While this post shows results from one test, rankings could be very different for someone else.
I’m forever telling people that the only reliable way to select software is to devise scenarios and test the candidate products against them. I recently went through that process for a client and thought I’d share the results.
1. Define Requirements. In this particular case, the requirements were quite clear: the client had a number of workers who needed a data visualization tool to improve their presentations. These were smart but not particularly technical people and they only did a couple of presentations each month. This meant the tool had to be extremely easy to use, because the workers wouldn’t find time for extensive training and, being just occasional users, would quickly forget most of what they had learned. They also wanted to do some light ad hoc analysis within the tool, but just on small, summary data sets since the serious analytics are done by other users earlier in the process. And, oh, by the way, if the same tool could provide live, updatable dashboards for clients to access directly, that would be nice too. (In a classic case of scope creep, the client later added mapping capabilities to the list, merging this with a project that had been running separately.)
During our initial discussions, I also mentioned that Crystal Xcelsius (now SAP Crystal Dashboard Design) has the very neat ability to embed live charts within Powerpoint documents. This became a requirement too. (Unfortunately, I couldn’t find a way to embed one of those images directly within this post, but you can click here to see a sample embedded in a pdf. Click on the radio buttons to see the different variables. How fun is that?)
2. Identify Options. Based on my own knowledge and a little background research, I built a list of candidate systems. Again, the main criteria were visualization, ease of use and – it nearly goes without saying – low cost. A few were eliminated immediately due to complexity or other reasons. This left:
3. Define the Scenario. I defined a typical analysis for the client: a bar chart comparing index values for four variables across seven customer segments. The simplest bar chart showed all segment values for one variable. Another showed all variables for all segments, sorted first by variable and then by segment, with the segments ranked according to response rate (one of the four variables). This would show how the different variables related to response rate. It looked like this:
The tasks to execute the scenario were:
- connect to a simple Excel spreadsheet (seven segments x four variables.)*
- create a bar chart showing data for all segments for a single variable.
- create a bar chart showing data for all segments for all variables, clustered by variable and sorted by the value of one variable (response index).
- provide users with an option to select or highlight individual variables and segments.
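The data preparation behind those tasks can be sketched in a few lines of Python (the segment names and index values here are invented stand-ins, not the client's actual data):

```python
# Invented index values: four variables across seven segments.
segments = ["A", "B", "C", "D", "E", "F", "G"]
data = {
    "response_index":  [140, 85, 120, 60, 100, 95, 130],
    "revenue_index":   [130, 90, 110, 70, 100, 105, 125],
    "tenure_index":    [90, 110, 95, 120, 100, 80, 105],
    "frequency_index": [125, 80, 115, 65, 100, 90, 135],
}

# Rank segments by response index, highest first -- the ordering
# the scenario asks each charting tool to reproduce.
order = sorted(range(len(segments)),
               key=lambda i: data["response_index"][i], reverse=True)
ranked_segments = [segments[i] for i in order]

# Cluster by variable: one group of bars per variable, with segments
# in ranked order within each cluster.
clusters = {var: [values[i] for i in order] for var, values in data.items()}

print(ranked_segments)
```

The point of the scenario test is how long each tool takes to produce this ordering and clustering through its own interface; the logic itself, as the sketch shows, is trivial.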
4. Results. I was able to download free or trial versions of each system. I installed these and then timed how long it took to complete the scenario, or at least to get as far as I could before reaching the frustration level where a typical end-user would stop.
I did my best to approach each system as if I’d never seen it before, although in fact, I’ve done at least some testing on every product except SpotFire, and have worked extensively with Xcelsius and QlikView. As a bit of a double-check, I dragooned one of my kids into testing one system when he was innocently visiting home over Thanksgiving: his time was actually quicker than mine. I took that as proof that I'd tested fairly.
Notes from the tests are below.
- Xcelsius (SAP Crystal Dashboard Design): 3 hours to set up bar chart with one variable and allowing selection of individual variables. Did not attempt to create chart showing multiple variables. (Note: most of the time was spent figuring out how Xcelsius did the variable selection, which is highly unintuitive. I finally had to cheat and use the help functions, and even then it took at least another half hour. Remember that Xcelsius is a system I’d used extensively in the past, so I already had some idea of what I was looking for. On the other hand, I reproduced that chart in just a few minutes when I was creating the pdf for this post. Xcelsius would work very well for a frequent user, but it’s not for people who use it only occasionally.)
- Advizor: 3/4 hour to set up bar chart. Able to show multiple variables on same chart but not to group or sort by variable. Not obvious how to make changes (must click on a pull down menu to expose row of icons).
- Spotfire: 1/2 hour to set up bar chart. Needed to read Help to put multiple lines or bars on same chart. Could not find way to sort or group by variable.
- QlikView: 1/4 hour to set up bar chart (using default wizard). Able to add multiple variables and sort segments by response index, but could not cluster by variable or expose menu to add/remove variables. Not obvious how to make changes (must right-click to open properties box – I wouldn’t have known this without my prior QlikView experience).
- Lyzasoft: 1/4 hour to set up bar chart with multiple variables. Able to select individual variables, cluster by variable and sort by response index, but couldn’t easily assign different colors to different variables (required for legibility). Annoying lag each time chart is redrawn.
- Tableau: 1/4 hour to set up bar chart with multiple variables. Able to select individual variables, cluster by variable, and sort by variable. Only system to complete the full scenario.
5. Final Assessment. Although the scenario nicely addressed ease of use, there were other considerations that played into the final decision. These required a bit more research and some trade-offs, particularly regarding the Xcelsius-style ability to embed interactive charts within a Powerpoint slide. No one else on the list could do this without either loading additional software (often a problem when end-user PCs are locked down by corporate IT) or accessing an external server (a problem for mobile users and with license costs).
The following table shows my results:
6. Next Steps. The result of this project wasn’t a final selection, but a recommendation of a couple of products to explore in depth. There were still plenty of details to research and confirm. However, starting with a scenario greatly sped up the work, narrowed the field, and ensured that the final choice would meet operational requirements. That was well worth the effort. I strongly suggest you do the same.
____________________________________
* The actual data looked like this. Here's a link if you want to download it:
Tuesday, December 07, 2010
Tableau Software Adds In-Memory Database Engine
Summary: Tableau has added a large-scale in-memory database engine to its data analysis and visualization software. This makes it a lot more powerful.
Hard to believe, but it's more than three years since my review of Tableau Software’s data analysis system. Tableau has managed quite well without my attention: sales have doubled every year and should exceed $40 million in 2010; they have 5,500 clients, 60,000 users and 185 employees; and they plan to add 100 more employees next year. Ah, I knew them when.
What really matters from a user perspective is that the product itself has matured. Back in 2007, my main complaint was that Tableau lacked a data engine. The system either issued SQL queries against an external database or imported a small data set into memory. This meant response time depended on the speed of the external system and that users were constrained by the external files' data structure.
Tableau’s most recent release (6.0, launched on November 10) finally changes this by adding a built-in data engine. Note that I said “changes” rather than “fixes”, since Tableau has obviously been successful without this feature. Instead, the vendor has built connectors for high-speed analytical databases and appliances including Hyperion Essbase, Greenplum, Netezza, PostgreSQL, Microsoft PowerPivot, ParAccel, Sybase IQ, Teradata, and Vertica. These provide good performance on any size database, but they still leave the Tableau user tethered to an external system. An internal database allows much more independence and offers high performance when no external analytical engine is present. This is a big advantage since such engines are still relatively rare and, even if a company has one, it might not contain all the right data or be accessible to Tableau users.
Of course, this assumes that Tableau's internal database is itself a high-speed analytical engine. That’s apparently the case: the engine is home-grown but it passes the buzzword test (in-memory, columnar, compressed) and – at least in an online demo – offered near-immediate response to queries against a 7 million row file. It also supports multi-table data structures and in-memory “blending” of disparate data sources, further freeing users from the constraints of their corporate environment. The system is also designed to work with data sets that are too large to fit into memory: it will use as much memory as possible and then access the remaining data from disk storage.
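For readers curious why “in-memory, columnar” passes the buzzword test, here's a toy Python illustration (emphatically not Tableau's actual engine) of the basic idea: an analytical query over a column store only touches the columns it needs, while a row store drags every field of every row through memory. In pure Python the speed gap is modest, but in a compiled engine the cache and compression benefits are dramatic:

```python
# A toy table: one million rows, three fields (region, channel, amount).
n = 1_000_000
rows = [(i % 50, i % 7, float(i % 100)) for i in range(n)]  # row store

# The same data as a column store: one list per column.
region = [r[0] for r in rows]
amount = [r[2] for r in rows]

# Analytical query: sum 'amount' where 'region' == 3.
# Row store: every field of every row is touched.
row_total = sum(r[2] for r in rows if r[0] == 3)

# Column store: only the two columns involved are scanned, and
# runs of repeated values compress well, shrinking what must be read.
col_total = sum(a for rg, a in zip(region, amount) if rg == 3)

print(row_total, col_total)
```

The same layout is why columnar engines compress so effectively: values within a column are homogeneous, so run-length and dictionary encoding shrink them far more than they would mixed-type rows.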
Tableau has added some nice end-user enhancements too. These include:
- new types of combination charts;
- ability to display the same data at different aggregation levels on the same chart (e.g., average as a line and individual observations as points);
- more powerful calculations including multi-pass formulas that can calculate against a calculated value;
- user-entered parameters to allow what-if calculations.
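A multi-pass calculation of the kind listed above -- a formula computed against an already-calculated value -- looks roughly like this generic Python sketch (the sales figures are invented; Tableau expresses this in its own formula language):

```python
# Pass 1: aggregate individual observations into calculated values.
sales = {"East": [120.0, 95.0, 140.0], "West": [80.0, 60.0, 100.0]}
regional_avg = {r: sum(v) / len(v) for r, v in sales.items()}
overall_avg = (sum(x for v in sales.values() for x in v)
               / sum(len(v) for v in sales.values()))

# Pass 2: a second formula computed against the pass-1 results,
# e.g. each region's deviation from the overall average.
deviation = {r: avg - overall_avg for r, avg in regional_avg.items()}

print(deviation)
```

Without multi-pass support, the second formula would have to be pushed back into the source database; doing it inside the tool is what keeps occasional users self-sufficient.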
The Tableau interface hasn’t changed much since 2007. But that's okay since I liked it then and still like it now. In fact, it won a little test we conducted recently to see how far totally untrained users could get with a moderately complex task. (I'll give more details in a future post.)
Tableau can run either as traditional software installed on the user's PC or on a server accessed over the Internet. Pricing for a single user desktop system is still $999 for a version that can connect to Excel, Access or text files, and has risen slightly to $1,999 for one that can connect to other databases. These are perpetual license fees; annual maintenance is 20%.
There’s also a free reader that lets unlimited users download and read workbooks created in the desktop system. The server version allows multiple users to access workbooks on a central server. Pricing for this starts at $10,000 for ten users and you still need at least one desktop license to create the workbooks. Large server installations can avoid per-user fees by purchasing CPU-based licenses, which are priced north of $100,000.
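The break-even between the two server licensing models is easy to work out from the listed prices (taking $100,000 as the floor of the “north of $100,000” CPU figure):

```python
PER_USER_BLOCK = 10_000   # $10,000 per ten-user block
USERS_PER_BLOCK = 10
CPU_LICENSE = 100_000     # floor of the "north of $100,000" CPU price

def per_user_cost(users):
    # Seats are sold ten at a time, so round the block count up.
    blocks = -(-users // USERS_PER_BLOCK)
    return blocks * PER_USER_BLOCK

# Find the first user count where per-user licensing costs at
# least as much as a CPU license.
break_even = next(u for u in range(10, 1000, 10)
                  if per_user_cost(u) >= CPU_LICENSE)
print(break_even)
```

So at the floor price, CPU licensing starts to pay for itself at roughly 100 users; since the real CPU price is higher, the practical crossover sits somewhat above that.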
Although the server configuration makes Tableau a candidate for some enterprise reporting tasks, it can't easily limit different users to different data, which is a typical reporting requirement. So Tableau is still primarily a self-service tool for business and data analysts. The new database, calculation and data blending features add considerably to their power.
Hard to believe, but it's more than three years since my review of Tableau Software’s data analysis system. Tableau has managed quite well without my attention: sales have doubled every year and should exceed $40 million in 2010; they have 5,500 clients, 60,000 users and 185 employees; and they plan to add 100 more employees next year. Ah, I knew them when.
What really matters from a user perspective is that the product itself has matured. Back in 2007, my main complaint was that Tableau lacked a data engine. The system either issued SQL queries against an external database or imported a small data set into memory. This meant response time depended on the speed of the external system and that users were constrained by the external files' data structure.
Tableau’s most recent release (6.0, launched on November 10) finally changes this by adding a built-in data engine. Note that I said “changes” rather than “fixes”, since Tableau has obviously been successful without this feature. Instead, the vendor has built connectors for high-speed analytical databases and appliances including Hyperion Essbase, Greenplum, Netezza, PostgreSQL, Microsoft PowerPivot, ParAccel, Sybase IQ, Teradata, and Vertica. These provide good performance on any size database, but they still leave the Tableau user tethered to an external system. An internal database allows much more independence and offers high performance when no external analytical engine is present. This is a big advantage since such engines are still relatively rare and, even if a company has one, it might not contain all the right data or be accessible to Tableau users.
Of course, this assumes that Tableau's internal database is itself a high-speed analytical engine. That’s apparently the case: the engine is home-grown but it passes the buzzword test (in-memory, columnar, compressed) and – at least in an online demo – offered near-immediate response to queries against a 7 million row file. It also supports multi-table data structures and in-memory “blending” of disparate data sources, further freeing users from the constraints of their corporate environment. The system is also designed to work with data sets that are too large to fit into memory: it will use as much memory as possible and then access the remaining data from disk storage.
Tableau has added some nice end-user enhancements too. These include:
- new types of combination charts;
- ability to display the same data at different aggregation levels on the same chart (e.g., average as a line and individual observations as points);
- more powerful calculations including multi-pass formulas that can calculate against a calculated value
- user-entered parameters to allow what-if calculations
The Tableau interface hasn’t changed much since 2007. But that's okay since I liked it then and still like it now. In fact, it won a little test we conducted recently to see how far totally untrained users could get with a moderately complex task. (I'll give more details in a future post.)
Tableau can run either as traditional software installed on the user's PC or on a server accessed over the Internet. Pricing for a single user desktop system is still $999 for a version that can connect to Excel, Access or text files, and has risen slightly to $1,999 for one that can connect to other databases. These are perpetual license fees; annual maintenance is 20%.
There’s also a free reader that lets unlimited users download and read workbooks created in the desktop system. The server version allows multiple users to access workbooks on a central server. Pricing for this starts at $10,000 for ten users and you still need at least one desktop license to create the workbooks. Large server installations can avoid per-user fees by purchasing CPU-based licenses, which are priced north of $100,000.
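Putting the quoted prices together, a minimal first deployment pencils out roughly as follows. This is my own back-of-envelope arithmetic, and I'm assuming the 20% maintenance rate applies to both license types.

```python
# Back-of-envelope first-year cost from the prices quoted above.
# Assumption (mine): the 20% maintenance applies to both license types.
desktop_license = 1999     # database-connected desktop version
server_license = 10000     # entry server tier, ten users
maintenance_rate = 0.20

first_year_licenses = desktop_license + server_license
annual_maintenance = maintenance_rate * first_year_licenses

print(first_year_licenses)          # 11999
print(round(annual_maintenance))    # 2400
```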
Although the server configuration makes Tableau a candidate for some enterprise reporting tasks, it can't easily limit different users to different data, which is a typical reporting requirement. So Tableau is still primarily a self-service tool for business and data analysts. The new database, calculation and data blending features add considerably to their power.
Monday, December 06, 2010
QlikView's New Release Focuses on Enterprise Deployment
I haven’t written much about QlikView recently, partly because my own work hasn’t required using it and partly because it’s now well enough known that other people cover it in depth. But it remains my personal go-to tool for data analysis and I do keep an eye on it. The company released QlikView 10 in October and Senior Director of Product Marketing Erica Driver briefed me on it a couple of weeks ago. Here’s what’s up.
- Business is good. If you follow the industry at all, you already know that QlikView had a successful initial public stock offering in July. Driver said the purpose was less to raise money than to gain the credibility that comes from being a public company. (The share price has nearly doubled since launch, incidentally.) The company has continued its rapid growth, exceeding 15,000 clients and showing 40% higher revenue vs. the prior year in its most recent quarter. Total revenues will easily exceed $200 million for 2010. Most clients are still mid-sized businesses, which is QlikView’s traditional stronghold. But more big enterprises are signing on as well.
- Features are stable. Driver walked me through the major changes in QlikView 10. From an end-user perspective, none were especially exciting -- which simply confirms that QlikView already had pretty much all the features it needed.
Even the most intriguing user-facing improvements are pretty subtle. For example, there’s now an “associative search” feature that means I can enter client names in a sales rep selection box and the system will find the reps who serve those clients. Very clever and quite useful if you think about it, but I’m guessing you didn’t fall off your chair when you heard the news.
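If that's hard to picture, here's a toy version of the reverse lookup with invented reps and clients -- obviously nothing like QlikView's actual associative engine, which works across the whole data model.

```python
# Toy reverse lookup in the spirit of "associative search": search by
# client name, get back the reps who serve them. Data is invented.
reps_to_clients = {
    "Alice": ["Acme", "Globex"],
    "Bob":   ["Initech"],
    "Carol": ["Acme", "Umbrella"],
}

def reps_for_clients(client_names):
    """Return reps serving any of the named clients."""
    return sorted(
        rep for rep, clients in reps_to_clients.items()
        if any(name in clients for name in client_names)
    )

print(reps_for_clients(["Acme"]))   # ['Alice', 'Carol']
```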
The other big enhancement was a “mekko” chart, which is a bar chart where the width of the bar reflects a data dimension. So, you could have a bar chart where the height represents revenue and the width represents profitability. Again, kinda neat but not earth-shattering.
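The width calculation behind a mekko chart is simple enough to sketch in a few lines (all figures invented, purely to show the normalization):

```python
# Toy mekko-style layout: bar heights from one measure, bar widths from
# another, with widths normalized to tile the chart. Figures invented.
revenue = {"Widgets": 50, "Gadgets": 30, "Gizmos": 20}   # -> bar heights
profit  = {"Widgets": 10, "Gadgets": 15, "Gizmos": 5}    # -> bar widths

total_profit = sum(profit.values())
widths = {name: p / total_profit for name, p in profit.items()}

print(widths["Gadgets"])   # 0.5 -- Gadgets gets half the chart width
```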
Let me stress again that I’m not complaining: QlikView didn’t need a lot of new end-user features because the existing set was already terrific.
- Development is focused on integration and enterprise support. With features under control, developers have been spending their time on improving performance, integration and scalability. This involves geeky things like a documented data format for faster loads, simpler embedding of QlikView as an app within external Web sites, faster repainting of pages in the AJAX client, more multithreading, centralized user management and section access controls, better audit logging, and prebuilt connectors for products including SAP and Salesforce.com.
There’s also a new API that lets external objects display data from QlikView charts. That means a developer can, say, put QlikView data in a Gantt chart even though QlikView itself doesn’t support Gantt charts. The company has also made it easier to merge QlikView with other systems like Google Maps and SharePoint.
These open up some great opportunities for QlikView deployments, but they depend on sophisticated developers to take advantage of them. In other words, they are not capabilities that a business analyst -- even a power user who's mastered QlikView scripts -- will be able to handle. They mark the extension of QlikView from stand-alone dashboards to a system that is managed by an IT department and integrated with the rest of the corporate infrastructure.
This is exactly the "pervasive business intelligence" that industry gurus currently tout as the future of BI. QlikView has correctly figured out that it must move in this direction to continue growing, and in particular to compete against traditional BI vendors at large enterprises. That said, I think QlikView still has plenty of room to grow within the traditional business intelligence market as well.
- Mobile interface. This actually came out in April and it’s just not that important in the grand scheme of things. But if you’re as superficial as I am, you’ll think it’s the most exciting news of all. Yes, you can access QlikView reports on the iPad and on Android and BlackBerry smartphones, including those touchscreen features you’ve wanted since seeing Minority Report. The iPad version will even use the embedded GPS to automatically select localized information. How cool is that?
Thursday, December 02, 2010
HubSpot Expands Its Services But Stays Focused on Small Business
Summary: HubSpot has continued to grow its customer base and expand its product. It's looking more like a conventional small-business marketing automation system every day.
You have to admire a company that defines a clear strategy and methodically executes it. HubSpot has always aimed to provide small businesses with one easy-to-use system for all their marketing needs. The company began with search engine optimization to attract traffic, and added landing pages, blogging, Web hosting, lead scoring, and Salesforce.com integration. Since my July 2009 review, HubSpot has further extended the system to include social media monitoring and sharing, limited list segmentation and simple drip marketing campaigns. It is now working on more robust outbound email, support for mobile Web pages, and APIs for outside developers to create add-on applications.
The extension into email is a particularly significant step for HubSpot, placing it in more direct competition with other small business marketing systems like Infusionsoft, OfficeAutoPilot and Genoo. Of course, this competition was always implicit – few small businesses would have purchased HubSpot plus one of those products. But HubSpot’s “inbound marketing” message was different enough that most buyers would have decided based on their marketing priorities (Web site or email?). As both sets of systems expand their scope, their features will overlap more and marketers will compare them directly.
Choices will be based on individual features and supporting services. In terms of features, HubSpot still offers unmatched search engine optimization and only Genoo shares its ability to host a complete Web site (as opposed to just landing pages and microsites). On the other hand, HubSpot’s lead scoring, email and nurture campaigns are quite limited compared with its competitors. Web analytics, social media and CRM integration seem roughly equivalent.
One distinct disadvantage is that most small business marketing automation systems offer their own low-cost alternative to Salesforce.com, while HubSpot does not. HubSpot’s Kirsten Knipp told me the company has no plans to add this, relying instead on easy integration with systems like SugarCRM and Zoho. But I wouldn’t be surprised if they changed their minds.
In general, though, HubSpot’s growth strategy seems to rely more on expanding services than features. This makes sense: like everyone else, they've recognized that most small businesses (and many not-so-small businesses) don’t know how to make good use of a marketing automation program. This makes support essential for both selling and retaining them as customers.
One aspect of service is consulting support. HubSpot offers three pricing tiers that add service as well as features as the levels increase. The highest tier, still a relatively modest $18,000 per year, includes a weekly telephone consultation.
The company has also set up new programs to help recruit and train marketing experts who can resell the product and/or use it to support their own clients. These programs include sales training, product training, and certification. They should both expand HubSpot’s sales and provide experts to help buyers that HubSpot sells directly.
So far, HubSpot’s strategy has been working quite nicely. The company has been growing at a steady pace, reaching 3,500 customers in October with 98% monthly retention. A couple hundred of these are at the highest pricing tier, with the others split about evenly between the $3,000 and $9,000 levels. This is still fewer clients than Infusionsoft, which had more than 6,000 clients as of late September. But it's probably more than any other marketing automation vendor and impressive by any standard.
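For perspective on that 98% figure: monthly and annual retention are very different animals. A quick compounding check (my arithmetic, ignoring re-activations, upgrades and cohort effects):

```python
# What 98% *monthly* retention compounds to over a full year.
monthly_retention = 0.98
annual_retention = monthly_retention ** 12

print(round(annual_retention, 3))   # 0.785 -- i.e., roughly 78.5% of a
                                    # January cohort is still around in December
```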